Aug 05 2017
 

It may be something to do with my long Unix background, or maybe it’s because my first system administration job saw me administer systems over insanely low link speeds, but I’m a big fan of being able to use the CLI whenever I’m in a hurry or just want to do something small. GUIs may be nice, but CLIs are fun.

Under NetWorker 8 and below, if you wanted to run a server initiated backup job from the command line, you’d use the savegrp command. From NetWorker 9 onwards, groups are there only as containers, and what you really need to work on are workflows.


There’s a command for that – nsrworkflow.

At heart it’s a very simple command:

# nsrworkflow -p policy -w workflow

That’s enough to kick off a backup job. But there are some additional options that make it more useful, particularly in larger environments. To start with, you’ve got the -a option, which I really like. That tells nsrworkflow you want to perform an ‘adhoc’ execution of the job. Why is that important? Say you’ve got a job you really need to run today but it’s configured to skip – running it as an adhoc execution will disregard the skip for you.
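
For instance, an adhoc run of the Gold policy and Finance workflow used in the examples below would simply be:

# nsrworkflow -p Gold -w Finance -a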

The -A option allows you to specify overrides to individual actions. For instance, if I wanted to run a workflow today from the command line as a full rather than an incremental, I might use something like the following:

# nsrworkflow -p Gold -w Finance -A "backup -l full"

The -A option there effectively allows me to specify overrides for individual actions – name the action (backup) and name the override (-l full).

Another useful option is -c component, which allows you to run the job against just a single component or a small list of components – e.g., clients. Extending from the above, if I wanted to run a full for a single client called orilla, it might look as follows:

# nsrworkflow -p Gold -w Finance -c orilla -A "backup -l full"

Note that specifying the action there doesn’t mean it’s the only action you’ll run – you’ll still run the other actions in the workflow (e.g., a clone operation, if it’s configured) – it just means you’re specifying an override for the nominated action.

For virtual machines, the easiest way I’ve found to start an individual client is to pass its vm ID as the component – effectively what the saveset name is for a virtual machine backed up via a proxy. Now, to get that name, you have to do a bit of mminfo scripting:

# mminfo -k -r vmname,name

 vm_name name
vulcan vm:500f21cd-5865-dc0d-7fe5-9b93fad1a059:caprica.turbamentis.int
vulcan vm:500f21cd-5865-dc0d-7fe5-9b93fad1a059:caprica.turbamentis.int
win01 vm:500f444e-4dda-d29d-6741-d23d6169f158:caprica.turbamentis.int
win01 vm:500f444e-4dda-d29d-6741-d23d6169f158:caprica.turbamentis.int
picon vm:500f6871-2300-47d4-7927-f3c799ee200b:caprica.turbamentis.int
picon vm:500f6871-2300-47d4-7927-f3c799ee200b:caprica.turbamentis.int
win02 vm:500ff33e-2f70-0b8d-e9b2-6ef7a5bf83ed:caprica.turbamentis.int
win02 vm:500ff33e-2f70-0b8d-e9b2-6ef7a5bf83ed:caprica.turbamentis.int
vega vm:5029095d-965e-2744-85a4-70ab9efcc312:caprica.turbamentis.int
vega vm:5029095d-965e-2744-85a4-70ab9efcc312:caprica.turbamentis.int
krell vm:5029e15e-3c9d-18be-a928-16e13839f169:caprica.turbamentis.int
krell vm:5029e15e-3c9d-18be-a928-16e13839f169:caprica.turbamentis.int
krell vm:5029e15e-3c9d-18be-a928-16e13839f169:caprica.turbamentis.int

What you’re looking for is the vm:<UUID> portion, stripping out the trailing :vcenter at the end of the name.
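
If you’d rather not eyeball the output, a quick pipeline along these lines will pull out the ID for a given virtual machine and strip the vCenter suffix for you – just a rough sketch using standard shell tools, and it assumes the two-column output format shown above (krell being the client from the example):

# mminfo -k -r vmname,name | awk '$1 == "krell" {sub(/:[^:]*$/, "", $2); print $2}' | sort -u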

Now, I’m a big fan of not running extra commands unless I really need to, so I’ve actually got a vmmap.pl Perl script which you’re free to download and adapt/use as you need to streamline that process. Since my lab is pretty basic, the script is too, though I’ve done my best to make the code straightforward. You simply run vmmap.pl as follows:

[root@orilla bin]# vmmap.pl -c krell
vm:5029e15e-3c9d-18be-a928-16e13839f169

With the ID in hand, we can invoke nsrworkflow as follows:

# nsrworkflow -p VMware -w "Virtual Machines" -c vm:5029e15e-3c9d-18be-a928-16e13839f169
133550:nsrworkflow: Starting Protection Policy 'VMware' workflow 'Virtual Machines'.
123316:nsrworkflow: Starting action 'VMware/Virtual Machines/backup' with command: 'nsrvproxy_save -s orilla.turbamentis.int -j 705080 -L incr -p VMware -w "Virtual Machines" -A backup'.
123321:nsrworkflow: Action 'VMware/Virtual Machines/backup's log will be in '/nsr/logs/policy/VMware/Virtual Machines/backup_705081.raw'.
123325:nsrworkflow: Action 'VMware/Virtual Machines/backup' succeeded.
123316:nsrworkflow: Starting action 'VMware/Virtual Machines/clone' with command: 'nsrclone -a "*policy name=VMware" -a "*policy workflow name=Virtual Machines" -a "*policy action name=clone" -s orilla.turbamentis.int -b BoostClone -y "1 Months" -o -F -S'.
123321:nsrworkflow: Action 'VMware/Virtual Machines/clone's log will be in '/nsr/logs/policy/VMware/Virtual Machines/clone_705085.raw'.
123325:nsrworkflow: Action 'VMware/Virtual Machines/clone' succeeded.
133553:nsrworkflow: Workflow 'VMware/Virtual Machines' succeeded.

Of course, if you’re in front of NMC, you can also start individual clients from the GUI:

Starting an Individual Client

But it’s always worth knowing what your command line options are!

NetWorker 9.2 Capacity Measurement

Aug 03 2017
 

As I’ve mentioned in the past, there are a few different licensing models for NetWorker, but capacity licensing (e.g., 100 TB front end backup size) gives considerable flexibility, effectively enabling all product functionality within a single license, thereby allowing NetWorker usage to adapt to suit the changing needs of the business.


In the past, measuring utilisation has typically required either the use of DPA or asking your DellEMC account team to review the environment and provide a report. NetWorker 9.2, however, gives you a new, self-managed option – the ability to run, whenever you want, a capacity measurement report to determine what your utilisation ratio is.

This is done through a new command line tool, nsrcapinfo, which is incredibly simple to run. In fact, running it without any options at all will give the default 60 day report, providing utilisation details for each of the key data types as well as a summary. For instance, against my lab server, here’s the output:
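
Running it with no arguments at all is literally just:

# nsrcapinfo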

<?xml version="1.0" encoding="UTF8" standalone="yes" ?>
<!--
~ Copyright (c) 2017 Dell EMC Corporation. All Rights Reserved.
~
~ This software contains the intellectual property of Dell EMC Corporation or is licensed to
~ Dell EMC Corporation from third parties. Use of this software and the intellectual property
~ contained therein is expressly limited to the terms and conditions of the License
~ Agreement under which it is provided by or on behalf of Dell EMC.
-->
<Capacity_Estimate_Report>
<Time_Stamp>2017-08-02T21:21:18Z</Time_Stamp>
<Clients>13</Clients>
<DB2>0.0000</DB2>
<Informix>0.0000</Informix>
<IQ>0.0000</IQ>
<Lotus>0.0000</Lotus>
<MySQL>0.0000</MySQL>
<Sybase>0.0000</Sybase>
<Oracle>0.0000</Oracle>
<SAP_HANA>0.0000</SAP_HANA>
<SAP_Oracle>0.0000</SAP_Oracle>
<Exchange_NMM8.x>0.0000</Exchange_NMM8.x>
<Exchange_NMM9.x>0.0000</Exchange_NMM9.x>
<Hyper-V>0.0000</Hyper-V>
<SharePoint>0.0000</SharePoint>
<SQL_VDI>0.0000</SQL_VDI>
<SQL_VSS>0.0000</SQL_VSS>
<Meditech>0.0000</Meditech>
<Other_Applications>2678.0691</Other_Applications>
<Unix_Filesystems>599.9214</Unix_Filesystems>
<VMware_Filesystems>360.3535</VMware_Filesystems>
<Windows_Filesystems>27.8482</Windows_Filesystems>
<Total_Largest_Filesystem_Fulls>988.1231</Total_Largest_Filesystem_Fulls>
<Peak_Daily_Applications>2678.0691</Peak_Daily_Applications>
<Capacity_Estimate>3666.1921</Capacity_Estimate>
<Unit_of_Measure_Bytes_per_GiB>1073741824</Unit_of_Measure_Bytes_per_GiB>
<Days_Measured>60</Days_Measured>
</Capacity_Estimate_Report>

That’s in XML by default – and the numbers are in GiB.
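
If you only want a single figure out of that XML – say, the overall capacity estimate – you can filter it with standard shell tools rather than any built-in nsrcapinfo option. A quick sed sketch:

# nsrcapinfo | sed -n 's:.*<Capacity_Estimate>\(.*\)</Capacity_Estimate>.*:\1:p'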

If you do fulls on longer cycles than the default 60 day measurement window, you can extend the data sampling range by using -d nDays in the command (e.g., “nsrcapinfo -d 90” would provide a measurement over a 90 day window). You can also, if you wish, generate additional reports for further analysis (see the Command Reference Guide, or man nsrcapinfo if you’re on Linux, for the full details). One of those reports that I think will be quite popular with backup administrators is the client report. An example of that is below:

[root@orilla ~]# nsrcapinfo -r clients
"Hostname", "Client_Capacity_GiB", "Application_Names" 
"abydos.turbamentis.int", "2.3518", "Unix_Filesystems"
"vulcan", "16.0158", "VMware_Filesystems"
"win01", "80.0785", "VMware_Filesystems"
"picon", "40.0394", "VMware_Filesystems"
"win02", "80.0788", "VMware_Filesystems"
"vega", "64.0625", "VMware_Filesystems"
"test02", "16.0157", "VMware_Filesystems"
"test03", "16.0157", "VMware_Filesystems"
"test01", "16.0157", "VMware_Filesystems"
"krell", "32.0314", "VMware_Filesystems"
"faraway.turbamentis.int", "27.8482", "Windows_Filesystems"
"orilla.turbamentis.int", "1119.5321", "Other_Applications Unix_Filesystems"
"rama.turbamentis.int", "2156.1067", "Other_Applications Unix_Filesystems"

That’s a straight-up simple view of the front end TB (FETB) estimation for each client you’re protecting in your environment.

There you have it – capacity measurement in NetWorker as a native function in version 9.2.

NetWorker 9.2 – A Focused Release

Jul 29 2017
 

NetWorker 9.2 has just been released. Now, normally I pride myself on having kicked the tyres on a new release for weeks before it comes out via the beta programmes, but unfortunately my June, June and July taught me new definitions of busy (I was busy enough that I did June twice), so instead I’ll be rolling the new release into my lab this weekend, after I’ve done this initial post about it.


I’ve been working my way through NetWorker 9.2’s new feature set, though, and it’s impressive.

As you’ll recall, NetWorker 9.1 introduced NVP, or vProxy – the replacement to the Virtual Backup Appliance introduced in NetWorker 8. NVP is incredibly efficient for backup and recovery operations, and delivers hyper-fast file level recovery from image level backups. (Don’t just take my written word for it though – check out this demo where I recovered almost 8,000 files in just over 30 seconds.)

NetWorker 9.2 expands on the virtual machine backup integration by adding the capability to perform Microsoft SQL Server application consistent backup as part of a VMware image level backup. That’s right, application consistent, image level backup. That’s something Avamar has been able to do for a little while now, and it’s now being adopted in NetWorker, too. We’re starting with Microsoft SQL Server – arguably the simplest one to cover, and the most sought after by customers, too – before tackling other databases and applications. In my mind, application consistent image level backup is a pivot point for simplifying data protection – in fact, it’s a topic I covered as an emerging focus for the next several years of data protection in my book, Data Protection: Ensuring Data Availability. I think in particular app-consistent image level backups will be extremely popular in smaller/mid-market customer environments where there’s not guaranteed to be a dedicated DBA team within the IT department.

It’s not just DBAs that get a boost with NetWorker 9.2 – security officers do, too. In prior versions of NetWorker, it was possible to integrate Data Domain Retention Lock via scripting – now in NetWorker 9.2, it’s rolled into the interface itself. This means you’ll be able to establish retention lock controls as part of the backup process. (For organisations not quite able to go down the path of having a full isolated recovery site, this will be a good mid-tier option.)

Beyond DBAs and security officers, those who are interested in backing up to the cloud, or in the cloud, will be getting a boost as well – CloudBoost 2.2 has been introduced with NetWorker 9.2, and this gives Windows 64-bit clients the CloudBoost API as well, allowing a direct to object storage model from both Windows and Linux (which got CloudBoost client direct in an earlier release). What does this mean? Simple: It’s a super-efficient architecture leveraging an absolute minimum footprint, particularly when you’re running IaaS protection in the Cloud itself. Cloud protection gets another option as well – support for DDVE in the Cloud: AWS or Azure.

NMC isn’t left out – as NetWorker continues to scale, there’s more information and data within NMC for an administrator or operator to sort through. If you’ve got a few thousand clients, or hundreds of client groups created for policies and workflows, you might not want to scroll through a long list. Hence, there’s now filtering available in a lot of forms. I’m always a fan of speeding up what I have to do within a GUI, and this will be very useful for those in bigger environments, or who prefer to find things by searching rather than visually eye-balling while scrolling.

If you’re using capacity licensing, otherwise known as Front End TB (FETB) licensing, NetWorker now reports license utilisation estimation. You might think this is a cinch, but it’s only a cinch if you count whitespace everywhere. That’s not something we want done. Still, if you’ve got capacity licensing, NetWorker will now keep track of it for you.

There’s a big commitment within DellEMC for continued development of automation options within the Data Protection products. NetWorker has always enjoyed a robust command line interface, but a CLI can only take you so far. The REST API that was introduced previously continues to be updated. There’s support for the Data Domain Retention Lock integration and the new application consistent image level backup options, just to name a couple of new features.

NetWorker isn’t just about the core functionality, either – there are also the various modules for databases and applications, and they’ve not been left unattended.

SharePoint and Exchange get tighter integration with ItemPoint for granular recovery. Previously it was a two step process to mount the backup and launch ItemPoint – now the NMM recovery interface can automatically start ItemPoint, directing it to the mounted backup copies for processing.

Microsoft SQL Server is still of course supported for traditional backup/recovery operations via the NetWorker Module for Microsoft, and it’s been updated with some handy new features. Backup and recovery operations no longer need Windows administrative privileges in all instances, and you can do database exclusions now via wild-cards – very handy if you’ve got a lot of databases on a server following a particular naming convention and you don’t need to protect them all, or protect them all in a single backup stream. You also get the option during database recovery now to terminate other user access to the database; previously this had to be managed manually by the SQL administrator for the target database – now it can be controlled as part of the recovery process. There’s also a bunch of new options for SQL Always On Availability Groups, and backup promotion.

In addition to the tighter ItemPoint integration mentioned previously for Exchange, you also get the option to do ItemPoint/Granular Exchange recovery from a client that doesn’t have Exchange installed. This is particularly handy when Exchange administrators want to limit what can happen on an Exchange server. Continuing the tight Data Domain Cloud Tier integration, NMM now handles automatic and seamless recall of data from Cloud Tier should it be required as part of a recovery option.

Hyper-V gets some love, too: there are processes to remove stale checkpoints, or merge checkpoints that exceed a particular size. Hyper-V allows a checkpoint disk (a differencing disk – an AVHDX file) to grow to the same size as its original parent disk. However, that can cause performance issues, and when it hits 100% it creates other issues as well. So you can tell NetWorker during NMM Hyper-V backups to inspect the size of Hyper-V differencing disks and automatically merge them if they exceed a certain watermark. (E.g., you might force a merge when the differencing disk is 25% of the size of the original.) You also get the option to exclude virtual hard disks (either VHD or VHDX format) from the backup process should you desire – very handy for virtual machines that have large disks containing transient or other forms of data that have no requirement for backup.

Active Directory recovery browsing gets a performance boost too, particularly for large AD trees.

SAP IQ (formerly known as Sybase IQ) gets support in NetWorker 9.2 NMDA. You’ll need to be running v16 SP11 and a simplex architecture, but you’ll get a variety of backup and recovery options. A growing trend within database vendors is to allow designation of some data files within the database as read-only, and you can choose to either backup or skip read-only data files as part of a SAP IQ backup, amongst a variety of other options. If you’ve got a traditional Sybase ASE server, you’ll find that there’s now support for backing up database servers with >200 databases on them – either in sequence, or with a configured level of parallelism.

DB2 gets some loving, too – NMDA 9.1 gave support for Power Linux little-endian DB2 environments, but with 9.2 we also get a Boost plugin to allow client-direct/Boost backups for DB2 little-endian environments.

(As always, there’s also various fixes included in any new release, incorporating fixes that were under development concurrently in earlier releases.)

As always, when you’re planning to upgrade NetWorker, there are a few things you should do as a matter of course. There’s a new approach to making sure you’re aware of these steps – when you go to support.emc.com and click to download the NetWorker server installer for either Windows or Linux, you’ll initially find yourself redirected to a PDF: the NetWorker 9.2 Recommendations, Training and Downloads for Customers and Partners. Now, I admit – in my lab I have a tendency sometimes to just leap in and start installing new packages, but when you’re using NetWorker in a real environment, you really do want to make sure you read the documentation and recommendations for upgrades before going ahead with updating your environment. The recommendations guide is only three pages, but it’s three very useful pages – links to technical training, references to the documentation portfolio, where to find NetWorker focused videos on the NetWorker Community and YouTube, and details about licensing and compatibility. There’s also a very quick summary of the differences between NetWorker versions, and finally the download location links are provided.

Additional key documentation you should – in my mind, must – review before upgrading includes the release notes, the compatibility guide, and of course, the ever handy updating from a prior version guide. That’s in addition to checking the standard installation guides.

Now if you’ll excuse me, I have a geeky data protection weekend ahead of me as I upgrade my lab to NetWorker 9.2.

Basics – Using the vSphere Plugin to Add Clients for Backup

Jul 24 2017
 

It’s a rapidly changing trend – businesses increasingly want the various Subject Matter Experts (SMEs) running applications and essential services to be involved in the data protection process. In fact, in the 2016 Data Protection Index, somewhere in the order of 93% of respondents said this was extremely important to their business.

It makes sense, too. Backup administrators do a great job, but they can’t be expected to know everything about every product deployed and protected within the organisation. The old way of doing things was to force the SMEs to learn how to use the interfaces of the backup tools. That doesn’t work so well. Like the backup administrators having their own sphere of focus, so too do the SMEs – they understandably want to use their tools to do their work.

What’s more, if we do find ourselves in a disaster situation, we don’t want backup administrators to become overloaded and a bottleneck to the recovery process. The more those operations are spread around, the faster the business can recover.

So in the modern data protection environment, we have to work together and enable each other.


In a distributed control model, the goal will be for the NetWorker administrator to define the protection policies needed, based on the requirements of the business. Once those policies are defined, enabled SMEs should be able to use their tools to work with those policies.

One of the best examples of that is for VMware protection in NetWorker. Using the plugins provided directly into the vSphere Web Client, the VMware administrators can attach and detach virtual machines from protection policies that have been established in NetWorker, and initiate backups and recoveries as they need.

In the video demo below, I’ll take you through the process whereby the NetWorker administrator defines a new virtual machine backup policy, then the VMware administrator attaches a virtual machine to that policy and kicks it off. It’s really quite simple, and it shows the power that you get when you enable SMEs to interact with data protection from within the comfort of their own tools and interfaces. (Don’t forget to ensure you switch to 720p/HD in order to see what’s going on within the session.)


Don’t forget – if you find the NetWorker Blog useful, you’ll be sure to enjoy Data Protection: Ensuring Data Availability.

Jul 21 2017
 

I want to try something different with this post. Rather than the usual post with screen shots and descriptions, I wanted instead to do a demo video showing just how easy it is to do file level recovery (FLR) from NetWorker VMware Image Level Backup thanks to the new NVP or vProxy system in NetWorker 9.

The video below steps you through the entire FLR process for a Linux virtual machine. (If your YouTube settings don’t default to it, be sure to switch the video to High Def (720) or otherwise the text on the console and within NMC may be difficult to read.)

Don’t forget – if you find the information on the NetWorker Blog useful, I’m sure you’ll get good value out of my latest book, Data Protection: Ensuring Data Availability.

Jul 11 2017
 

NetWorker 9 modules for SQL, Exchange and SharePoint now make use of ItemPoint to support granular recovery.

ItemPoint leverages NetWorker’s ability to live-mount a database or application backup from compatible media, such as Advanced File Type devices or Data Domain Boost.

I thought I’d step through the process of performing a table level recovery out of a SQL Server backup – as you’ll see below, it’s actually remarkably straightforward to run granular recoveries in the new configuration. For my lab setup, I installed the Microsoft 180 day evaluation* license of Windows 2012 R2, and in the same spirit, the 180 day evaluation license for SQL Server 2014 (Standard).

Next off, I created a database and within that database, a table. I grabbed a list of English-language dictionary words and populated a table with rows consisting of the words and a unique ID key – just for something simple to test with.

Installing NetWorker on the Client

After getting the database server and a database ready, the next process was to install the NetWorker client within the Windows instance in order to do backup and recovery. After installing the standard NetWorker filesystem client using the base NetWorker for Windows installer, I went on to install the NetWorker Module for Microsoft Applications, choosing the SQL option.

In case you haven’t installed a NMM v9 plugin yet, I thought I’d annotate/show the install process below.

After you’ve unpacked the NMM zip file, you’ll want to run the appropriate setup file – in this case, NWVSS.

NMM SQL Install 01

You’ll have to do the EULA acceptance, of course.

NMM SQL Install 02

After you’ve agreed and clicked Next, you’ll get to choose what options in NMM you want to install.

NMM SQL Install 03

I chose to run the system configuration checker, and you definitely should too. This is an absolute necessity in my mind – the configuration checker will tell you if something isn’t going to work. It works through a gamut of tests to confirm that the system you’re attempting to install NMM on is compatible, and provides guidance if any of those tests aren’t passed. Obviously as well, since I wanted to do SQL backup and recovery, I also selected the Microsoft SQL option. After this, you click Check to start the configuration check process.

Depending on the size and scope of your system, the configuration checker may take a few minutes to run, but after it completes, you’ll get a summary report, such as below.

NMM SQL Install 04

Make sure to scroll through the summary and note there’s no errors reported. (Errors will have a result of ‘ERROR’ and will be in red.) If there is an error reported, you can click the ‘Open Detailed Report…’ button to open up the full report and see what actions may be available to rectify the issue. In this case, the check was successful, so it was just a case of clicking ‘Next >’ to continue.

NMM SQL Install 05

Next you have to choose whether to configure the Windows firewall. If you’re using a third party firewall product, you’ll typically want to do the firewall configuration manually and choose ‘Do not configure…’. Choose the appropriate option for your environment and click ‘Next >’ to continue again.

NMM SQL Install 06

Here’s where you get to the additional options for the plugin install. I chose to enable the SQL Granular Recovery option, and enabled all the SQL Server Management Studio options, per the above. You’ll get a warning when you click Next here to ensure you’ve got a license for ItemPoint.

NMM SQL Install 07

I verified I did have an ItemPoint license and clicked Yes to continue. If you’re going with granular recovery, you’ll be prompted next for the mount point directories to be used for those recoveries.

NMM SQL Install 08

In this, I was happy to accept the default options and actually start the install by clicking the ‘Install >’ button.

NMM SQL Install 09

The installer will then do its work, and when it completes you’ll get a confirmation window.

NMM SQL Install 10

That’s the install done – the next step of course is configuring a client resource for the backup.

Configuring the Client in NMC

The next step is to create a client resource for the SQL backups. Within NMC, go into the configuration panel, right-click on Clients and choose to create a new client via the wizard. The sequence I went through was as follows.

NMM SQL Config 01

Once you’ve typed the client name in, NetWorker is going to be able to reach out to the client daemons to coordinate configuration. My client was ‘win02’, and as you can see from the client type, a ‘Traditional’ client was the one to pick. Clicking ‘Next >’, you get to choose what sort of backup you want to configure.

NMM SQL Config 02

At this point the NetWorker server has contacted the client nsrexecd process and identified what backup/recovery options are installed on the client. I chose ‘SQL Server’ from the available applications list. ‘Next >’ to continue.

NMM SQL Config 03

I didn’t need to change any options here (I wanted to configure a VDI backup rather than a VSS backup, so I left ‘Block Based Backup’ disabled). Clicking ‘Next >’ from here lets you choose the databases you want to backup.

NMM SQL Config 04

I wanted to backup everything – the entire WIN02 instance, so I left WIN02 selected and clicked ‘Next >’ to continue the configuration.

NMM SQL Config 05

Here you’ll be prompted for the accessing credentials for the SQL backups. Since I don’t run active directory at home, I was just using Windows authentication so in actual fact I entered the ‘Administrator’ username and the password, but you can change it to whatever you need to as part of the backup. Once you’ve got the correct authentication details entered, ‘Next >’ to continue.

NMM SQL Config 06

Here’s where you get to choose SQL specific options for the backup. I elected to skip simple databases for incremental backups, and enabled 6-way striping for backups. ‘Next >’ to continue again.

NMM SQL Config 07

The Wizard then prompts you to confirm your configuration options, and I was happy with them, so I clicked ‘Create’ to actually have the client resource created in NetWorker.

NMM SQL Config 08

The resource was configured without issue, so I was able to click Finish to complete the wizard. After this, it was just a case of adding the client to an appropriate policy and then running that policy from within NMC’s monitoring tab.

NMM SQL Config 09

And that was it – module installed, client resource configured, and backup completed. Next – recovery!

Doing a Granular Recovery

To do a granular recovery – a table recovery – I jumped across via remote desktop to the Windows host and launched SQL Management Studio. First thing, of course, was to authenticate.

NMM SQL GLR 01

Once I’d logged on, I clicked the NetWorker plugin option, highlighted below:

NMM SQL GLR 02

That brought up the NetWorker plugin dialog, and I went straight to the Table Restore tab.

NMM SQL GLR 03

In the table restore tab, I chose the NetWorker server, the SQL server host, the SQL instance, then picked the database I wanted to restore from, as well as the backup. (Because there was only one backup, that was a pretty simple choice.) Next was to click Run to initiate the recovery process. Don’t worry – the Run here refers to running the mount; nothing is actually recovered yet.

NMM SQL GLR 04

While the mounting process runs you’ll get output of the process as it is executing. As soon as the database backup is mounted, the ItemPoint wizard will be launched.

NMM SQL GLR 05

When ItemPoint launches, it’ll prompt via the Data Wizard for the source of the recovery. In this case, work with the NetWorker defaults, as the source type (Folder) and Source Folder will be automatically populated as a result of the mount operation previously performed.

NMM SQL GLR 06

You’ll be prompted to provide the SQL Server details here and whether you want to connect to a single database or the entire server. In this case, I went with just the database I wanted – the Silence database. Clicking Finish then opens up the data browser for you.

NMM SQL GLR 07

You’ll see the browser interface is pretty straightforward – expand the backup down to the Tables area so you can select the table you want to restore.

NMM SQL GLR 08

Within ItemPoint, you don’t so much restore a table as copy it out of the backup region. So you literally can right-click on the table you want and choose ‘Copy’.

NMM SQL GLR 09

Logically then the next thing you do is go to the Target area and choose to paste the table.

NMM SQL GLR 10

Because that table still existed in the database, I was prompted to confirm what the pasted table would be called – in this case, just dbo.ImportantData2. Clicking OK then kicks off the data copy operation.

NMM SQL GLR 11

Here you can see the progress indicator for the copy operation. It keeps you up to date on how many rows have been processed, and the amount of time it’s taken so far.

NMM SQL GLR 12

At the end of the copy operation, you’ll have details provided about how many rows were processed, when it was finished and how long it took to complete. In this case I pulled back 370,101 rows in 21 seconds. Clicking Close will return you to the NetWorker Plugin where the backup will be dismounted.

NMM SQL GLR 13

And there you have it. Clicking “Close” will close down the plugin in SQL Management Studio, and your table level recovery has been completed.

ItemPoint GLR for SQL Server is really quite straightforward, and I heartily recommend the investment in the ItemPoint aspect of the plugin so as to get maximum benefit out of your SQL, Exchange or SharePoint backups.


* I have to say, it really irks me that Microsoft don’t have any OS pricing for “non-production” use. I realise the why – that way too many licenses would be finagled into production use. But it makes maintaining a home lab environment a complete pain in the posterior. Which is why, folks, most of my posts end up being around Linux, since I can run CentOS for free. I’d happily pay a couple of hundred dollars for Windows server licenses for a lab environment, but $1000-$2000? Ugh. I only have limited funds for my home lab, and it’s no good exhausting your budget on software if you then don’t have hardware to run it on…

NetWorker 9.1.1 gets out the door

May 02 2017
 

I had a fairly full-on weekend so I missed this one – NetWorker 9.1.1 is now available.

Being a minor release, this one is focused on general improvements and currency, as opposed to introducing a wealth of new features.


There’s some really useful updates around NMC, such as:

  • Performance/response improvements
  • Option for NMC to retrieve a vProxy support bundle for you
  • NMC now shows whenever the NetWorker server is running in service mode
  • NMC will give you a list of virtual machines backed up and skipped
  • NMC recoveries now highlight the calendar dates that are available to select backups to recover from

Additionally, NDMP and NMDA get some updates as well:

  • Some NDMP application options can now be set at the NetWorker client resource level, rather than having to establish them as an environment variable
  • NMDA for SAP/Oracle and Oracle/RMAN get more compact debug logs
  • NMDA for Sybase can now recover log-tail backups.

Finally, there’s the version currency:

  • NetWorker Server High Availability is now supported on SuSE 12 SP2 with HAE, and RHEL 7.3 in a High Availability Cluster (with Pacemaker).
  • NVP/vProxy supports vSphere 6.0u3
  • Meditech module supports Unity 4.1 and RecoverPoint 5.0.

As always for upgrades, make sure you read the release notes before diving in.


Also, don’t forget my new book is out: Data Protection: Ensuring Data Availability. It’s the perfect resource for any data protection architect.

Jan 13 2017
 

Introduction

There’s something slightly deceptive about the title for my blog post. Did you spot it?

It’s the ‘vs’. It’s a common mistake to think that Cloud Boost and Cloud Tier compete with one another. That’s like suggesting a Winnebago and a hatchback compete with each other. Yes, they both can have one or more people riding in them and they can both be used to get you around, but the actual purpose of each is typically quite different.

It’s the same story when you look at Cloud Boost and Cloud Tier. Of course, both can move data from A to B. But the reason behind each, the purpose for each is quite different. (Does that mean there’s no overlap? Not necessarily. If you need to go on a 500km holiday and sleep in the car, you can do that in a hatchback or a Winnebago, too. You can often get X to do Y even if it wasn’t built with that in mind.)

So let’s examine them, and look at their workflows as well as a few usage examples.

Cloud Boost

First off, let’s consider Cloud Boost. Version 1 was released in 2014, and since then development has continued to the point where CloudBoost now looks like the following:

Cloud Boost Workflow

Cloud Boost exists to allow NetWorker (or NetBackup or Avamar) to write deduplicated data out to cloud object storage, regardless of whether that’s on-premises* in something like ECS, or writing out to a public cloud’s object storage system, like Virtustream Storage or Amazon S3. When Cloud Boost was first introduced back in 2014, the Cloud Boost appliance was also a storage node and data had to be cloned from another device to the Cloud Boost storage node, which would push data out to object. Fast forward a couple of years, and with Cloud Boost 2.1 introduced in the second half of 2016, we’re now at the point where there’s a Cloud Boost API sitting in NetWorker clients allowing full distributed data processing, with each client talking directly to the object storage – the Cloud Boost appliance now just facilitates the connection.

In the Cloud Boost model, regardless of whether we’re backing up in a local datacentre and pushing to object, or whether all the systems involved in the backup process are sitting in public cloud, the actual backup data never lands on conventional block storage – after it is deduplicated, compressed and encrypted it lands first and only in object storage.

Cloud Tier

Cloud Tier is new functionality released in the Data Domain product range – it became available with Data Domain OS v6, released in the second half of 2016. The workflow for Cloud Tier looks like the following:

CloudTier Workflow

Data migration with Cloud Tier is handled as a function of the Data Domain operating system (or controlled by a fully integrated application such as NetWorker or Avamar); the general policy process is that once data has reached a certain age on the Active Tier of the Data Domain, it is migrated to the Cloud Tier without any need for administrator or user involvement.

The key for the differences – and the different use cases – between Cloud Boost and Cloud Tier is in the above sentence: “once data has reached a certain age on the Active Tier”. In this we’re reminded of the primary use case for Cloud Tier – supporting Long Term Retention (LTR) in a highly economical format and bypassing any need for tape within an environment. (Of course, the other easy differentiator is that Cloud Tier is a Data Domain feature – depending on your environment that may form part of the decision process.)

Example use cases

To get a feel for the differences in where you might deploy Cloud Boost or Cloud Tier, I’ve drawn up a few use cases below.

Cloning to Cloud

You currently backup to disk (Data Domain or AFTD) within your environment, and have been cloning to tape. You want to ensure you’ve got a second copy of your data, and you want to keep that data off-site. Instead of using tape, you want to use Cloud object storage.

In this scenario, you might look at replacing your tape library with a Cloud Boost system instead. You’d backup to your local protection storage, then when it’s time to generate your secondary copy, you’d clone to your Cloud Boost device which would push the data (compressed, deduplicated and encrypted) up into object storage. At a high level, that might result in a workflow such as the following:

CloudBoost Clone To Cloud

Backing up to the Cloud

You’re currently backing up locally within your datacentre, but you want to remove all local backup targets.  In this scenario, you might replace your local backup storage with a Cloud Boost appliance, connected to an object store, and backup via Cloud Boost (via client direct), landing data immediately off-premises and into object storage at a cloud provider (public or hosted).

At a high level, the workflow for this resembles the following:

CloudBoost Backup to Cloud

Backing up in Cloud

You’ve got some IaaS systems sitting in the Cloud already. File, web and database servers sitting in say, Amazon, and you need to ensure you can protect the data they’re hosting. You want greater control than say, Amazon snapshots, and since you’re using a NetWorker Capacity license or a DPS capacity license, you know you can just spin up another NetWorker server without an issue – sitting in the cloud itself.

In that case, you’d spin up not only the NetWorker server but a Cloud Boost appliance as well – after all, Amazon love NetWorker + Cloud Boost:

“The availability of Dell EMC NetWorker with CloudBoost on AWS is a particularly exciting announcement for all of the customers who have come to depend on Dell EMC solutions for data protection in their on-premises environments,” said Bill Vass, Vice President, Technology, Amazon Web Services, Inc. “Now these customers can get the same data protection experience on AWS, providing seamless operational backup and recovery, and long-term retention across all of their environments.”

That’ll deliver the NetWorker functionality you’ve come to use on a daily basis, but in the Cloud and writing directly to object storage.

The high level view of the backup workflow here is effectively the same as the original diagram used to introduce Cloud Boost.

Replacing Tape for Long Term Retention

You’ve got a Data Domain in each datacentre; the backups at each site go to the local Data Domain then using Clone Controlled Replication are copied to the other Data Domain as soon as each saveset finishes. You’d like to replace tape for your long term retention, but since you’re protecting a lot of data, you want to push data you rarely need to recover from (say, older than 2 months) out to object storage. When you do need to recover that data, you want to absolutely minimise the amount of data that needs to be retrieved from the Cloud.

This is a definite Cloud Tier solution. Cloud Tier can be used to automatically extend the Data Domain storage, providing a storage tier for long term retention data that’s very cheap and highly reliable. Cloud Tier can be configured to automatically migrate data older than 2 months out to object storage, and the great thing is, it can do it automatically for anything written to the Data Domain. So if you’ve got some databases using DDBoost for Enterprise Apps writing directly, you can setup migration policies for them, too. Best of all, when you do need to recall data from Cloud Tier, Boost for Enterprise Apps and NetWorker can handle that recall process automatically for you, and the Data Domain only ever recalls the delta between deduplicated data already sitting on the active tier and what’s out in the Cloud.

The high level view of the workflow for this use case will resemble the following:

Cloud Tier to LTR for NetWorker and DDBEA

…Actually, you hear there’s an Isilon being purchased and the storage team are thinking about using Cloud Pools to tier really old data out to object storage. Your team and the storage team get to talking and decide that by pooling the protection and storage budget, you get Isilon, Cloud Tier and ECS, providing oodles of cheap object storage on-site at a fraction of the cost of a public cloud, and with none of the egress costs or cloud vendor lock-in.

Wrapping Up

Cloud Tier and Cloud Boost are both able to push data into object storage, but they don’t have exactly the same use cases. There are good, clear reasons why you would work with one in particular, and hopefully the explanation and examples above have helped to set the scene on their use cases.


* Note, ‘on-premise’ would mean ‘on my argument’. The correct term is ‘on-premises’ 🙂

Dec 23 2016
 

I know, I know, it’s winter up there in the Northern Hemisphere, but NetWorker 9.1 is landing and given I’m in Australia, that makes NetWorker 9.1 a Summer Fresh release. (In fact, my local pub for the start of summer started doing a pale ale infused with pineapple and jalapeños, and that’s sort of reminding me of NetWorker 9.1: fresh, light and inviting you to put your heels up and rest a while.)

NetWorker 9.1

 

NetWorker 9 was a big – no, a huge – release. It’s a switch to a more service catalogue driven approach to backups, Linux block based filesystem backups, block based application backups, deep snapshot integration and more recently in NetWorker 9.0 SP1, REST API control as well.

NetWorker 9.1 as you’d expect is a smaller jump from 9.0 than we had from 8.2 to 9.0. That being said, it’s introduced some excellent new features:

  • VMAX SmartSnap integration – the ability to backup and restore a VMAX device based on the device WWN, increasing the depth of snapshot support in NetWorker further.
  • Snapshot Alternate Location Rollback – this lets you do a snapshot rollback, but to a different set of devices.
  • Data Domain High Availability integration – Data Domain now supports high-availability on the earlier 9500 platform, in addition to the 9800, 9300 and 6800 systems. And with v9.1, NetWorker fully understands and integrates with DDHA platforms.
  • Cloud Tier Integration – NetWorker gets deep integration into the Cloud Tier functionality introduced in Data Domain OS 6.0. This lets NetWorker cloning policies control the migration of data out to the Cloud Tier, and more seamlessly integrate with the recall process.

Cloud Tier integration is more than just a tick in the box, though. Consider the module space – NetWorker Module for Microsoft Applications, for instance, doesn’t just get the option to recover data from Cloud Tier, but also to perform granular recoveries from Cloud Tier – SQL table level recoveries and Exchange granular recoveries as well.


By the way, the NetWorker Usage Survey is still running – don’t forget to fill in how you’re using NetWorker! (And be in the running for a prize.)


I’ve saved the best – and biggest – feature for last, though. This is a doozy. Say goodbye to needing an EBR/VBA for VMware backups. That EBR/VBA functionality is now embedded in the NetWorker server itself, leaving you to just deploy some very lightweight proxies to handle the data transport processes, all controlled by NetWorker.

The current EBR appliance and proxies will continue to work with NetWorker 9.1, but I can’t think of anyone who’d want to upgrade to 9.1 without rapidly transitioning to the new platform. Here are just some of the advantages of the new process:

  • Less virtual infrastructure required – no EBRs
  • Virtual machines stored in raw VMDK file – no additional processing required for the backup, and this will also mean faster instant access processes, too
  • The FLR web GUI now runs on the NetWorker server itself
  • NMC can be used for FLR instead of the web GUI, making it more accessible to the NetWorker administrators if they don’t have access to the virtual machines being protected
  • Proxies support more concurrent virtual machine backups:
    • Maximum 25 concurrent hotadd operations;
    • Maximum 25 concurrent NBD operations
  • Significantly increased File Level Recovery (FLR) counts from VMware Image Level Backups (recommended 20,000 – more on that in a minute)
  • Significantly faster FLR operations.

In fact, I’m going to spend a little bit of time on FLR for this post, and step through the new NMC-based FLR process to give you an overview of the process. This is using the newly deployed NetWorker VMware Protection (NVP) system, with backup to and recovery from Data Domain virtual edition.

Fig 01: Starting a recovery in NMC

You start by telling NMC you want to do a virtual machine recovery and choose the vCenter server that owns the virtual machine(s) you want to recover data from.

Fig 02: Choosing the virtual machine to recover from

There’s various options for choosing the virtual machine to recover data for – you can enter the name directly, search for it, browse the various backups that have been performed, or browse the vCenter server itself.

Fig 03: Virtual Machine selected

Once you’ve selected a virtual machine for recovery, you can click Next to choose the backup to recover from.

Fig 04: Choosing the backup to recover from

In this case, I only had a single backup under the new NVP system for that virtual machine, so I was able to just click Next to continue the process. At this point you get to choose the type of recovery you want to perform:

Fig 05: Choosing the type of recovery to perform

As you can see, there’s a gamut of recovery options for virtual machines within NMC. I’m focusing on the FLR options here so I chose the bottom option and clicked Next.

Fig 06: Choosing backup instance to recover from

Next you get to choose the backup instance you want to recover from. If the backup has been cloned it may be that there’s topologically a better backup to recover from than the original, and choosing an alternate is as simple as scrolling through a list of clones.

At that point you get to choose where you want to recover to:

Fig 07: Choosing where to recover data to

Next, you’ll supply appropriate credentials for the virtual machine to be able to perform the recovery and initiate a mount of the backup into the proxy server:

Fig 08: Supplying virtual machine credentials to mount the backup

After you’ve supplied the credentials you’ll click “Start Mount” to make the specific backup available for recovery purposes, and after a few seconds that’ll result in log information such as:

Fig 09: Mounted and ready

When the mount is done, you’re ready to click Next and start browsing files for recovery.

Fig 10: Choosing files to recover from an image level backup

In this example, I selected a directory with about 7,800 files in it and the marking of files for recovery took just a few seconds to complete. After which, Next to choose where to recover the data to on the selected virtual machine:

Fig 11: Choosing where to recover data to on the virtual machine

In this case I chose to recover to C:\tmp on the virtual machine. Clicking Next allows finalisation of the recovery preparation:

Fig 12: Finalising the recovery configuration

As you would expect with the tightly integrated controls now, FLR is fully visible within the NetWorker environment – even nsrwatch:

Fig 13: FLR in progress shown in nsrwatch

And finally we have a completed recovery:

Fig 14: Completed recovery

That’s 7,918 files recovered from an image level backup in 54 seconds:

Fig 15: Recovered content

I wanted to check out the FLR capabilities a little more and decided to risk pushing the system beyond the recommendations. Instead of just recovering a single folder with 7,900 files or thereabouts, I elected to recover the entire E:\ drive on the virtual machine – comprising over 47,000 files. Here’s the results:

Fig 16: Large scale FLR results

The recovered folder:

Fig 17: Recovered Content

47,198 files, 1,488 folders, 5.01GB of data recovered as an FLR from an image level backup in just 5 minutes and 42 seconds.

If you’re using NetWorker for VMware backups, here’s the version you want to be on.

You can get it from the EMC Support page for NetWorker today.

Nov 30 2016
 

Folks, it’s that time of the year again! Each year I run a survey to gauge NetWorker usage patterns – how many clients you’ve got, what plugins you’re using, whether you’re using deduplication, and a plethora of other questions. The survey runs from December 1 (ish) through to January 31 the next year. (This year I’m kicking it off on November 30, just because I have time.)

Take the survey!

That gets assembled into a report in February of the following year, reporting trends across the various years the NetWorker survey has been conducted. If you’d like to see what the reports look like, you can view last year’s report here.

You can fill out the survey anonymously if you’d like, but if you submit your email address at the end you’ll be in the running for a copy of my upcoming book, Data Protection: Ensuring Data Availability, due out February 2017. (Last year’s winner hasn’t been forgotten – the book just got delayed.) If you submit your email address, it will not be used for any purpose other than to notify you if you’re the winner.

The survey is closed now. Results will be published in February 2017.