Apr 27 2017
 

Regardless of whether you’re new to NetWorker or have been using it for a long time, if there’s any change happening within the overall computing environment of your business, there’s one thing you always need to have access to: compatibility guides.

As I’ve mentioned a few times, including in my book, nothing in your environment will touch more things – other than the network itself – than your enterprise backup and recovery product. With this in mind, it’s always critical to understand the potential impact of changes to your environment on your backups.

NetWorker, as well as the majority of the rest of the Dell EMC Data Protection Suite, no longer has a static software compatibility guide. There are enough variables that a static software compatibility guide would be tedious to search and maintain. So instead of being a document, it’s now a database, and you get to access it and generate custom compatibility information for exactly what you need.


If you’ve not used the interactive compatibility guide before, you’ll find it at:

http://compatibilityguide.emc.com:8080/CompGuideApp/

My recommendation? Bookmark it. Make sure it’s part of your essential NetWorker toolkit. (And for that matter: Data Domain OS, Boost Plugins, ProtectPoint, Avamar, etc.)

When you first hit the compatibility landing page, you’ll note a panel on the left-hand side from which you can choose the product. In this case, when you expand NetWorker, you’ll get a list of versions since the interactive guide was introduced:

Part 01

Once you’ve picked the NetWorker version, the central panel will be updated to reflect the options you can check compatibility against:

Part 02

As you can see, there are some ‘Instant’ reports for specific quick segments of information; beyond that, there’s the ability to create a specific drill-down report using the ‘Custom Reports’ option. For instance, say you’re thinking of deploying a new NetWorker server on Linux and you want to know what versions of Linux you can deploy on. To do that, under ‘NetWorker Component’ you’d select ‘Server OS’, then get a sub-prompt for broad OS type, then optionally drill down further. In this case:

Part 03

Here I’ve selected Server OS > Linux > All to get information about all compatible versions of Linux for running a NetWorker 9.1 server. After you’ve made your selections, all you have to do is click ‘Generate Report’ to create the compatibility report. The report itself will look something like the following:

Part 04

Any area in the report that’s underlined is a hover prompt: hovering the mouse cursor over it will pop up the additional clarifying information referenced. Also note the “Print/Save Results” option – if, say, as part of a change request you need to submit documentary evidence, you can generate yourself a PDF that covers exactly what you need.

If you need to generate multiple different compatibility reports, you may need to click the ‘Reset’ button between reports to blank out all options. (This will avoid a situation where you end up, say, trying to find out what versions of Exchange on Linux are supported!)

As far as the instant reports are concerned – these are about quickly generating information you want straight away – you just click an option under Instant Reports, and you don’t even need to click ‘Generate Report’. For instance, the NVP option:

Part 05

Part 06

That’s really all there is to the interactive compatibility guide – it’s straightforward, and it’s a really essential tool in the arsenal of a NetWorker or Dell EMC Data Protection Suite user.

Oh, there’s one more thing – there are other compatibility guides, of course: older NetWorker and Avamar software guides, NetWorker hardware guides, etc. You can get access to the legacy and traditional hardware compatibility guides via the right-hand area on the guide page:

Part 07

There you have it. If you need to check NetWorker or DPS compatibility, make the Interactive Compatibility Guide your first port of call.


Hey, don’t forget my book is available now in paperback and Kindle formats!

Basics – Understanding NetWorker Architecture

Nov 14 2016
 

With the NetWorker 9 architecture now almost 12 months old, I thought it was long past time I did a Basics post covering how the overall revised architecture for data protection with NetWorker functions.

There are two distinct layers of architecture I’ll cover off – Enterprise and Operational. In theory an entire NetWorker environment can be collapsed down to a single host – the NetWorker server, backing up to itself – but in practice we will typically see multiple hosts in an overall NetWorker environment, and as has been demonstrated by the regular NetWorker Usage Surveys, it’s not uncommon nowadays to see two or more NetWorker servers deployed in a business.

Enterprise Layer

The Enterprise Layer consists of the components that technically sit ‘above’ any individual NetWorker install within your environment, and can be depicted simply with the following diagram:

Enterprise Layer

The key services that will typically be run within the Enterprise Layer are the NetWorker License Server, and the NetWorker Management Console Server (NMC Server). While NetWorker has previously had the option of running an independent license server, with NetWorker 9 this has been formalised, and the recommendation is now to run a single license server for all NetWorker environments within your business, unless network or security rules prevent this.

The License server can be used by a single NetWorker server, or if you’ve got multiple NetWorker servers, by each NetWorker server in your environment, allowing all licenses to be registered against a single host and reducing ‘relicensing’ requirements if NetWorker server details change. This is a very lightweight service, and it’s quite common to see the license services running concurrently on the same host as the NMC Server.

Like many applications, NetWorker has separated the GUI management from the core software functionality. This has multiple architectural advantages, such as:

  • The GUI and the Server functionality can be developed with more agility
  • The GUI can be used to administer multiple servers
  • The functional load of providing GUI services does not impact the core Server functionality (i.e., providing backup and recovery services).

While you could, if you wanted to, deploy a NMC Server for each NetWorker Server, it’s by no means necessary, and so it’s reasonably common to see a single NMC Server deployed across multiple NetWorker servers. This allows centralised reporting, management and control for backup administrators and operators.

Operational Layer

At the operational layer we have what is defined as a NetWorker datazone. In fact, at the operational layer we can have as many datazones as is required by the business, all subordinate to the unified Enterprise Layer. In simple parlance, a NetWorker datazone is the collection of all hosts within your environment for which a single NetWorker server provides backup and recovery services. A high level view of a NetWorker datazone resembles the following:

NetWorker Datazone (Operational Layer)

The three key types of hosts within a NetWorker datazone are as follows:

  • Server – A host that provides backup and recovery services (with all the associated management functions) for systems within your environment. There will either be (usually) a single NetWorker server in the datazone, or (in less common situations), a clustered pair of hosts acting as an active/passive NetWorker server.
  • Client – Any system that has backup and recovery services managed by a NetWorker Server
  • Storage Node – A host with access to one or more backup devices, either providing device mapping access to clients (I’ll get to that in a moment) or transferring backup/recovery to/from devices on behalf of clients. (A NetWorker server, by the way, can also function as a storage node.) A storage node can either be a full storage node, meaning it can perform those actions previously described for any number of clients, or a dedicated storage node, meaning it provides those services just to itself.

With such a long pedigree, NetWorker (as described above) is capable of running in a classic three-tier architecture – the server managing the overall environment with clients backing up to and recovering from storage nodes. However, NetWorker is equally able to ditch that legacy mode of operation and function without storage nodes, thanks to distributed deduplication in tightly integrated systems such as Data Domain and CloudBoost, combined with Client Direct. That being said, NetWorker still supports a broad range of device types, from simple tape through to purpose built backup appliances (Data Domain), cloud targets, VTL and plain disk. (In fact, I remember years ago NetWorker actually supporting VHS as a tape format!)

Client Direct, which I mentioned previously, is where clients communicate directly with network-accessible devices such as Data Domain deduplication storage. In these cases, both the NetWorker server and any storage node in the environment are removed from the data path – making for a highly efficient and scalable environment when distributed deduplication is taking place. (For a more in-depth understanding of the architectural implications of Client Direct, I suggest you review this earlier post.)

Within this operational layer, I’ve drawn the devices off to the side for the following reasons:

  • Devices can (and will) provide backup/recovery media to all layers in the NetWorker datazone – server, storage nodes (if deployed) and individual clients
  • Devices that support appropriate multi-tenancy or partitioning can actually be shared between multiple NetWorker datazones. In years gone by you might have deployed a large tape library with two or more NetWorker servers accessing virtualised autochangers from it, and more recently it’s quite easy to have the same Data Domain system for instance being accessed by multiple NetWorker servers if you want to.

Wrapping Up

The NetWorker architecture has definitely grown since I started using it in 1996. Back then each datazone required completely independent licensing and management, using per-OS-type GUI interfaces or CLI, and it was a very flattened architecture – clients and the server only. Since then the architecture has grown to accommodate the changing business landscape. My largest NetWorker datazone in 1996 had approximately 50 clients in it – these days I have customers with well over 2,000 clients in a single datazone, and have colleagues with customers running even larger environments. As the NetWorker Usage Survey has shown, the number of datazones has also been growing as businesses merge, consolidate functions, and take advantage of simplified capacity based licensing schemes.

By necessity then, the architecture available to NetWorker has grown as well. Perhaps the most important architectural lesson for newcomers to NetWorker is understanding the difference between the enterprise layer and the operational layer (the individual datazones).

If you’ve got any questions on any of the above, drop me a line or add a comment and I’ll clarify in a subsequent post.

Jul 11 2016
 

Overview

As I mentioned in the previous article, NetWorker 9 SP1 has introduced a REST API. I’ve never previously got around to playing with REST API interfaces, but as is always the case with programming, you either do it because you’re getting paid to or because it’s something that strikes you as interesting.

Accessing NetWorker via a REST API does indeed strike me as interesting. Even more so if I can do it using my favourite language, Perl.

This is by no means meant to be a programming tutorial, nor am I claiming to be the first to experiment with it. If you want to check out an in-development use of the REST API, check out Karsten Bott’s PowerShell work to date over at the NetWorker Community Page. This post covers just the process of bootstrapping myself to the point I have working code – the real fun and work comes next!

REST API

What you’ll need

For this to work, you’ll need a suitably recent Perl 5.x implementation. I’m practicing on my Mac laptop, running Perl 5.18.2.

You’ll also need the following modules:

  • MIME::Base64
  • REST::Client
  • Data::Dumper
  • JSON

And of course, you’ll need a NetWorker server running NetWorker 9 SP1.

Getting started

I’m getting old and crotchety when it comes to resolving dependencies. When I was younger I used to manually download each CPAN module I needed, try to compile it, strike dependency requirements, recurse down through those modules and keep going until I’d either solved all the dependencies or thrown the computer out the window and become a monk.

So to get the above modules I invoked the cpan install function on my Mac as follows:

pmdg@ganymede$ cpan install MIME::Base64
pmdg@ganymede$ cpan install REST::Client
pmdg@ganymede$ cpan install Data::Dumper
pmdg@ganymede$ cpan install JSON

There was a little bit of an exception thrown in the REST::Client installation about packages that could be used for testing, but overall the CPAN based installer worked well and saved me a lot of headaches.

The code

The code itself is extremely simple – as I mentioned this is a proof of concept, not intended to be an interface as such. It’s from here I’ll start as I play around in greater detail. My goal for the code was as follows:

  • Prompt for username and password
  • Connect via REST API
  • Retrieve a complete list of clients
  • Dump out the data in a basic format to confirm it was successful

The actual code therefore is:

pmdg@ganymede$ cat tester.pl

#!/usr/bin/perl -w

use strict;
use MIME::Base64();
use REST::Client;
use Data::Dumper;
use JSON;

my $username = "";
my $password = "";

print "Username: ";
$username = <>;
chomp $username;

print "Password: ";
$password = <>;
chomp $password;

# Build the HTTP Basic authentication token. The second argument ('')
# suppresses the trailing newline MIME::Base64::encode otherwise appends,
# which has no place in an HTTP header.
my $encoded = MIME::Base64::encode($username . ":" . $password, '');

# Skip SSL certificate verification - fine for lab testing, not production.
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

my $client = REST::Client->new();
my $headers = { Accept => 'application/json', Authorization => 'Basic ' . $encoded};
$client->setHost('https://orilla.turbamentis.int:9090');

# Fetch the full client list and dump out the decoded JSON response.
$client->GET('/nwrestapi/v1/global/clients',$headers);
my $response = from_json($client->responseContent);
print Dumper($response);
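For what it’s worth, the Authorization header the script builds is nothing magical – it’s just the base64 encoding of username:password, which you can reproduce from the shell (the credentials below are obviously placeholders):

```shell
# Build an HTTP Basic authentication token the same way the Perl code does;
# printf (not echo) avoids sneaking a newline into the encoded value
token=$(printf '%s:%s' "administrator" "MySuperSecretPassword" | base64)
echo "Authorization: Basic $token"
```

That’s handy when you want to poke at the API with other tools and need the same header value.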

Notes on the Code

If you’re copying and pasting the code, about the only thing you should need to change is the hostname in the line starting $client->setHost.

It’s not particularly secure in the password prompt as Perl will automatically echo the password as you’re entering it. There are ways of disabling this echo, but they require the Term::ReadKey module and that may not be readily available on all systems. So just keep this in mind…

The Results

Here’s the starting output for the code:

pmdg@ganymede$ ./tester.pl
Username: administrator
Password: MySuperSecretPassword
$VAR1 = {
          'clients' => [
                         {
                           'ndmpMultiStreamsEnabled' => bless( do{\(my $o = 0)}, 'JSON::PP::Boolean' ),
                           'ndmpVendorInformation' => [],
                           'protectionGroups' => [],
                           'resourceId' => {
                                                'sequence' => 79,
                                                'id' => '198.0.72.12.0.0.0.0.132.105.45.87.192.168.100.4'
                                           },
                           'links' => [
                                        {
                                           'rel' => 'item',
                                           'href' => 'https://orilla.turbamentis.int:9090/nwrestapi/v1/global/clients/198.0.72.12.0.0.0.0.132.105.45.87.192.168.100.4'
                                        }
                                      ],
                           'parallelSaveStreamsPerSaveSet' => $VAR1->{'clients'}[0]{'ndmpMultiStreamsEnabled'},
                           'hostname' => 'archon.turbamentis.int',

And so on…
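If you just want the client names rather than the full Dumper structure, and you happen to have jq handy, the equivalent extraction can be sketched outside Perl entirely. (The inline JSON below is a cut-down stand-in for the real API response shown above.)

```shell
# Pull just the hostname of each client from a response shaped like the
# Dumper output above (inline sample used here purely for illustration)
cat <<'EOF' | jq -r '.clients[].hostname'
{"clients":[{"hostname":"archon.turbamentis.int"},{"hostname":"orilla.turbamentis.int"}]}
EOF
```

In practice you’d pipe the actual response body in rather than a here-document.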

In Summary

The script isn’t pretty at the moment, but I wanted to get it out there as an example. As I hack around with it and get more functionality, I’ll provide updates.

Hopefully however you can see that it’s pretty straight-forward overall to access the REST API!


May 11 2016
 

Backing up data from an NFS mount-point is not ideal, but sometimes we don’t have a choice.

NFS Backup

There are a few reasons you might end up in this situation – you might need to back up data on a particularly old system that no longer has a NetWorker client available (or perhaps never did), or you might need to back up a consumer-grade NAS that doesn’t support NDMP.

In this case, it’s the latter I’m doing, having rejigged my home test lab. Having real data to test with is always good, and rather than using my filesystem generator tool I decided to back up my Synology NAS over NFS, with the fileshares directly mounted on the backup server. A backup is all well and good, but being able to recover the data is always important. While I’m not worried about ACLs etc., I did want to know I was successfully backing up the data, so I ran a recovery test and was reminded of an old chestnut in how recoveries work.

[root@orilla Documents]# recover -s orilla
4181:recover: Path /synology/pmdg/Documents is within othalla:/volume1/pmdg
53362:recover: Cannot start session with server orilla: Client 'othalla.turbamentis.int' is not properly configured on the NetWorker Server or 'othalla.turbamentis.int'(if not a virtual host) is not in the aliases list for client 'orilla.turbamentis.int'.
88866:nsrd: Client 'othalla.turbamentis.int' is not properly configured on the NetWorker Server
or 'othalla.turbamentis.int'(if not a virtual host) is not in the aliases list for client 'orilla.turbamentis.int'.

Basically, what the recovery error is saying is that NetWorker has detected that the path we’re sitting on/trying to recover from actually resides on a different host, and that host doesn’t appear to be a valid NetWorker client. Luckily, there’s a simple solution. (While the best solution might be a budget request with the home change board to buy a small Unity system, I’d just spent my remaining budget on home lab server upgrades, so I felt it best not to file that request.)

In this case the NFS mount was on the NetWorker server itself, so all I had to do was to tell NetWorker I wanted to recover from the NetWorker client:

[root@orilla Documents]# recover -s orilla -c orilla
Current working directory is /synology/pmdg/Documents/
recover> add "Stop, Collaborate and Listen.pdf"
/synology/pmdg/Documents
1 file(s) marked for recovery
recover> relocate /tmp
recover> recover
Recovering 1 file from /synology/pmdg/Documents/ into /tmp
Volumes needed (all on-line):
  Backup.01 at Backup_01
Total estimated disk space needed for recover is 1532 KB.
Requesting 1 file(s), this may take a while...
Recover start time: Sun 08 May 2016 18:28:46 AEST
Requesting 1 recover session(s) from server.
129290:recover: Successfully established direct file retrieve session for save-set ID '2922310001' with adv_file volume 'Backup.01'.
./Stop, Collaborate and Listen.pdf
Received 1 file(s) from NSR server `orilla'
Recover completion time: Sun 08 May 2016 18:28:46 AEST
recover> quit

And that’s how simple the process is.

While ideally we shouldn’t be doing this sort of backup – a double network transfer is hardly bandwidth efficient – it’s always good to keep it in your repertoire just in case you need it.

NetWorker Scales El Capitan

Dec 20 2015
 

When Mac OS X 10.11 (El Capitan) came out, I upgraded my personal desktop and laptop to El Capitan. The upgrade went pretty smoothly on both machines, but then I noticed overnight my home lab server reported backup errors for both systems.

When I checked the next day I noticed that NetWorker actually wasn’t installed any more on either system. It seemed odd for NetWorker to be removed as part of the install, but hardly an issue. I found an installer, fired it up and to my surprise found the operating system warning me the installer might cause system problems if I continued.

Doing what I should have done from the start, I set up an OS X virtual machine to test the installation on, and it seemed to go through smoothly until the very end when it reported a failure and backed out of the process. That was when I started digging into some of the changes in El Capitan. Apple, it turns out, is increasing system security by locking down third party access to /bin, /sbin, /usr/bin and /usr/sbin. As NetWorker’s binaries on Unix systems install into /usr/bin and /usr/sbin, this meant NetWorker was no longer allowed to be installed on the system.

El Capitan

Fast forward a bit: NetWorker 8.2 SP2 Cumulative Release 2 (aka 8.2.2.2) was released a week or so ago, including a relocated NetWorker installer for Mac OS X – the binaries are now located in /usr/local/bin and /usr/local/sbin instead. (The same goes for NetWorker 9.) Having run 8.2.2.2 on my home Macs for a couple of weeks, with backup and recovery testing, I can confirm the new location works.

If you’ve got Mac OS X systems being upgraded to El Capitan, be sure to download NetWorker 8.2.2.2.
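One small follow-on gotcha worth hedging against: /usr/local/sbin isn’t always in the default shell PATH on OS X, so if the NetWorker commands appear to be ‘missing’ after the upgrade, it’s worth checking your PATH before assuming the install failed. A quick sketch:

```shell
# Append /usr/local/sbin to PATH if it isn't already present
case ":$PATH:" in
  *:/usr/local/sbin:*) echo "/usr/local/sbin already in PATH" ;;
  *) PATH="$PATH:/usr/local/sbin"; export PATH; echo "added /usr/local/sbin to PATH" ;;
esac
```

Drop something like that into your shell profile if you find yourself typing full paths to the binaries.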

Oh, and don’t forget to fill in the 2015 NetWorker Usage Survey!

Basics – Taking a turn about the filesystem

Apr 27 2015
 

“Miss Eliza Bennet, let me persuade you to follow my example, and take a turn about the room. — I assure you it is very refreshing after sitting so long in one attitude.”

Jane Austen: Pride and Prejudice.

The NetWorker savegrp command has a lot of different command line options, but one which falls into that useful-for-debugging category for me has always been the -n option. This allows you to invoke the save commands for a group (or a single client in the group) in walk/don’t do mode.

While filesystems have become considerably more capable at self-repair and resilient towards minor corruption, there was a time in the past where you could encounter an operating system crash as a result of attempting to access a particularly corrupt file or part of the filesystem. Backups, of course, want to walk all the filesystems (unless you direct them otherwise), and so being able to see what NetWorker might do during a backup was helpful to diagnose such issues. (Even if it meant one more crash.)

These days, if a host being backed up by NetWorker via a filesystem agent gets a lot of changes during a day, you might simply be interested in seeing just how many files are going to be backed up.

The command is pretty straight forward:

# savegrp -nv [-c client] groupName

For instance, consider the following execution:

[root@orilla ~]# savegrp -nv -c mondas Servers
90528:savegrp: mondas:All level=incr
7236:savegrp: Group will not limit job parallelism
83643:savegrp: mondas:All started
savefs -s orilla -c mondas -g Servers -p -n -l full -R -v
mondas:/ level=incr, vers=pools, p=4
mondas:/d/01 level=incr, vers=pools, p=4
mondas:/boot level=incr, vers=pools, p=4
mondas:/d/backup level=incr, vers=pools, p=4
90491:savegrp: mondas:All succeeded.
83647:savegrp: Servers mondas:All See the file /nsr/logs/sg/Servers/832077 for command output
83643:savegrp: mondas:/ started
save -s orilla -g Servers -n -LL -f - -m mondas -t 1430050510 -o MODIFIED_ASOF_TIME:timeval=1430050506;RENAMED_DIRECTORIES:index_lookup=on;BACKUPTIME:lookup_range=1429877707:1430050510; -l incr -W 78 -N / /
83643:savegrp: mondas:/d/01 started
save -s orilla -g Servers -n -LL -f - -m mondas -t 1430050508 -o MODIFIED_ASOF_TIME:timeval=1430050506;RENAMED_DIRECTORIES:index_lookup=on;BACKUPTIME:lookup_range=1429877710:1430050508; -l incr -W 78 -N /d/01 /d/01
83643:savegrp: mondas:/boot started
save -s orilla -g Servers -n -LL -f - -m mondas -t 1430050507 -o MODIFIED_ASOF_TIME:timeval=1430050506;RENAMED_DIRECTORIES:index_lookup=on;BACKUPTIME:lookup_range=1429877709:1430050507; -l incr -W 78 -N /boot /boot
83643:savegrp: mondas:/d/backup started
save -s orilla -g Servers -n -LL -f - -m mondas -t 1430050509 -o MODIFIED_ASOF_TIME:timeval=1430050506;RENAMED_DIRECTORIES:index_lookup=on;BACKUPTIME:lookup_range=1429877708:1430050509; -l incr -W 78 -N /d/backup /d/backup
77562:savegrp: job (832078) host: mondas savepoint: / had WARNING indication(s) at completion
90491:savegrp: mondas:/ succeeded.
83647:savegrp: Servers mondas:/ See the file /nsr/logs/sg/Servers/832078 for command output
90491:savegrp: mondas:/boot succeeded.
83647:savegrp: Servers mondas:/boot See the file /nsr/logs/sg/Servers/832080 for command output
90491:savegrp: mondas:/d/01 succeeded.
83647:savegrp: Servers mondas:/d/01 See the file /nsr/logs/sg/Servers/832079 for command output
90491:savegrp: mondas:/d/backup succeeded.
83647:savegrp: Servers mondas:/d/backup See the file /nsr/logs/sg/Servers/832081 for command output
83643:savegrp: mondas:index started
save -s orilla -S -g Servers -n -LL -f - -m orilla -V -t 1429878349 -l 9 -W 78 -N index:147f6a46-00000004-5457fce2-5457fce1-0016b3a0-02efe8cc /nsr/index/mondas
128137:savegrp: Group Servers waiting for 1 jobs (0 awaiting restart) to complete.
90491:savegrp: mondas:index succeeded.
83647:savegrp: Servers mondas:index See the file /nsr/logs/sg/Servers/832082 for command output
* mondas:All savefs mondas: succeeded.
* mondas:/ suppressed 2038 bytes of output.
...snip...

You’ll see there the output reaches a point where NetWorker tells you “suppressed X bytes of output”. That’s a protection mechanism for NetWorker to prevent savegroup completion notifications growing to massive sizes. However, because we’ve used the verbose option, the output is captured – it’s just directed to the appropriate log file for the group. In this case, the output (underlined above) tells me I can check out the file /nsr/logs/sg/Servers/832078 to see the details of the root filesystem backup for the client mondas.

Checking that file, I can see what files would have been backed up:

[root@orilla Servers]# more /nsr/logs/sg/Servers/832078
96311:save: Ignoring Parallel savestreams per saveset setting due to incompatible -n/-E option(s)
75146:save: Saving files modified since Sun Apr 26 22:15:06 2015
/var/log/rpmpkgs
/var/log/secure
/var/log/audit/audit.log
/var/log/audit/
/var/log/lastlog
/var/log/cron
/var/log/wtmp
/var/log/maillog
/var/log/
/var/run/utmp
/var/run/
/var/lock/subsys/
/var/lock/
/var/spool/clientmqueue/qft3QI22Tg020700
/var/spool/clientmqueue/dft3QI22Tg020700
/var/spool/clientmqueue/
/var/spool/cups/tmp/
/var/spool/cups/
/var/spool/anacron/cron.daily
/var/spool/anacron/
/var/spool/
...snip...

This command only works for filesystem backups performed by the core NetWorker agent. It’s not compatible, for instance, with a database module or VBA – but regardless, it is the sort of debugging/analysis tool you should be aware of. (Forewarned is forearmed, and forearmed is a lot of arms… Ahem.)

Check out savegrp -n on a client/group when you have time to familiarise yourself with how it works. It’s reasonably straightforward and is a good addition to your NetWorker utility belt.
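As a practical follow-on: once you know which log file holds the verbose output, counting how many files and directories an incremental would touch is a one-liner. Here’s a sketch using an inline sample in place of the real log – in practice you’d point the grep at something like /nsr/logs/sg/Servers/832078 from the example above:

```shell
# Count the pathname lines (entries save would back up) in a savegrp -nv log
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
75146:save: Saving files modified since Sun Apr 26 22:15:06 2015
/var/log/rpmpkgs
/var/log/secure
/var/log/
EOF
grep -c '^/' "$LOG"
rm -f "$LOG"
```

Counting lines starting with ‘/’ works because the walk output lists one absolute path per line, with diagnostic messages prefixed by message IDs instead.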


Basics – virtual machine names in VBA backups

Mar 26 2015
 

If you’ve been backing up your virtual machines with VBA, you’ve probably hit that moment when you’ve run an mminfo query and seen output looking like the following:

mminfo_vm_backups_01

As you can see, that’s not the most effective way to see virtual machine names – vm:<id> doesn’t allow you to easily match it back to the virtual machine in question.

However, not all is lost. With VBA backups came a couple of new options. The first one is a “VBA backups” style report, using the command:

# mminfo -k

Using mminfo -k you’ll get a very tailored output focused entirely on your VBA backups, and it’ll resemble the following:

mminfo_vm_backups_02

That’s a really good way of seeing a quick listing of all your VBA-based virtual machine backups, but if you want a way of reconciling this in normal mminfo output, you can also make use of a new mminfo report field, vmname. For example:

mminfo_vm_backups_03

(In the above command I could have used name and vmname in order to reconcile vm:<id> entries to virtual machine names, but elected not to for brevity.)

There you have it – a couple of quick and easy ways of quickly seeing details of your virtual machine backups via mminfo.

 

Basics – Running VMware Protection Policies from the Command Line

Mar 10 2015
 

If you’ve been adopting VMware Protection Policies via VBA in your environment (like so many businesses have been!), you’ll likely reach a point where you want to be able to run a protection policy from the command line. Two immediate example scenarios would be:

  • Quick start of a policy via remote access*
  • External scheduler control

(* May require remote command line access. You can tell I’m still a Unix fan, right?)

Long-term users of NetWorker will know a group can be initiated from the backup server by using the savegrp command. When EMC introduced VMware Protection Policies, they also introduced a new command, nsrpolicy.

The simplest way to invoke a policy is as follows:

# nsrpolicy -p policyName

For example:

[root@centaur ~]# nsrpolicy -p SqueezeProtect
99528:nsrpolicy: Starting Vmware Protection Policy 'SqueezeProtect'.
97452:nsrpolicy: Starting action 'SqueezeProtect/SqueezeBackup' with command: 'nsrvba_save -s centaur -j 544001 -L incr -p SqueezeProtect -a SqueezeBackup'.
97457:nsrpolicy: Action 'SqueezeProtect/SqueezeBackup's log will be in /nsr/logs/policy/SqueezeProtect/544002.
97461:nsrpolicy: Action 'SqueezeProtect/SqueezeBackup' succeeded.
99529:nsrpolicy: Vmware Protection Policy 'SqueezeProtect' succeeded.

There you go – it’s that easy.
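Tying that back to the external scheduler scenario mentioned earlier: driving a policy from cron is equally straightforward. A sketch of a crontab entry (the binary path, policy name and log location here are examples only – adjust for your install):

```shell
# Example crontab entry: run the SqueezeProtect policy at 21:30 every night,
# appending output to a log file for later review
30 21 * * * /usr/sbin/nsrpolicy -p SqueezeProtect >> /nsr/logs/SqueezeProtect-cron.log 2>&1
```

The same form works for any external scheduler that can invoke a command on the NetWorker server.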

Records retention and NMC

Dec 10 2014
 

For those of us who have been using NetWorker for a very long time, we can remember back to when the NetWorker Management Console didn’t exist. If you wanted reports in those days, you wrote them yourself, either by parsing your savegroup completion results, processing the NetWorker daemon.log, or interrogating mminfo.

Over time since its introduction, NMC has evolved in functionality and usefulness. These days there are still some things that I find easier to do on the command line, but more often than not I find myself reaching for NMC for various administrative functions. Reporting is one of those.


(Just a quick interrupt. The NetWorker Usage Survey is happening again. Every year I ask readers to participate and tell me a bit about their environment. It’s short – I promise! – you only need around 5 minutes to answer the questions. When you’re finished reading this article, I’d really appreciate it if you could jump over and do the survey.) 


There’s a wealth of reports in NMC, but some of the ones I find particularly useful often end up being:

  • User auditing
  • Success/failure results and percentages
  • Backup volume over time
  • Deduplication statistics

In order to get maximum use out of those, you want to make sure those details are kept for as long as you need them. In newer versions of NetWorker, if you go to the Enterprise Console and check out the Reports menu, you’ll see an option labelled “Data Retention”, and the default values are as follows:

default NMC data retention values

Those values are OK for using NMC reporting just for casual checking, but if you’re intending to perform longer-term checking, reporting or compliance based auditing, you might want to extend those values somewhat. Based on conversations with a couple of colleagues, I’m inclined to extend everything except the Completion Message section to at least 3 years in sites where longer-term compliance and auditing reporting is required. The completion messages are generally a little bigger in scope, and I’d be inclined to limit those to 3 months at most. So the resulting fields would look like:

alternate NMC data retention values

Ultimately the values you set in the NMC Reports Data Retention area should be specific to the requirements of your business, but be certain to check them out and tweak the defaults as necessary to align them with your needs.


(Hey, now you’ve finished reading this article, just a friendly reminder: the NetWorker Usage Survey is happening again. Every year I ask readers to participate and tell me a bit about their environment. It’s short – I promise! – you only need around 5 minutes to answer the questions, and I’d really appreciate it if you could jump over and do the survey.)


 

Sep 01 2014
 

A question I get asked from time to time is “How do I do X in NetWorker?” – and by “how”, I mean the order of steps rather than a general description.

Workflow for adding a new client

To me, the configuration steps in NetWorker are often quite minimal compared to the operational and organisational processes that typically should be followed to ensure an appropriately maintained system. Configuring a new client is a perfect example of this, so below is the procedure I normally recommend following:

  1. Determine if there are any databases or applications on the host that require module-based backups.
  2. Determine if there is anything on the host that should be excluded from backup.
  3. Determine any special retention requirements (vs ‘default’ retention requirements used in the business).
  4. Determine if any SLAs require integration between backup and other data protection processes (e.g., with snapshots, replication targets, etc.)
  5. Check OS and application versions against the compatibility guide if they’re not standard/already known versions.
  6. Ensure the backup system has sufficient capacity for bringing the client on-board.
  7. Determine what tests are to be applied to this client to confirm it’s successfully brought on-board.
  8. Determine whether any backup software to be installed will require an OS or application restart – for example:
    • NMM with GLR might require reboots (and if .Net needs to be installed, 2 reboots may be required).
    • Oracle and other databases may require restarting for library linking.
  9. Determine if any firewalls will need to be adjusted to allow for backup traffic.
  10. Confirm forward/reverse lookups between all appropriate hosts – for example:
    • New client and backup server
    • New client and storage node(s)
    • New client and IP backup storage (e.g., Data Domains)
  11. Confirm network connectivity between all appropriate hosts.
  12. File change requests or work plans as appropriate within the organisation, supplying appropriate installation/back-out plans and any peripheral configuration activity (e.g., changing firewalls, etc.)
  13. Confirm change approval and schedule.
  14. Install filesystem client.
  15. Install database module (if required).
  16. Configure filesystem backups in NetWorker.
  17. Test filesystem backups in NetWorker and remediate.
  18. Configure database backups.
  19. Test database backups and remediate.
  20. Integrate client instances with appropriate retention policies and schedules.
  21. Confirm successful next-day operation of automated backups.
  22. Add client into any custom reporting (should fold automatically into standard reporting).
  23. Close off change as required.

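The name resolution and connectivity checks in steps 10 and 11 are easy to script ahead of change night. Here's a minimal Python sketch – the hostnames are placeholders, and while 7937 is the standard base NetWorker service port (nsrexecd), treat the port list as something to adjust for your own environment:

```python
import socket

def check_forward_lookup(hostname):
    """Return the resolved IPv4 address for hostname, or None if lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

def check_tcp(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. from the new client, confirm the backup server resolves and that the
# base NetWorker service port (7937/nsrexecd) is reachable:
#   check_forward_lookup("backupserver.example.com")
#   check_tcp("backupserver.example.com", 7937)
print(check_forward_lookup("localhost"))
```

Run the equivalent checks in both directions (client to server, server to client, and likewise for storage nodes and Data Domains) before you file the change request – it's much cheaper to find a missing DNS entry or firewall rule at step 10 than at step 17.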
Depending on your environment, those processes may change a bit – or they may even be less formal – but cutting corners in data protection can easily lead to a mishap, so if you’re looking for a procedure for adding a client, you could do a lot worse than the one above.