NetWorker 9.2 – A Focused Release

Jul 29, 2017

NetWorker 9.2 has just been released. Now, normally I pride myself on having kicked the tyres on a new release for weeks before it comes out, via the beta programmes, but unfortunately my June, June and July taught me new definitions of busy (I was busy enough that I did June twice), so instead I’ll be rolling the new release into my lab this weekend, after I’ve done this initial post about it.


I’ve been working my way through NetWorker 9.2’s new feature set, though, and it’s impressive.

As you’ll recall, NetWorker 9.1 introduced NVP, or vProxy – the replacement for the Virtual Backup Appliance introduced in NetWorker 8. NVP is incredibly efficient for backup and recovery operations, and delivers hyper-fast file level recovery from image level backups. (Don’t just take my written word for it though – check out this demo where I recovered almost 8,000 files in just over 30 seconds.)

NetWorker 9.2 expands on the virtual machine backup integration by adding the capability to perform Microsoft SQL Server application consistent backup as part of a VMware image level backup. That’s right, application consistent, image level backup. That’s something Avamar has been able to do for a little while now, and it’s now being adopted in NetWorker, too. We’re starting with Microsoft SQL Server – arguably the simplest one to cover, and the most sought after by customers, too – before tackling other databases and applications. In my mind, application consistent image level backup is a pivot point for simplifying data protection – in fact, it’s a topic I covered as an emerging focus for the next several years of data protection in my book, Data Protection: Ensuring Data Availability. I think in particular app-consistent image level backups will be extremely popular in smaller/mid-market customer environments where there’s not guaranteed to be a dedicated DBA team within the IT department.

It’s not just DBAs that get a boost with NetWorker 9.2 – security officers do, too. In prior versions of NetWorker, it was possible to integrate Data Domain Retention Lock via scripting – now in NetWorker 9.2, it’s rolled into the interface itself. This means you’ll be able to establish retention lock controls as part of the backup process. (For organisations not quite able to go down the path of having a full isolated recovery site, this will be a good mid-tier option.)

Beyond DBAs and security officers, those who are interested in backing up to the cloud, or in the cloud, get a boost as well – CloudBoost 2.2 has been introduced with NetWorker 9.2, and this gives Windows 64-bit clients the CloudBoost API too, allowing a direct-to-object-storage model from both Windows and Linux (which got CloudBoost client direct in an earlier release). What does this mean? Simple: it’s a super-efficient architecture leveraging an absolute minimum footprint, particularly when you’re running IaaS protection in the cloud itself. Cloud protection gets another option as well – support for DDVE in the cloud: AWS or Azure.

NMC isn’t left out – as NetWorker continues to scale, there’s more information and data within NMC for an administrator or operator to sort through. If you’ve got a few thousand clients, or hundreds of client groups created for policies and workflows, you might not want to scroll through a long list. Hence, there’s now filtering available in a lot of forms. I’m always a fan of speeding up what I have to do within a GUI, and this will be very useful for those in bigger environments, or who prefer to find things by searching rather than eye-balling while scrolling.

If you’re using capacity licensing, otherwise known as Front End TB (FETB) licensing, NetWorker now reports license utilisation estimation. You might think this is a cinch, but it’s only a cinch if you count whitespace everywhere. That’s not something we want done. Still, if you’ve got capacity licensing, NetWorker will now keep track of it for you.

There’s a big commitment within DellEMC for continued development of automation options within the Data Protection products. NetWorker has always enjoyed a robust command line interface, but a CLI can only take you so far. The REST API that was introduced previously continues to be updated. There’s support for the Data Domain Retention Lock integration and the new application consistent image level backup options, just to name a couple of new features.
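To give a sense of what driving NetWorker via REST looks like, here’s a minimal Python sketch that builds an authenticated request for the client list. The server name and credentials are placeholders, and the port and path follow the documented API layout – check them against the REST API guide for your release before relying on them:

```python
import base64
import urllib.request

def build_nsr_request(server, user, password, resource="clients"):
    """Build an authenticated GET request for the NetWorker REST API.

    The port (9090) and path (/nwrestapi/v3/global/...) reflect the
    documented API layout, but verify them against your release's
    REST API guide - they are assumptions here, not gospel.
    """
    url = f"https://{server}:9090/nwrestapi/v3/global/{resource}"
    # The API uses HTTP basic authentication
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/json")
    return req

# Build (but don't send) a request listing configured clients:
req = build_nsr_request("nsrserver.example.com", "administrator", "secret")
print(req.full_url)
```

From there it’s an `urllib.request.urlopen(req)` (or your HTTP library of choice) away from scripted policy and workflow management.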

NetWorker isn’t just about the core functionality, either – there are also the various modules for databases and applications, and they’ve not been left unattended.

SharePoint and Exchange get tighter integration with ItemPoint for granular recovery. Previously it was a two-step process to mount the backup and launch ItemPoint – now the NMM recovery interface can automatically start ItemPoint, directing it to the mounted backup copies for processing.

Microsoft SQL Server is still of course supported for traditional backup/recovery operations via the NetWorker Module for Microsoft, and it’s been updated with some handy new features. Backup and recovery operations no longer need Windows administrative privileges in all instances, and you can do database exclusions now via wildcards – very handy if you’ve got a lot of databases on a server following a particular naming convention and you don’t need to protect them all, or protect them all in a single backup stream. You also get the option during database recovery to terminate other user access to the database; previously this had to be managed manually by the SQL administrator for the target database – now it can be controlled as part of the recovery process. There’s also a bunch of new options for SQL Always On Availability Groups, and backup promotion.

In addition to the tighter ItemPoint integration mentioned previously for Exchange, you also get the option to do ItemPoint/Granular Exchange recovery from a client that doesn’t have Exchange installed. This is particularly handy when Exchange administrators want to limit what can happen on an Exchange server. Continuing the tight Data Domain Cloud Tier integration, NMM now handles automatic and seamless recall of data from Cloud Tier should it be required as part of a recovery option.

Hyper-V gets some love, too: there’s processes to remove stale checkpoints, or merge checkpoints that exceed a particular size. Hyper-V allows a checkpoint disk (a differencing disk – AVHDX file) to grow to the same size as its original parent disk. However, that can cause performance issues and when it hits 100% it creates other issues. So you can tell NetWorker during NMM Hyper-V backups to inspect the size of Hyper-V differencing disks and automatically merge if they exceed a certain watermark. (E.g., you might force a merge when the differencing disk is 25% of the size of the original.) You also get the option to exclude virtual hard disks (either VHD or VHDX format) from the backup process should you desire – very handy for virtual machines that have large disks containing transient or other forms of data that have no requirement for backup.
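The merge-watermark idea above boils down to a simple size comparison. Here’s an illustrative sketch (the 25% default and the function itself are mine, not NetWorker’s code) of the decision NMM is making for you:

```python
def should_merge(parent_bytes: int, diff_bytes: int, watermark: float = 0.25) -> bool:
    """Return True when a Hyper-V differencing disk (AVHDX) has grown
    past the configured fraction of its parent disk's size.

    Mirrors the merge-watermark idea described above; the 25% default
    and this function are illustrative, not NetWorker's actual code.
    """
    return diff_bytes >= watermark * parent_bytes

GB = 1024 ** 3
# A 100GB parent with a 30GB differencing disk trips a 25% watermark:
print(should_merge(100 * GB, 30 * GB))   # True
# A 10GB differencing disk is left alone:
print(should_merge(100 * GB, 10 * GB))   # False
```

The point of the watermark is that you merge before the differencing disk gets anywhere near parent size, where performance and capacity problems start to bite.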

Active Directory recovery browsing gets a performance boost too, particularly for large AD trees.

SAP IQ (formerly known as Sybase IQ) gets support in NetWorker 9.2 NMDA. You’ll need to be running v16 SP11 and a simplex architecture, but you’ll get a variety of backup and recovery options. A growing trend within database vendors is to allow designation of some data files within the database as read-only, and you can choose to either backup or skip read-only data files as part of a SAP IQ backup, amongst a variety of other options. If you’ve got a traditional Sybase ASE server, you’ll find that there’s now support for backing up database servers with >200 databases on them – either in sequence, or with a configured level of parallelism.

DB2 gets some loving, too – NMDA 9.1 gave support for DB2 on little-endian Linux on Power environments, but with 9.2 we also get a Boost plugin to allow client-direct/Boost backups for DB2 little-endian environments.

(As always, there’s also various fixes included in any new release, incorporating fixes that were under development concurrently in earlier releases.)

As always, when you’re planning to upgrade NetWorker, there are a few things you should do as a matter of course. There’s a new approach to making sure you’re aware of these steps – when you click to download the NetWorker server installer for either Windows or Linux, you’ll initially find yourself redirected to a PDF: the NetWorker 9.2 Recommendations, Training and Downloads for Customers and Partners. Now, I admit – in my lab I have a tendency sometimes to just leap in and start installing new packages, but in reality when you’re using NetWorker in a real environment, you really do want to make sure you read the documentation and recommendations for upgrades before going ahead with updating your environment. The recommendations guide is only three pages, but they’re three very useful pages – links to technical training, references to the documentation portfolio, where to find NetWorker-focused videos on the NetWorker Community site and YouTube, and details about licensing and compatibility. There’s also a very quick summary of differences between NetWorker versions, and finally the download location links are provided.

Additional key documentation you should – in my mind, must – review before upgrading includes the release notes, the compatibility guide, and of course the ever-handy updating from a prior version guide. That’s in addition to checking the standard installation guides.

Now if you’ll excuse me, I have a geeky data protection weekend ahead of me as I upgrade my lab to NetWorker 9.2.

Apr 18, 2016

I’ve been working my way through a pretty intense cold the last few days. To avoid spending the entire weekend playing Minecraft while I convalesce, I downloaded the newly released Data Domain Virtual Edition to start refreshing my lab. With DDVE including a performance tester, I was curious to see what my current lab setup would yield. (Until I finish updating my server, my lab is VMware ESX running within VMware Fusion on my late-2015 iMac. With 32GB of RAM and Thunderbolt-2 RAID, it’s serviceable but hardly ideal.)

I should point out – the title of this blog article is slightly inaccurate. It took me less than 30 minutes to install DDVE including filing a change request* – but it did take me about two hours to download the OVA file, thanks to ADSL speeds. The OVA is just 1.2GB in a zip file though, so if you’re not using internet based on RFC 1149 you should find it coming down very quickly.
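For a sense of why the download dominated the install time, a quick back-of-the-envelope sketch (the 1.5Mbps ADSL figure is my assumption, and no protocol overhead is modelled):

```python
def download_hours(size_gb: float, link_mbps: float) -> float:
    """Rough download time in hours for a file of size_gb gigabytes
    over a link of link_mbps megabits per second (no overhead modelled)."""
    size_megabits = size_gb * 1024 * 8  # GB -> megabits
    return size_megabits / link_mbps / 3600

# The 1.2GB OVA zip over a ~1.5Mbps ADSL link is roughly two hours:
print(round(download_hours(1.2, 1.5), 1))        # 1.8 (hours)
# ...while a 50Mbps connection pulls it down in a few minutes:
print(round(download_hours(1.2, 50) * 60, 1))    # 3.3 (minutes)
```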

Installing DDVE is such a cinch there’s no excuse not to have one running in your lab already! (Here’s the download link. Don’t forget to mosey along to the site as well and download the Installation guide for DDVE.)

Once the DDVE OVA was downloaded and my DNS was prepped, it was an incredibly straightforward install.

DDVE OVA Deployment #1

DDVE Deployment #2

DDVE Deployment #3

DDVE Deployment #4

DDVE Deployment #5

(Being the “free and frictionless” version, I chose the option for the 4TB configuration – there are a few tiers of options, and the 4TB option covers everything from the free 0.5TB through to the 4TB option.)

DDVE Deployment #6

DDVE Deployment #7

DDVE Deployment #8


If you’re deploying DDVE for production use, or for earnest testing, you really should deploy it with thick provisioning (as recommended in the install guide). Because I’m doing this just in my home lab, I switched over to thin provisioning, which I’ve used in the past with adequate performance for home testing.

After the OVA was deployed I edited the virtual machine before powering it up, adding a 500GB virtual disk (again for my purposes, thinly provisioned – you should use thick). The “free and frictionless” version of DDVE does not expire, but is limited to 0.5TB. (Even at this size, it’s actually quite generous once deduplication sizes come into play.)
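To put “quite generous” in perspective, the logical (pre-dedupe) data that fits into that half terabyte depends on your deduplication ratio. The ratios below are illustrative figures of mine, not guaranteed results:

```python
def effective_capacity_tb(physical_tb: float, dedupe_ratio: float) -> float:
    """Logical (pre-dedupe) data that fits in a given amount of
    deduplicated storage, assuming a uniform deduplication ratio.
    The ratios used below are illustrative assumptions."""
    return physical_tb * dedupe_ratio

# Even the free 0.5TB tier goes a long way once dedupe kicks in:
print(effective_capacity_tb(0.5, 10))   # 5.0 TB logical at 10:1
print(effective_capacity_tb(0.5, 20))   # 10.0 TB logical at 20:1
```

Real-world ratios vary enormously with data type and retention, but for a lab full of repetitive backup data, the free tier stretches surprisingly far.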

DDVE Deployment #9

Once the deployment was completed, I did something I’ve never done with a Data Domain before – elected to use the GUI configuration. This consisted of providing enough networking configuration to allow a web-browser connection to the DDVE, and then once logged in I could start configuring it graphically.

DDVE Deployment #10

DDVE Deployment #11

DDVE Deployment #12

DDVE Deployment #13

I was pretty stoked by this! Not only did my DDVE deployment assessment pass, but it passed with flying colours. That’s on a late 2015 iMac running DDVE within VMware ESX within VMware Fusion, sitting on 4 x 2TB 7200 RPM drives in a Thunderbolt-2 RAID-5 enclosure. (When I’ve done DDVE tests in the past on my iMac I’ve actually got great performance out of it, so I’m not surprised, but it’s great to see the test results.)

It was just a few short steps after that and I had a Data Domain fully up and running, fully virtualised within my network.

In coming posts I’ll walk through connecting NetWorker to Data Domain and show some performance results of this setup, but I felt it worthwhile stepping through just how simple and easy it is to get a Data Domain setup in your environment now thanks to DDVE. If you’ve not worked with Data Domain before, there’s never been a better time to give it a go!

* The change request, roughly put, was to shout up the stairway, “Hey, I’m going to restart DNS for a few seconds for some hostname updates. Is that OK?”

Apr 09, 2016

This week was a big one for the data protection industry, with the official release of Data Domain Virtual Edition (DDVE).


DDVE offers the same deduplication capabilities as its physical cousin, encapsulated in a virtual machine. On release it can scale up to 16TB of pre-dedupe storage (i.e., the size of the VMDK you can allocate for it to use as its storage); it’s the same deduplication algorithm so you’ll get the same level of deduplication out of DDVE as you would a physical Data Domain system. Licensing is per-TB, and you can “slice and dice” licenses based on changing requirements in your infrastructure.
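The “slice and dice” licensing model is essentially a pool of per-TB capacity you carve up across DDVE instances. A small sketch of that idea (the site names and numbers here are made up for illustration):

```python
def slice_license(total_tb: float, sites: dict) -> dict:
    """Check a proposed split of a per-TB DDVE license pool across
    sites, returning the allocated and remaining capacity.

    Raises ValueError if the split exceeds the pool. Purely an
    illustration of the 'slice and dice' licensing idea; the site
    names and figures below are made up.
    """
    allocated = sum(sites.values())
    if allocated > total_tb:
        raise ValueError(f"over-allocated by {allocated - total_tb}TB")
    return {"allocated": allocated, "remaining": total_tb - allocated}

# Split a 16TB pool across a head office and two branches,
# keeping 2TB in reserve for whichever site grows first:
print(slice_license(16, {"head-office": 8, "branch-a": 4, "branch-b": 2}))
```

As data moves between sites, you redistribute the same pool rather than buying site-locked capacity.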

The initial use cases for DDVE are edge and entry-level, with a bonus use-case for later. Entry-level is straight-forward; whereas previously a business might have bought say, the Data Domain 2200 with the 4TB capacity option, now a business can start with a DDVE as small as 1TB. It’s not uncommon particularly in the small end of the mid-market space to see a lack of replication being performed on backups, and usually this is due to budgetary limitations in small businesses. DDVE will help to relieve that cost constraint and allow even the smallest of businesses to get robust data protection.

Now, let’s consider the edge use case. It’s usually reasonably straight-forward to design and implement data protection infrastructure within datacentres – there’s space that can be allocated, IT staff present and robust networking. So it’s typical to see in a dual-datacentre arrangement two appropriately sized Data Domains acting as local data protection storage with replication between one-another.

Out at the edge – the remote or branch offices for a business – things get a little more tricky. Increasingly, as edge environments become virtualised or even hyperconverged, there’s limited physical space, few IT staff, and there’s always that requirement to get the data (primary and backup) replicated back into the datacentre for site recoverability. Physical space is often at a premium, and the less data there is at the edge, the more cost-prohibitive it seems to deploy physical data protection hardware there. Yet that edge data is still important and still needs to be protected.

This is where DDVE is going to make a big impact. Businesses that previously looked at deploying the DD160, or its successor the DD2200, at edge locations can now go with Software Defined Data Protection Storage (SDDPS) and eliminate the need for additional physical hosts at the edge. With the flexible licensing, DDVE units at the edge can grow or shrink with the data usage patterns in those remote offices.

DDVE deployment example

Thus, remote sites that might have been protected via workgroup-based products, ad hoc replication, or even not at all can now be folded into the central control of the backup administrators and get the same quality of protection as we get in the datacentre.

In addition to being flexible on the capacity split-up, DDVE licensing is also very inclusive: it’s got Boost, Replication and Encryption bundled into the per-TB capacity license. So even stretched right out to the edge of your environment, you’ll get the advantages of distributed segment processing for minimised network traffic, and when those backups get replicated in it’ll be bandwidth efficient, which is always a big concern at the edge. With encryption included, you’ll even be able to consider at-rest or in-flight encryption depending on the business needs.

DDVE uses the same management interface and CLI, and the same operating system as the physical Data Domains. The same upgrade RPM you might download for your Data Domain 9500 will be as applicable to the 9500 as it will be to a DDVE system. That also leads into the bonus use case I mentioned before: giving you a test environment.

I’m a big proponent of having a proper and permanent test environment for data protection, and having a test environment is almost a fundamental requirement of any formal change control processes. Like all other production activities in your environment, data protection should require the same levels of change control you apply to production system changes. So if you require tests conducted on non-production systems before you go and upgrade from Oracle 11 to Oracle 12, why wouldn’t you require tests to be conducted before you upgrade core protection infrastructure?

Getting the business to agree to test environments is sometimes difficult though – particularly in those businesses where backup/data protection is not treated as “production”. (Hint: it is production, it’s just not business function production – unless you’re a service provider of course.) Now with DDVE there’s no excuse in my mind for any business to not have a test environment for one simple reason: you can deploy a half terabyte DDVE unit for free within your environment, and it never expires. So if you’ve still not got Data Domain in your environment and want to test it out yourself, or you want to test out VBA backups, or in-flight encryption, or practically anything else, you can spin up your own free Data Domain and do whatever tests you require.

There’s even going to be a try-and-buy option where you start with that 0.5TB free version, and when you’re ready you can convert it over to a production licensed DDVE that you can scale up to 16TB.

DDVE is going to be a game changer for a lot of businesses and a lot of data protection options, and you should definitely be checking it out.

Here are a few resources you might want to review: