Mar 06 2015

A little over 5 years ago now, I wrote an article titled Things not to virtualise: backup servers and storage nodes. It’s long past time to revisit this topic and say that’s no longer a recommendation I’d make.


At the time I suggested there were two key reasons why you wouldn’t virtualise these systems:

  • Dependencies
  • Performance

The dependencies point related to the potentially thorny situation of needing to recreate a certain level of your virtualised environment before you could commence disaster recovery operations using NetWorker, and the second related to guaranteeing maximum performance for your backup server (and for that matter, storage nodes).

With appropriate planning, I believe neither of these considerations any longer represents a reason to avoid virtualising backup infrastructure. But if you disagree, first consider a few statistics from the 2014 NetWorker Usage Report:

  • 10% of respondents said some of their NetWorker servers were virtualised.
  • 10% of respondents said some of their Storage Nodes were virtualised.
  • 5% of respondents said all of their Storage Nodes were virtualised.
  • 9% of respondents said all of their NetWorker servers were virtualised.

Stepping back to the original data from that report, of the 9% of respondents who said all of their NetWorker servers were virtual, some were small environments, but there were just as many environments with 501+ clients, and some with 5001+ clients backing up 5+PB of data. Similar correlations applied for environments where all storage nodes were virtualised.

Clearly size or scale is not an impediment towards virtualised backup infrastructure.

So what’s changed?

There’s a few key things from my perspective that have changed:

  • Substantially reduced reliance on tape
  • Big uptake in Data Domain backup solutions
  • More advanced and mature virtualisation disaster recovery options

Let’s tackle each of those. First, consider tape – getting tape access (physical or virtual) within a virtual machine has always been painful. While VMware still technically supports virtual machine access to tape, it’s fraught with considerations that impact the options available to other virtual machines on the same ESX server. That’s not really a portable option.

At the same time, we’re seeing a big switch away from tape as a primary backup target. The latest NetWorker usage report showed that just 9% of sites weren’t using any form of backup to disk. As soon as tape is removed as a primary backup target, virtualisation becomes a much simpler proposition, for any storage node or backup server.

Second, Data Domain. As soon as you have Data Domain as a primary backup target, your need for big, powerful storage nodes drastically decreases. Client Direct, where the individual clients perform data segmentation themselves and send data directly to an accessible device, practically eliminates storage node requirements in many environments. Rather than being hosts capable of handling the throughput of gigabytes of data per second, a storage node simply becomes the host responsible for giving individual clients a path to write to or read from on the target system. Rather than revisit that here, I’ll point you at an article I wrote in August 2014 – Understanding Client Direct.

In case you’re thinking Data Domain is just a single product, keep in mind from the recent usage report that a whopping 78% of respondents said they were using some form of deduplication, and of those respondents, 47% were using Data Domain Boost. In fact, once you take VTL and CIFS/NFS into account, 80% of respondents using deduplication were using Data Domain. (Room, meet gorilla.)
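For context, Client Direct is simply an attribute on the NSR client resource. A minimal sketch of enabling it via nsradmin might look like the following – the host name here is purely hypothetical, and the exact attribute syntax can vary between NetWorker versions, so treat this as illustrative rather than definitive:

```
nsradmin> . type: NSR client; name: filesrv01.example.com
nsradmin> update client direct: Enabled
nsradmin> print
```

With the attribute enabled, a client that can reach the target device writes to it directly; if it can’t, backups fall back to the traditional transport via the storage node.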

Finally – more advanced virtualisation disaster recovery options. At the time I’d written the previous article, I’d just seen a demo of SRM, but since then it’s matured and datacentres have matured as well. It’s not uncommon for instance to see stretched networks between primary and disaster recovery datacentres … when coupled with SRM, a virtual backup server that fails on one site can be brought up on the other site with the same IP address and hostname within minutes.

Of course, a virtual backup server or storage node may somehow fail in such a way that the replicated version is unusable. But the nature of virtualisation allows a new host to be stood up very quickly (compared to, say, a physical server). I’d argue that, when coupled with backup to disk that isn’t directly inside the virtual machine (and who would do that?), the disaster recovery options are more useful and comprehensive for virtual backup servers and storage nodes than they are for physical versions of the same hosts.

Now dropping back briefly to performance: the advanced functionality in VMware for guaranteeing performance characteristics and resources to virtual machines (reservations, shares and limits) allows you to ensure that storage nodes and backup servers deliver the performance required.
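As a rough illustration (the values here are purely hypothetical, and in practice you’d set these through the vSphere client rather than editing files by hand), the per-VM scheduler settings behind such guarantees look something like this in a virtual storage node’s .vmx configuration:

```
sched.cpu.min = "4000"
sched.cpu.shares = "high"
sched.mem.min = "8192"
sched.mem.shares = "high"
```

Here sched.cpu.min is the CPU reservation in MHz and sched.mem.min the memory reservation in MB. Reserving resources this way means the backup window doesn’t depend on what the virtual machine’s neighbours happen to be doing.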

vCenter clustering and farms of ESX servers also drastically reduce the chance of losing so much of the virtual infrastructure that it must be redeployed prior to commencing a recovery. Of course, that’s a risk vs cost game, but what part of disaster recovery planning isn’t?

So here I am, 5 years later, very openly saying I disagree with 2009-me: now is the time to seriously consider virtualising as much as possible of your backup infrastructure. (Of course, that’s dependent on your underlying infrastructure, but again, what part of disaster recovery planning isn’t dependent on that?)

  6 Responses to “Virtualised servers and storage nodes”

  1. […] Note: It’s 2015, and I now completely disagree with what I wrote below. Feel free to read what I had to say, but then check out Virtualised Servers and Storage Nodes. […]

  2. I have implemented a similar approach for a Fortune 500 client in Europe and it worked flawlessly. Server commissioning became very easy, and DR would be just a matter of a few clicks. As you also pointed out, Data Domain played harbinger to this journey from physical to virtual.

    Kamal Kanwadia

  3. You’re definitely not alone in making these new recommendations.

    NetWorker Engineering recently put together a presentation on NetWorker best practices gathered from the most experienced field and implementation people. In this presentation, it is recommended to use a virtual machine for the NetWorker server because it simplifies certain cases such as failover or disaster recovery. The guide also recommends using virtual storage nodes unless there is a tape-out requirement or there will be high I/O.

    • Hello Ian

      Do you have a link you can share as to where this presentation is? I think this would be very beneficial to myself and others.

      Thank you,


      • Unfortunately the presentation itself is EMC internal only because it contains some internal information. I’ll check with the team that put it together and see if they can generate a customer friendly version.

      • I had a chance to follow up with the team that put together the document. The current plan is to publish this information in an Operational Best Practices Guide similar to the current Avamar Operational Best Practices Guide. They are hoping to make this available in the coming quarter but as always, timelines may slip.

