Survey – Virtual Backup Servers

I have to admit, I have great personal reservations towards virtualising backup servers. There’s a simple, fundamental reason for this: the backup server should have as few dependencies as possible in an environment. Therefore to me it seems completely counter-intuitive to make the backup server dependent on an entire virtualisation layer existing before it can be used.

For this reason I also have some niggling concerns with running a backup server as a blade server.

Personally, at this point in time, I would never willingly advocate deploying a NetWorker server as a virtual machine (except in a lab situation) – even when running in director mode.

Let me qualify: I consider ‘director’ mode to be where the NetWorker server acts almost like a dedicated storage node – it only backs up its own index/bootstrap information; with all other backups in the datazone being sent to storage nodes. Hence, as much as possible, all it is doing is ‘directing’ the backups.

But I’m keen to understand your thoughts on the matter.

This survey has now closed.

10 thoughts on “Survey – Virtual Backup Servers”

  1. Preston,
    I agree with you that a backup server (master, director) should be as independent as possible – and precisely for that reason, I’d prefer the server virtualised. Virtualisation solves the problems of hardware, a hardware-bound OS, location, and redundancy.

    – if my hardware breaks (and it will at a certain point in time) I will have to keep a spare machine or go with reinstall-recovery, which, as you will agree, poses its own very peculiar set of problems
    – the OS, regardless which one, is bound to the hardware, be it for licensing, MAC address, or drivers. A change in the OS (because of a move to another datacenter for example) may hurt (although it probably won’t, in all fairness)
    – I can move my VM anywhere, to another rack, datacenter, or country without much hassle, I can copy, make a snap and even export it. Hardware will prevent this.
    – Although I could use a cluster, which – again – comes, no surprise, with another set of quite unique issues

    A storage node (slave, media server) doesn’t keep configs, tape DBs, or file indexes, and needs top HW performance. A server needs top redundancy, which virtualisation takes care of very well – but not blades, IMHO.

  2. Hi,
    I agree the NW server should be standalone.
    But a storage node can be a virtual machine. We have a virtual storage node running as a separate LDOM on Solaris (T5220) and it is working very well. At the same time, the other storage nodes are still standalone, so we can switch the NW server over to one of them in case of DR.

    Cheers,
    Matjaz

  3. Preston, I’d be happy to run my Server in ‘Director’ mode as a vm. Generally speaking, I agree with AKaasjager’s logic, but here’s another reason.

    I recently had to migrate our server to new hardware because the underlying kit was out of maintenance and a bit underpowered as well. Because the server migration for NetWorker is unnecessarily complex (I’ve got a whole rant on that, btw), I’d just as soon never do that again.

    If the server was virtualized, that entire exercise would be unnecessary. I think that there would be some very specific design considerations I’d want to make sure were in place beforehand, mostly regarding the shared storage that would hold the installation files, media db, and indexes. But the dependence on the virtualization layer (VMware’s anyway) does not give me any indigestion, and in fact alleviates several concerns.

  4. I had the same reservations a few years ago, once we decided to virtualise our production backup server along with all other servers, but once I made the leap, it was totally worth it. It absolutely has to be in director mode as you describe. All the benefits of hardware abstraction and HA/FT that you get with a VM are just as relevant to as critical an app as NetWorker, especially for storage mobility and expansion for a growing and changing datazone. Snapshots before major upgrades? Cloning for testing or redeployment to another site? Yes please. You have to be more confident than ever in your ability to recover NetWorker with bootstraps and indices (even onto a physical host if you need to, to solve your virtualisation layer dependency conundrum) if and when the time comes. Plan for it, practice it, and sleep easy. AFTD local staged to an AFTD on DD NFS share works for me. When VCB was still in fashion, I even had the NetWorker server do a VCB snapshot backup of itself, going to Avamar. But that was just for giggles.

  5. Hello,

    It’s not where your backup server in director mode runs that is important, provided it has access to good enough hardware; it’s the “link to the outside world”. For me there is no problem with having a NetWorker server in director mode in a virtual machine, BUT never forget that a NetWorker server is a talker. A backup server is always sending and receiving data, so the network – and the network configuration – must be good.

    Kind regards

  6. It all depends on the case. When we do full backups, we use 1.5 Gbps, and we need two weekends to finish them. If we virtualised the server, it would use all the bandwidth of the host machine….

    Excuse my English.
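[Editor’s note: a rough back-of-envelope sketch of the constraint raised above – how link speed bounds full-backup time. The data size and efficiency figures are illustrative assumptions, not from the comment.]

```python
# Back-of-envelope: how long a full backup takes when the network
# link is the bottleneck. Figures are illustrative assumptions.

def full_backup_hours(data_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Hours to move data_tb terabytes over a link_gbps link,
    assuming the given fraction of the link is usable."""
    bytes_total = data_tb * 1e12                       # TB -> bytes (decimal units)
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency   # Gbit/s -> usable bytes/s
    return bytes_total / bytes_per_sec / 3600

# A 1.5 Gbps link at 80% efficiency moves roughly 0.54 TB/hour,
# so 10 TB of full backups would need about 18.5 hours of wire time.
print(round(full_backup_hours(10, 1.5), 1))
```

On those assumptions, several tens of TB of fulls really would consume multiple weekends on a 1.5 Gbps link – which is why sharing the host’s bandwidth with a virtualised server is a legitimate concern.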

  7. The only comment I would make is that virtualised environments are not all made equal. I’d have no problem running the server on an IBM LPAR or similar, as long as it had passthrough HBAs to replicated storage and an equivalent DR server. I’d not be putting it on VMware though!
    I’d not virtualise storage nodes – I view them as expendable assets and prefer to deploy multiple cheap storage nodes, using DDS etc., so that the loss of one has minimal impact.

  8. I have installed NetWorker servers on HP C-Class blades quite a few times and it works very well.
    Most of the servers in a blade environment will be blade servers, so most of the backup traffic stays inside the blade chassis. Most blade environments use internal switches or HP Virtual Connect. While the internal blade switches have a bandwidth of 40–100 Gbit/sec, the uplinks are often only 1 to 4 Gbit – or 10 Gbit if there is a 10 Gbit Ethernet capable backbone. With this in mind, using a blade server as a backup server in a blade environment is sometimes a smart thing to do!
    Now about the I/O capability: an HP BL460 blade server comes with two 10 Gbit NICs and can take an additional dual-port FC HBA with a bandwidth of 8 Gbit/sec per port.
    That’s a lot of I/O. Using an optional SAS switch in the blade enclosure and an additional mezzanine RAID controller, you can add an MDS600 JBOD with up to 140 TB of “cheap” SATA storage to the backup server for backup-to-disk.
    So what do you have against a bladed backup server?

    About virtualisation: the director solution is cool and, in my opinion, preferable to a NetWorker cluster.
    In one customer environment with some ten branch offices I have deployed virtualised NetWorker servers.
    Why? It was the cheapest solution: the customer already had NetWorker in his datacenter and wanted to control all his backups from one management console.
    So we used the Workgroup Edition with eight clients in his branch offices. There we had a maximum of three virtualised clients and a maximum full backup of 150 GB. We back up to an iSCSI-attached JBOD using backup-to-disk. Maximum throughput is some 30 MB/sec – not very good, but sufficient for the amount of data.
    So don’t sneer at virtualisation of backup servers, but take a look at the environment (and the budget!) you are designing the solution for.

    Cheers Uwe
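[Editor’s note: a quick sanity check on the branch-office figures above – a 150 GB full backup at roughly 30 MB/sec. Decimal units are assumed.]

```python
# Sanity check: 150 GB full backup at ~30 MB/sec sustained.

def backup_minutes(size_gb: float, throughput_mb_s: float) -> float:
    """Minutes to back up size_gb gigabytes at throughput_mb_s MB/sec,
    assuming decimal units (1 GB = 1000 MB)."""
    return size_gb * 1000 / throughput_mb_s / 60

# 150 GB at 30 MB/sec is about 83 minutes - comfortably inside a
# nightly backup window, which supports the "sufficient" verdict.
print(round(backup_minutes(150, 30)))
```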

  9. Thanks to all for sharing their knowledge. I have a question: are all of you backing up to disk only? With an FC LTO library, I think there is no way around using a non-virtual storage node to access it?

    Cheers, Tom

  10. @Tom: as stated, personally I prefer my storage nodes (what some people refer to as e.g. media servers or even data pumps) physical.
    So, I can hook up any fitting controller with blazing-speed-tape or disk in any form… just as the situation demands.

    That said, at least with VMware you should be able to actually hook up SCSI and FC devices (in ESX at least). What possibilities you may have with LPARs and LDOMs, I cannot say.

    And last but not least, there is a tape hardware vendor that is actually selling iSCSI drives. Although I have no experience with those devices, I trust this vendor enough to assume the performance is at least adequate:

    http://www.spectralogic.com/index.cfm?fuseaction=products.showContentAndChildren&CatID=410
