While virtualisation has introduced additional complexities into our lives from a backup perspective, you can’t argue with the benefits it has brought.
A decade or more ago when I’d talk to customers about test environments for backups, I’d usually get a slightly shocked response. It was one thing to spend money on backup infrastructure, but spending it on test backup infrastructure seemed to many like I’d descended into madness.
Yet just like any other aspect of production systems, a backup environment is something you need to be able to test. At minimum you need to be able to test:
- New backup scenarios.
- New recovery scenarios.
- Upgrades.
In the above, I’m not referencing your normal testing (periodic recoveries, media verification, etc.). That sort of testing goes without saying, and for the most part will operate using your production backups.
As a backup administrator though, you’ll periodically need to conduct tests that don’t interfere with the production backup environment. Sometimes if you’re in a big enough organisation you may very well have some physical hardware to do that on (e.g., previous generation models of libraries or storage that have been decommissioned and replaced). For the most part, you’ll likely be resorting to a virtualised environment.
So, what do you have to do that sort of testing on? What does your backup test lab look like?
Here’s mine:
- Backup server running CentOS Linux 5.6, with 256GB VTL capacity and 512GB AFTD capacity;
- Storage node running CentOS Linux 5.6, with 128GB VTL and 128GB AFTD capacity;
- 4 x Linux virtual machine clients with around 20GB occupied capacity each;
- 8 x Linux virtual machine clients with around 6GB of occupied capacity each (base OS only);
- 1 x Linux virtual machine with ~100GB capacity and 1,000,000 files;
- 1 x Linux virtual machine running Oracle 11;
- As required:
  - Windows 2012 storage node in eval mode;
  - Windows 2012 clients in eval mode.
That’s Linux top-heavy of course, but I don’t have access to a Microsoft volume license or anything that makes deployment of a lot of Windows VMs easy (and legal). Not all VMs are ever active at once – generally speaking I’ll have at most 12-16 running at any point, depending on which ones I need. The basic Linux clients are usually configured with just 128MB of RAM, so it’s easy to have all 12 client VMs running at once. The Linux machine running Oracle, on the other hand, needs 2 CPUs and 4GB of RAM, so it tends to hog resources a bit more.
Overall it’s not meant to be about providing super-quick performance, just a testing platform.
New NetWorker versions? NetWorker upgrade techniques? NetWorker disaster recovery testing? NetWorker platform migrations? They’re the sort of things I use my virtual lab to test with.
What’s your virtual backup lab? And what do you test in it?
Hi,
You’re right: as a backup administrator I also use Linux most of the time. Lately I’ve used it to simulate migrating a backup strategy from one Data Zone to another, and last week DSA NDMP …
Thanks for sharing.
Hi,
for testing NDMP backup and recovery you should consider adding a NetApp ONTAP simulator. It’s a virtual machine, and I think it’s worth having in a test environment, because NDMP is a little tricky in some respects.
By the way, I wish I had such an excellent test lab!
My home lab which I use for my own upskilling and for testing new scripts consists of whitebox builds.
1 vSphere lab, built on a quad-core i7 with 32GB of RAM, a 512GB SSD and 2TB of HDD storage.
1 OpenStack lab, built on an older quad-core i7 with 24GB of RAM and 1TB of SSD storage.
1 NAS with 16TB usable capacity on RAID 5, running Ubuntu and served via NFS.
1 NAS with 24TB usable capacity on RAIDZ2, running NAS4Free and serving via iSCSI.
Yeah that’s right, I’m a nerd.
I have the Avamar AVE loaded on vSphere, and run a NetWorker server on CentOS backing up to AFTD. I have ambitions to link the NetWorker server to the Avamar AVE, but haven’t tried yet.
I have approximately 10 Linux VMs that I test backups on, a combination of CentOS, SuSE and Ubuntu in 32-bit and 64-bit flavours, along with FreeBSD and OpenIndiana installs. I have Windows Server 2008 and 2012 servers and Windows 7 desktops that are TechNet licensed. I imagine I’ll move to eval when that expires.
I used to have the VNX simulator running as well for testing NDMP.
Obviously this lab does a lot more than backup, but it has proved itself useful time and time again when companies I work at don’t budget for development equipment and expect me to write scripts on the production environment.
My current use is mostly working on learning Ruby so I can write a Puppet module for NetWorker (It’s slow going so far).
Hi fellow Networkers,
this is truly the blind spot of EMC NetWorker: recovery tests.
And it’s time for EMC to introduce a function to schedule fully automated recovery tests from any production system.
One step could be a dedicated recovery test interface on the NetWorker server where I can directly connect my encapsulated test network.
Our hosting customers, for example, expect regular tests with live data. Getting this running was a bit tricky, but now we can run recovery tests from running systems with current data without interfering with the live systems (thanks to SAN snapshots).
But I expect this out of the box from an “enterprise” solution.
Hi Andy,
In theory the NMC recovery interface with scheduled recoveries could be extended by EMC to support automated recovery testing.
While I’m a big fan of automated recovery testing, I’m also a big fan of framework backup products, which NetWorker most definitely is. It’s a fairly straightforward task to use command line scripting to automate recovery testing across a plethora of hosts, OS types and application types. I don’t think everything needs to be available out of the box.
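By way of illustration (and very much as a sketch rather than anything production-ready), here’s the kind of thing I mean, in Python: it just shells out to the NetWorker recover command for a handful of lab clients and reports pass/fail. The server name, client names and recovery paths below are placeholders, and you’d want to double-check the recover options against your particular NetWorker release.

```python
#!/usr/bin/env python3
"""Minimal sketch of automated recovery testing by wrapping the NetWorker
'recover' CLI. Hostnames and paths are placeholders; verify the recover
flags against your NetWorker release before relying on this."""

import subprocess

# Hypothetical test matrix: client -> a path to recover as a smoke test.
TEST_CLIENTS = {
    "linux01.lab.local": "/etc/hosts",
    "linux02.lab.local": "/etc/fstab",
}

BACKUP_SERVER = "nsrserver.lab.local"   # assumed lab backup server
RELOCATE_DIR = "/tmp/recovery-test"     # recover into a scratch area

def run_recovery_test(client: str, path: str) -> bool:
    """Recover one path for one client and report success or failure."""
    cmd = [
        "recover",
        "-s", BACKUP_SERVER,   # backup server to recover from
        "-c", client,          # client whose index to browse
        "-d", RELOCATE_DIR,    # relocate recovered files here
        "-a", path,            # non-interactive recovery of this path
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"FAIL {client}:{path}\n{result.stderr}")
        return False
    print(f"OK   {client}:{path}")
    return True

if __name__ == "__main__":
    failures = sum(not run_recovery_test(c, p) for c, p in TEST_CLIENTS.items())
    raise SystemExit(1 if failures else 0)
```

Wrap something like that in a scheduled job, add whatever content verification suits your environment, and you’ve got the bones of automated recovery testing without waiting for it to appear in the product.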
I’m curious as to your experience with automated testing options in other enterprise backup products?
Cheers.