May 23 2017

I’m going to keep this one short and sweet. In Cloud Boost vs Cloud Tier I go through a few examples of where and when you might consider using Cloud Boost instead of Cloud Tier.

One interesting thing I’m noticing of late is a variety of people talking about “VTL in the Cloud”.


I want to be perfectly blunt here: if your vendor is talking to you about “VTL in the Cloud”, they’re talking to you about transferring your workloads rather than transforming your workloads. When moving to the Cloud, about the worst thing you can do is lift and shift. Even in Infrastructure as a Service (IaaS), you need to closely consider what you’re doing to ensure you minimise the cost of running services in the Cloud.

Is your vendor talking to you about how they can run VTL in the Cloud? That’s old hat. It means they’ve lost the capacity to innovate – or at least, lost interest in it. They’re not talking to you about a modern approach, but just repeating old ways in new locations.

Is that really the best that can be done?

In a coming blog article I’ll talk about the criticality of ensuring your architecture is streamlined for running in the Cloud; in the meantime I just want to make a simple point: talking about VTL in the Cloud isn’t a “modern” discussion – in fact, it’s quite the opposite.

May 23 2017


A seemingly straightforward question, "what constitutes a successful backup?" may not engender the same response from everyone you ask. On the surface, you might suggest the answer is simply "a backup that completes without error" – and that's part of the answer, but it's not the complete answer.


Instead, I'm going to suggest there are actually at least ten factors that go into making up a successful backup, and explain why each one of them is important.

The Rules

One – It finishes without a failure

This is the simplest definition of a successful backup – one that literally finishes successfully. It makes sense, and it should be a given. If a backup fails to transfer the data it is meant to transfer during the process, it's obviously not successful.

Now, there's a caveat here, something I need to cover off. Sometimes you might encounter situations where a backup completes successfully but triggers or produces a spurious error as it finishes – i.e., you're told it failed, but it actually succeeded. Is that a successful backup? No. Not in a useful way, because it's either encouraging you to ignore errors or demanding manual cross-checking.

Two – Any warnings produced are acceptable

Sometimes warnings will be thrown during a backup. It could be that a file had to be re-read, or a file was opened at the time of backup (e.g., on a Unix/Linux system) and could only be partially read.

Some warnings are acceptable, some aren't; and some warnings that are acceptable on one system may not be acceptable on another. Take, for instance, log files. On a lot of systems, if a log file is being actively written to while the backup is running, a warning about an incomplete capture of that file may be acceptable. But if the host is a security logging system and compliance/auditing requirements dictate that all security logs must be recoverable, an open-file warning won't be acceptable.

Three – The end-state is captured and reported on

I honestly can't count the number of times over the years I've heard of situations where a backup was assumed to have been running successfully, then when a recovery is required there's a flurry of activity to determine why the recovery won't work … only to find the backup hadn't been completing successfully for days, weeks, or even months. I really have dealt with support cases in the past where critical data that had to be recovered was unrecoverable due to a recurring backup failure – one that had been going on, being reported in logs and completion notifications, day-in, day-out, for months.

So, a successful backup is also a backup where the end-state is captured and reported on. The logical result is that if the backup does fail, someone knows about it and is able to choose an action for it.

When I first started dealing with NetWorker, that meant checking the savegroup completion reports in the GUI. As I learnt more about the importance of automation, and systems scaled (my system administration team had a rule: “if you have to do it more than once, automate it”), I built parsers to automatically interpret savegroup completion results and provide emails that would highlight backup failures.
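As a hedged illustration of that kind of parser – the completion lines below are invented stand-ins, not NetWorker's actual savegroup output format – the core of it is simply scanning a report for failure indicators and surfacing only those:

```python
# Illustrative completion report; the line format here is a made-up
# stand-in, not NetWorker's real savegroup output.
SAMPLE_REPORT = """\
orion: /home level=incr, 312 MB 00:04:11 2571 files
rigel: /var level=incr, 48 MB 00:01:02 310 files
vega: /data level=full, aborted
"""

FAILURE_TOKENS = ("aborted", "failed", "error")

def failed_saves(report: str) -> list:
    """Return only the lines of a completion report that indicate failure."""
    return [line for line in report.splitlines()
            if any(token in line.lower() for token in FAILURE_TOKENS)]

failures = failed_saves(SAMPLE_REPORT)
# 'failures' would then be dropped into the body of a notification email,
# so a failed save is highlighted rather than buried among the successes.
```

The point isn't the parsing itself – it's that a human only has to read about the exceptions, not wade through hundreds of successes.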

As an environment scales further, automated parsing needs to scale as well – hence the necessity of products like Data Protection Advisor, where you get not only simple dashboards for overnight success ratios, but also drill-downs, root cause analysis, SLA adherence reports and beyond.

In short, a backup needs to be reported on to be successful.

Four – The backup method allows for a successful recovery

A backup exists for one reason alone – to allow the retrieval and reconstruction of data in the event of loss or corruption. If the way in which the backup is run doesn’t allow for a successful recovery, then the backup should not be counted as a successful backup, either.

Open files are a good example of this – particularly if we move into the realm of databases. For instance, on a regular Linux filesystem (e.g., XFS or EXT4), it would be perfectly possible to configure a filesystem backup of an Oracle server. No database plugin, no communication with RMAN, just a rolling sweep of the filesystem, writing all content encountered to the backup device(s).

But it wouldn’t be recoverable. It’s a crash-consistent backup, not an application-consistent backup. So, a successful backup must be a backup that can be successfully recovered from, too.

Five – If an off-site/redundant copy is required, it is successfully performed

Ideally, every backup should get a redundant copy – a clone. Practically, this may not always be the case. The business may decide, for instance, that 'bronze' tiered backups – say, of dev/test systems – do not require backup replication. Ultimately this becomes a risk decision for the business, and so long as the right role(s) have signed off against the risk, and it's deemed to be a legally acceptable risk, then there may not be copies made of specific types of backups.

But for the vast majority of businesses, there will be backups for which there is a legal/compliance requirement for backup redundancy. As I’ve said before, your backups should not be a single point of failure within your data protection environment.

So, if a backup succeeds but its redundant copy fails, the backup should, to a degree, be considered to have failed. This doesn’t mean you have to necessarily do the backup again, but if redundancy is required, it means you do have to make sure the copy gets made. That then hearkens back to requirement three – the end state has to be captured and reported on. If you’re not capturing/reporting on end-state, it means you won’t be aware if the clone of the backup has succeeded or not.

Six – The backup completes within the required timeframe

You have a flight to catch at 9am. Because of heavy traffic, you don’t arrive at the airport until 1pm. Did you successfully make it to the airport?

It's the same with backups. If, for compliance reasons, you're required to have backups complete within 8 hours, but they take 16 to run, have they successfully completed? They might exit without an error condition, but if SLAs have been breached, or legal requirements have not been met, it technically doesn't matter that they finished without error. The time it took them to run was, in fact, the error condition. Saying it's a successful backup at this point is sophistry.
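Expressed as a hedged code sketch (the window length and timestamps are purely illustrative), the success test has two clauses, not one:

```python
from datetime import datetime, timedelta

def backup_succeeded(exit_ok: bool, start: datetime, end: datetime,
                     window_hours: float) -> bool:
    """A backup only succeeds if it exits cleanly AND fits its window:
    an SLA breach is an error condition in its own right."""
    return exit_ok and (end - start) <= timedelta(hours=window_hours)

start = datetime(2017, 5, 23, 22, 0)
# Exited without error, but took 16 hours against an 8 hour window:
late = backup_succeeded(True, start, start + timedelta(hours=16), 8)
```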

Seven – The backup does not prevent the next backup from running

This can happen in one of two different ways. The first is actually a special condition of rule six – even if there are no compliance considerations, if a backup meant to run once a day takes longer than 24 hours to complete, then by extension it's going to prevent the next backup from running. This becomes a double failure – not only has the earlier backup overrun, but the next backup doesn't run because the earlier backup is blocking it.

The second way is not necessarily related to backup timing – this is where a backup completes, but leaves the system in a state that prevents the next backup from running. This isn't necessarily a common thing, but I have seen situations where, for whatever reason, the way a backup finished prevented the next backup from running. Again, that becomes a double failure.

Eight – It does not require manual intervention to complete

There are two effective categories of backups – those that are started automatically, and those that are started manually. A backup may in fact be started manually (e.g., in the case of an ad-hoc backup), but it should still be able to complete without manual intervention.

As soon as manual intervention is required in the backup process, there’s a much greater risk of the backup not completing successfully, or within the required time-frame. This is, effectively, about designing the backup environment to reduce risk by eliminating human intervention. Think of it as one step removed from the classic challenge that if your backups are required but don’t start without human intervention, they likely won’t run. (A common problem with ‘strategies’ around laptop/desktop self-backup requirements.)

There can be workarounds for this – for example, if you need to trigger a database dump as part of the backup process (e.g., for a database without a plugin), it could be that a password needs to be entered, and the dump tool only accepts passwords interactively. Rather than having someone actually enter the password manually, the dump command could instead be automated with tools such as Expect.
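Expect itself is a Tcl-based tool; as an illustrative sketch of the same technique using only Python's standard pty module – the prompt pattern and password here are hypothetical, not any particular dump utility's real interface:

```python
import os
import pty
import re

def run_with_password(argv, prompt=rb"[Pp]assword:", password="dump-secret"):
    """Expect-style automation: run argv on a pseudo-terminal, wait for a
    password prompt, type the password, then collect the remaining output.
    The prompt pattern and password are illustrative assumptions."""
    pid, fd = pty.fork()
    if pid == 0:                            # child: becomes the dump tool
        os.execvp(argv[0], argv)
    output = b""
    while not re.search(prompt, output):    # wait for the interactive prompt
        output += os.read(fd, 1024)
    os.write(fd, password.encode() + b"\n")
    while True:                             # drain output until the child exits
        try:
            chunk = os.read(fd, 1024)
        except OSError:                     # pty closed by the exiting child
            break
        if not chunk:
            break
        output += chunk
    os.waitpid(pid, 0)
    return output.decode(errors="replace")
```

In practice argv would be your database dump command; anything that prompts on its controlling terminal can be driven this way, keeping humans out of the nightly loop.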

Nine – It does not unduly impact access to the data it is protecting

(We’re in the home stretch now.)

A backup should be as light-touch as possible. Perhaps the best example of a 'heavy touch' backup is a cold database backup. That's where the database is shut down for the duration of the backup – a perfect example of a backup directly impacting/impeding access to the data being protected. Sometimes it's more subtle though – high performance systems may have limited IO and system resources to handle the streaming of a backup, for instance. If system performance is degraded by the backup, then the backup should be considered unsuccessful.

I liken this to uptime vs availability. A server might be up, but if the performance of the system is so poor that users can't make use of the service it offers, it's not usable. That's where, for instance, systems like ProtectPoint can be so important – in high performance systems it's not just about getting a high speed backup, but about limiting the load on the database server during the backup process.

Ten – It is predictably repeatable

Of course, there are ad-hoc backups that might only ever need to be run once, or backups that you may never need to run again (e.g., pre-decommissioning backup).

The vast majority of backups within an environment though will be repeated daily. Ideally, the result of each backup should be predictably repeatable. If the backup succeeds today, and there are absolutely no changes to the systems or environment, then it should be reasonable to expect the backup will succeed tomorrow. That doesn't obviate the requirement for end-state capturing and reporting; it does mean though that the backup results shouldn't effectively be random.

In Summary

It's easy to understand why the simplest answer ("it completes without error") might be assumed to be the whole answer to "what constitutes a successful backup?" There's no doubt it forms part of the answer, but if we think beyond the basics, there are definitely a few other contributing factors to achieving really successful backups.

Consistency, impact, recoverability and timeliness, along with all the other rules outlined above, come into how we define a truly successful backup. And remember, it's not about making more work for us – it's about preventing future problems.

If you found the above useful, I'd suggest you check out my book, Data Protection: Ensuring Data Availability. Available in paperback and Kindle formats.

May 10 2017

Dell EMC World is currently on in Las Vegas, and one of the most exciting announcements to come out of the show (in my opinion) is the Integrated Data Protection Appliance (IDPA).

Hyperconverged is eating into the infrastructure landscape – it’s a significantly growing market for Dell EMC, as evidenced by the VxRail and VxRack product lines. These allow you to deploy fast, efficiently and with a modernised consumption approach thanks to Enterprise Hybrid Cloud.

The next step in that hyperconverged path is hyperconverged data protection, which is where the IDPA comes in.


Hyperconverged data protection works on the same rationale as hyperconverged primary production infrastructure: you can go out to market and buy a backup product, data protection storage, systems infrastructure to run it on, and so on, then when it all arrives, assemble, test and configure it. Or you can buy a single appliance with the right starting and growth capacity for you, get it delivered on-site pre-built and tested, and a few hours later be running your first backup.

The IDPA is an important step in the evolution of data protection, recognising the changing landscape in the IT infrastructure environment, notably:

  • Businesses want to see results realised from their investment as soon as it arrives
  • Businesses don’t want IT staff spending time doing ‘one-off’ installation activities.
  • The silo, ‘communicate via service tickets’ approach to IT is losing ground as the infrastructure administrator becomes a real role within organisations. It’s not just infrastructure becoming hyperconverged – it’s people, too.
  • The value of automation is finally being understood, since it frees IT resources to concentrate on projects and issue resolution, rather than BAU button pressing.
  • Mobile workforces and remote office environments increasingly mean you may not have an IT person present on-site to physically make a change, etc.
  • Backup administrators need to become data protection administrators, and data protection architects.

And there's a final aspect to the IDPA that cannot be overstated in the realm of hyper-virtualised environments: the IDPA is a natural physical separation of your protection data from your operational infrastructure. Consider a traditional protection environment:

Traditional Environment

In a traditional protection environment, you’ll typically have separated protection storage (e.g., Data Domain), but it’s very typical these days, particularly in hyper-virtualised environments, to see the backup services themselves running within the same environment they’re protecting. That means if there is a significant primary systems infrastructure issue, your recovery time may take longer because you’ll have to first get the backup services up and running again.

IDPA provides complete separation though:

IDPA Environment

The backup services and configuration no longer run on your primary systems infrastructure, instead running in a separate appliance. This gives you higher levels of redundancy and protection for your protection environment, decreasing risk within your business.

Top picks for where you should consider an IDPA:

  • When deploying large-scale hyperconverged environments (e.g., VxRack)
  • For remote offices
  • For greenfields computer-rooms
  • For dealing with large new workloads
  • For modernising your approach to data protection
  • Whenever you want a single, turnkey approach to data protection with a single vendor supporting the entire stack

The IDPA can scale with your business; there are models starting as low as 34TB usable (pre-dedupe) and scaling all the way to 1PB usable (and that's before you consider cloud-tiering).

If you’re wanting to read more about IDPA, check out the official Dell EMC blog post for the release here.

May 05 2017

There was a time, comparatively not that long ago, when the biggest governing factor in LAN capacity for a datacentre was not the primary production workloads, but the mechanics of getting a full backup from each host over to the backup media. If you’ve been around in the data protection industry long enough you’ll have had experience of that – for instance, the drive towards 1Gbit networks over Fast Ethernet started more often than not in datacentres I was involved in thanks to backup. Likewise, the first systems I saw being attached directly to 10Gbit backbones in datacentres were the backup infrastructure.

Well architected deduplication can eliminate that consideration. That’s not to say you won’t eventually need 10Gbit, 40Gbit or even more in your datacentre, but if deduplication is architected correctly, you won’t need to deploy that next level up of network performance to meet your backup requirements.

In this blog article I want to take you through an example of why deduplication architecture matters, and I’ll focus on something that amazingly still gets consideration from time to time: post-ingest deduplication.

Before I get started – obviously, Data Domain doesn’t use post-ingest deduplication. Its pre-ingest deduplication ensures the only data written to the appliance is already deduplicated, and it further increases efficiency by pushing deduplication segmentation and processing out to the individual clients (in a NetWorker/Avamar environment) to limit the amount of data flowing across the network.

A post-ingest deduplication architecture, though, has your protection appliance featuring two distinct tiers of storage – the landing or staging tier, and the deduplication tier. That means when it's time to do a backup, all your clients send all their data across the network to sit, in original-sized format, on the staging tier:

Post Process Dedupe 01

In the example above we’ve already had backups run to the post-ingest deduplication appliance; so there’s a heap of deduplicated data sitting in the deduplication tier, but our staging tier has just landed all the backups from each of the clients in the environment. (If it were NetWorker writing to the appliance, each of those backups would be the full sized savesets.)

Now, at some point after the backup completes (usually a preconfigured time), post-processing kicks in. This is effectively a data-migration window in a post-ingest appliance where all the data in the staging tier has to be read and processed for deduplication. For example, using the example above, we might start with inspecting ‘Backup01’ for commonality to data on the deduplication tier:

Post Process Dedupe 02

So the post-ingest processing engine starts by reading through all the content of Backup01 and constructs fingerprint analysis of the data that has landed.

Post Process Dedupe 03

As fingerprints are assembled, data can be compared against the data already residing in the deduplication tier. This may result in signature matches or signature misses, indicating new data that needs to be copied into the deduplication tier.

Post Process Dedupe 04

In this it’s similar to regular deduplication – signature matches result in pointers for existing data being updated and extended, and a signature miss results in needing to store new data on the deduplication tier.

Post Process Dedupe 05

Once the first backup file written to the staging tier has been dealt with, we can delete that file from the staging area and move onto the second backup file to start the process all over again. And we keep doing that over and over and over on the staging tier until we’re left with an empty staging tier:

Post Process Dedupe 06

Of course, that’s not the end of the process – then the deduplication tier will have to run its regular housekeeping operations to remove data that’s no longer referenced by anything.
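That housekeeping pass is essentially garbage collection over the segment store. A toy sketch, assuming a simple dictionary of fingerprint → segment and a set of fingerprints still referenced by retained backups (real appliances track references far more efficiently than this):

```python
def housekeeping(store: dict, live_fingerprints: set) -> int:
    """Remove segments no retained backup references; return bytes reclaimed."""
    dead = [fp for fp in store if fp not in live_fingerprints]
    reclaimed = sum(len(store[fp]) for fp in dead)
    for fp in dead:
        del store[fp]
    return reclaimed

segment_store = {"fp-a": b"\x00" * 4096, "fp-b": b"\x00" * 2048}
# Only fp-a is still referenced by a retained backup; fp-b has expired:
freed = housekeeping(segment_store, live_fingerprints={"fp-a"})
```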

Architecturally, post-ingest deduplication is a kazoo to pre-ingest deduplication’s symphony orchestra. Sure, you might technically get to hear the 1812 Overture, but it’s not really going to be the same, right?

Let’s go through where architecturally, post-ingest deduplication fails you:

  1. The network becomes your bottleneck again. You have to send all your backup data to the appliance.
  2. The staging tier has to have at least as much capacity available as the size of your biggest backup – and that assumes it can execute its post-process deduplication within the window between when your previous backup finishes and your next backup starts.
  3. The deduplication process becomes entirely spindle bound. If you’re using spinning disk, that’s a nightmare. If you’re using SSD, that’s $$$.
  4. There’s no way of telling how much space will be occupied on the deduplication tier after deduplication processing completes. This can lead you into very messy situations where say, the staging tier can’t empty because the deduplication tier has filled. (Yes, capacity maintenance is a requirement still on pre-ingest deduplication systems, but it’s half the effort.)

What this means is simple: post-ingest deduplication architectures are asking you to pay for their architectural inefficiencies. That’s where:

  1. You have to pay to increase your network bandwidth to get a complete copy of your data from client to protection storage within your backup window.
  2. You have to pay for both the staging tier storage and the deduplication tier storage. (In fact, the staging tier is often a lot bigger than the size of your biggest backups in a 24-hour window so the deduplication can be handled in time.)
  3. You have to factor the additional housekeeping operations into blackout windows, outages, etc. Housekeeping almost invariably becomes a daily rather than a weekly task, too.

Compare all that to pre-ingest deduplication:

Pre-Ingest Deduplication

Using pre-ingest deduplication, especially Boost based deduplication, the segmentation and hashing happen directly where the data is; rather than sending all the data to be protected from the client to the Data Domain, we only send the unique data. Data that already resides on the Data Domain? All we'll have sent is a tiny fingerprint so the Data Domain can confirm it's already there (and update its pointers to the existing data) before moving on. After your first backup, that potentially means that on a day to day basis your network requirements for backup are reduced by 95% or more.
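To make those signature match/miss mechanics concrete, here's a toy sketch of source-side deduplication. It's deliberately simplified – fixed 4KB segments and SHA-256 fingerprints stand in for Boost's actual variable-length segmentation – but it shows why only the first backup pays the full network cost:

```python
import hashlib
import os

SEGMENT_SIZE = 4096  # toy fixed-size segments; real segmentation is variable-length

def backup(data: bytes, store: dict) -> int:
    """Deduplicate 'data' against 'store' (fingerprint -> segment).
    Returns the number of segment bytes actually sent over the wire."""
    sent = 0
    for i in range(0, len(data), SEGMENT_SIZE):
        segment = data[i:i + SEGMENT_SIZE]
        fingerprint = hashlib.sha256(segment).hexdigest()
        if fingerprint not in store:       # signature miss: ship the segment
            store[fingerprint] = segment
            sent += len(segment)
        # signature match: only the tiny fingerprint crosses the network
    return sent

store = {}
monday = os.urandom(1_000_000)             # first full backup: all new data
first_sent = backup(monday, store)         # every segment is a miss
second_sent = backup(monday, store)        # unchanged data: 0 segment bytes sent
```

Day two's cost is just the fingerprints – which is where that "95% or more" reduction comes from for mostly-unchanged data.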

That’s why architecture matters: you’re either doing it right, or you’re paying the price for someone else’s inefficiency.

If you want to see more about how a well architected backup environment looks – technology, people and processes, check out my book, Data Protection: Ensuring Data Availability.

May 02 2017

I had a fairly full-on weekend so I missed this one – NetWorker 9.1.1 is now available.

Being a minor release, this one is focused on general improvements and currency, as opposed to introducing a wealth of new features.


There’s some really useful updates around NMC, such as:

  • Performance/response improvements
  • Option for NMC to retrieve a vProxy support bundle for you
  • NMC now shows when the NetWorker server is running in service mode
  • NMC will give you a list of virtual machines backed up and skipped
  • NMC recoveries now highlight the calendar dates that are available to select backups to recover from

Additionally, NDMP and NMDA get some updates as well:

  • Some NDMP application options can now be set at the NetWorker client resource level, rather than having to be established as environment variables
  • NMDA for SAP/Oracle and Oracle/RMAN get more compact debug logs
  • NMDA for Sybase can now recover log-tail backups.

Finally, there’s the version currency:

  • NetWorker Server High Availability is now supported on SuSE 12 SP2 with HAE, and RHEL 7.3 in a High Availability Cluster (with Pacemaker).
  • NVP/vProxy supports vSphere 6.0u3
  • Meditech module supports Unity 4.1 and RecoverPoint 5.0.

As always for upgrades, make sure you read the release notes before diving in.

Also, don’t forget my new book is out: Data Protection: Ensuring Data Availability. It’s the perfect resource for any data protection architect.

Apr 27 2017

Regardless of whether you’re new to NetWorker or have been using it for a long time, if there’s any change happening within the overall computing environment of your business, there’s one thing you always need to have access to: compatibility guides.

As I've mentioned a few times, including in my book, other than the network itself there won't be anything in your environment that touches more things than your enterprise backup and recovery product. With this in mind, it's always critical to understand the potential impact of changes to your environment on your backups.

NetWorker, as well as the majority of the rest of the Dell EMC Data Protection Suite, no longer has a static software compatibility guide. There are enough variables that a static software compatibility guide would be tedious to search and maintain. So instead of being a document, it's now a database, and you get to access it and generate custom compatibility information for exactly what you need.


If you’ve not used the interactive compatibility guide before, you’ll find it at:

My recommendation? Bookmark it. Make sure it’s part of your essential NetWorker toolkit. (And for that matter: Data Domain OS, Boost Plugins, ProtectPoint, Avamar, etc.)

When you first hit the compatibility landing page, you'll note a panel on the left-hand side from which you can choose the product. In this case, when you expand NetWorker, you'll get a list of versions since the interactive guide was introduced:

Part 01

Once you’ve picked the NetWorker version, the central panel will be updated to reflect the options you can check compatibility against:

Part 02

As you can see, there’s some ‘Instant’ reports for specific quick segments of information; beyond that, there’s the ability to create a specific drill-down report using the ‘Custom Reports’ option. For instance, say you’re thinking of deploying a new NetWorker server on Linux and you want to know what versions of Linux you can deploy on. To do that, under ‘NetWorker Component’ you’d select ‘Server OS’, then get a sub-prompt for broad OS type, then optionally drill down further. In this case:

Part 03

Here I’ve selected Server OS > Linux > All to get information about all compatible versions of Linux for running a NetWorker 9.1 server on. After you’ve made your selections, all you have to do is click ‘Generate Report’ to actually create a compatibility report. The report itself will look something like the following:

Part 04

Any area in the report that's underlined is a hover prompt: hovering the mouse cursor over it will pop up the additional clarifying information referenced. Also note the "Print/Save Results" option – if, say, as part of a change request you need to submit documentary evidence, you can generate yourself a PDF that covers exactly what you need.

If you have multiple, different compatibility reports to generate, you may need to click the 'Reset' button to blank out all options. (This will avoid a situation where you end up, say, trying to find out what versions of Exchange on Linux are supported!)

As far as the instant reports are concerned – these are about quickly generating information you want straight away – you click on the option in the Instant Reports panel, and you don't even need to click 'Generate Report'. For instance, the NVP option:

Part 05

Part 06

That's really all there is to the interactive compatibility guide – it's straightforward, and it's a really essential tool in the arsenal of a NetWorker or Dell EMC Data Protection Suite user.

Oh, there's one more thing – there are other compatibility guides, of course: older NetWorker and Avamar software guides, NetWorker hardware guides, etc. You can get access to the legacy and traditional hardware compatibility guides via the right-hand area of the guide page:

Part 07

There you have it. If you need to check NetWorker or DPS compatibility, make the Interactive Compatibility Guide your first port of call.

Hey, don’t forget my book is available now in paperback and Kindle formats!

NetWorker 9.1 FLR Web Interface

Apr 04 2017

Hey, don’t forget, my new book is available. Jam packed with information about protecting across all types of RPOs and RTOs, as well as helping out on the procedural and governance side of things. Check it out today on Amazon! (Kindle version available, too.)

In my introductory NetWorker 9.1 post, I covered file level recovery (FLR) from VMware image level backup via NMC. I felt at the time that it was worthwhile covering FLR from within NMC as the VMware recovery integration in NMC was new with 9.1. But at the same time, the FLR Web interface for NetWorker has also had a revamp, and I want to quickly run through that now.

First, the most important aspect of FLR from the new NetWorker Virtual Proxy (NVP, aka “vProxy”) is not something you do by browsing to the Proxy itself. In this updated NetWorker architecture, the proxies are very much dumb appliances, completely disposable, with all the management intelligence coming from the NetWorker server itself.

Thus, to start a web based FLR session, you actually point your browser to:


The FLR web service now runs on the NetWorker server itself. (In this sense quite similarly to the FLR service for Hyper-V.)

The next major change is that you no longer have to use the FLR interface from a system currently receiving image based backups. In fact, in the example I'm providing today, I'm doing it from a laptop that isn't even a member of the NetWorker datazone.

When you get to the service, you’ll be prompted to login:

01 Initial Login

For my test, I wanted to access via the Administration interface, so I switched to ‘Admin’ and logged on as the NetWorker owner:

02 Logging In as Administrator

After you login, you’re prompted to choose the vCenter environment you want to restore from:

03 Select vCenter

Selecting the vCenter server of course lets you then choose the protected virtual machine in that environment to be recovered:

04 Select VM and Backup

(Science fiction fans will perhaps be able to intuit my host naming convention for production systems in my home lab based on the first three virtual machine names.)

Once you've selected the virtual machine you want to recover from, you then get to choose the backup you want to recover – you'll get a list of backups, and clones too if you're cloning. In the above example I've got no clones of the specific virtual machine that's been protected. Clicking 'Next' after you've selected the virtual machine and the specific backup will result in you being prompted to provide access credentials for the virtual machine, so that the FLR agent can mount the backup:

05 Provide Credentials for VM

Once you provide the login credentials (and they don’t have to be local – they can be an AD specified login by using the domain\account syntax), the backup will be mounted, then you’ll be prompted to select where you want to recover to:

06 Select Recovery Location

In this case I selected the same host, recovering back to C:\tmp.

Next you obviously need to select the file(s) and folder(s) you want to recover. In this case I just selected a single file:

07 Select Content to Recover

Once you’ve selected the file(s) and folder(s) you want to recover, click the Restore button to start the recovery. You’ll be prompted to confirm:

08 Confirm Recovery

The restore monitor is accessible at the bottom of the FLR interface – basically an upward-pointing arrow-head you click to expand. This gives you a view of a running, or in this case, a completed restore – since it was only a single file, it took very little time to complete:

09 Recovery Success

My advice generally is that if you want to recover thousands or tens of thousands of files, you’re better off using the NMC interface (particularly if the NetWorker server doesn’t have a lot of RAM allocated to it), but for smaller collections of files the FLR web interface is more than acceptable.

And Flash-free, of course.

There you have it, the NetWorker 9.1 VMware FLR interface.

Hey, don’t forget, my new book is available. Jam-packed with information about protecting across all types of RPOs and RTOs, as well as helping out on the procedural and governance side of things. Check it out today on Amazon! (Kindle version available, too.)


What to do on world backup day

Mar 30 2017

World backup day is approaching. (A few years ago now, someone came up with the idea of designating one day of the year to recognise backups.) Funnily enough, I’m not a fan of world backup day, simply because we don’t backup for the sake of backing up, we backup to recover.

Every day should, in fact, be world backup day.

Something that isn’t done enough – isn’t celebrated enough, isn’t tested enough – is recoveries. For many organisations, recovery tests consist of actually doing a recovery when requested, and things like long term retention backups are never tested, and even more rarely recovered from.


So this Friday, March 31, I’d like to suggest you don’t treat as World Backup Day, but World Recovery Test Day. Use the opportunity to run a recovery test within your organisation (following proper processes, of course!) – preferably a recovery that you don’t normally run in terms of day to day operations. People only request file recoveries? Sounds like a good reason to run an Exchange, SQL or Oracle recovery to me. Most recoveries are Exchange mail level recoveries? Excellent, you know they work, let’s run a recovery of a complete filesystem somewhere.

All your recoveries are done within a 30 day period of the backup being taken? That sounds like an excellent reason to do a recovery from an LTR backup written 2+ years ago, too.

Part of running a data protection environment is having routine tests to validate ongoing successful operations, and be able to confidently report back to the business that everything is OK. There’s another, personal and selfish aspect to it, too. It’s one I learnt more than a decade ago when I was still an on-call system administrator: having well-tested recoveries means that you can sleep easily at night, knowing that if the pager or mobile phone does shriek you into blurry-eyed wakefulness at 1am, you can in fact log onto the required server and run the recovery without an issue.

So this World Backup Day, do a recovery test.

The need to have an efficient and effective testing system is something I cover in more detail in Data Protection: Ensuring Data Availability. If you want to know more, feel free to check out the book on Amazon or CRC Press. Remember that it doesn’t matter how good the technology you deploy is if you don’t have the processes and training to use it.

Mar 27 2017

I’d like to take a little while to talk to you about licensing. I know it’s not normally considered an exciting subject (usually at best people think of it as a necessary-evil subject), but I think it’s common to see businesses not take full advantage of the potential data protection licensing available to them from Dell EMC. Put it this way: I think if you take the time to read this post about licensing, you’ll come away with some thoughts on how you might be able to expand a backup system to a full data protection system just thanks to some very handy licensing options available.

When I first started using NetWorker, the only licensing model was what I’d refer to as feature based licensing. If you wanted to do X, you bought a license that specifically enabled NetWorker to do X. The sorts of licenses you would use included:

  • NetWorker Base Enabler – To enable the actual base server itself
  • OS enablers – Called “ClientPack” enablers, these would let you backup operating systems other than the operating system of the NetWorker server itself (ClientPack for Windows, ClientPack for Unix, ClientPack for Linux, etc).
  • Client Count enablers – Increasing the number of clients you can backup
  • Module enablers – Allowing you to say, backup Oracle, or SQL, or Exchange, etc.
  • Autochanger enablers – Allowing you to connect autochangers of a particular slot count (long term NetWorker users will remember short-slotting too…)

That’s a small excerpt of the types of licences you might have deployed. Over time, some licenses got simplified or even removed – the requirement for ClientPack enablers, for instance, was dropped quite some time ago, and the database licenses were simplified by being condensed into licenses for Microsoft databases (NMM) and licenses for databases and applications (NMDA).

Feature based licensing is, well, confusing. I’d go so far as to suggest it’s anachronistic. As a long-term NetWorker user, I occasionally get asked what a feature based licensing set might look like, or what might be required to achieve X, and even for me, having dealt with feature based licenses for 20 years, it’s not fun.


The problem – and it’s actually a serious one – with feature based licensing is you typically remain locked, for whatever your minimum budget cycle is, into what your backup functionality is. Every new database, set of clients, backup device or special requirement has to be planned well in advance to make sure you have the licenses you need. How often is that really the case? I’m into my 21st year of working with backup and I still regularly hear stories of new systems or projects coming on-line without full consideration of the data protection requirements.

In this modern age of datacentre infrastructure where the absolute requirement is agility, using feature-based licensing is like trying to run on a treadmill that’s submerged waist-deep in golden syrup.

There was, actually, one other type of NetWorker licensing back then – in the ‘old days’, I guess I can say: an Enterprise license. That enabled everything in one go, but required yearly audits to ascertain usage and appropriate maintenance costs, etc. It enabled convenient use but from a price perspective it only suited upper-echelon businesses.

Over time to assist with providing licensing agility, NetWorker got a second license type – capacity licensing. This borrowed the “unlimited features” aspect of enterprise-based licensing, and worked on the basis of what we refer to as FETB – Front End TB. The simple summary of FETB is “if you did a full backup of everything you’re protecting, how big would it be?” (In fact, various white-space components are typically stripped out – a 100 GB virtual machine for instance that’s thickly provisioned but only using 25GB would effectively be considered to contribute just 25 GB to the capacity.)

The beauty of the capacity license scheme is that it doesn’t matter how many copies you generate of your data. (An imaginary BETB (“Back End TB”) license would be unpleasant in the extreme – limiting you to the total stored capacity of your backups.) So that FETB license applies regardless of whether you just keep all your backups for 30 days, or whether you keep all your backups for 7 years. (If you keep all your backups for 7 years, read this.)

A FETB license lets you adjust your backup functionality as the business changes around you. Someone deploys Oracle but you’ve only had to backup SQL Server before? Easy, just install NMDA and start backing up Oracle. The business makes the strategic decision to switch from Hyper-V to VMware? No problem – there’s nothing to change from a licensing perspective.
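As a purely illustrative sketch (with hypothetical numbers of my own – not how Dell EMC actually meters the license), the FETB idea can be expressed in a few lines of Python. The key point: only used front-end capacity counts, and copies and retention don’t enter into it.

```python
# Hypothetical illustration of FETB (Front End TB) counting: the license
# is sized on the *used* capacity of what you protect, not on how many
# copies you keep or how long you retain them.

def fetb_tb(clients):
    """Sum the used capacity (TB) of each protected system."""
    return sum(used for provisioned, used in clients)

# (provisioned TB, used TB) per system - e.g. a thickly provisioned
# 100 GB VM using only 25 GB contributes just 0.025 TB.
clients = [(0.1, 0.025), (2.0, 1.5), (10.0, 7.0)]

licensed = fetb_tb(clients)  # 8.525 TB - the same whether backups are
                             # kept for 30 days or for 7 years
```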

But, as I say in my book, backup and recovery, as a standalone topic is dead. That’s why Dell EMC has licensing around Data Protection Suite. In fact, there’s a few different options to suit different tiers of organisations. If you’ve not heard of Data Protection Suite licensing, you’ve quite possibly been missing out on a wealth of opportunities for your organisation.

Let’s start with the first variant that was introduced, Data Protection Suite for Backup. (In fact, it was originally just Data Protection Suite.) DPS for Backup has been expanded as other products have been released, and now includes:

DPS for Backup

Think about that – from a single wrapper license (DPS for Backup), you get access to 6 products. Remember before when I said the advantage of NetWorker capacity licensing over ‘feature’ licensing was the ability to adapt to changes in the business requirements for backup? This sort of license expands on that ability even more so. You might start today using NetWorker to protect your environment, but in a year’s time your business needs to setup some remote offices that are best served by Avamar. With DPS for Backup, you don’t need to go and buy Avamar licenses, you just deploy Avamar. Equally, the strategic decision might be made to give DBAs full control over their backup processes, so it makes sense to give them access to shared protection storage via Data Domain Boost for Enterprise Applications (DDBEA), instead of needing to be configured for manual backups in NetWorker. The business could decide to start pushing some long term backups from NetWorker out to Cloud object storage – that’s easy, just deploy a CloudBoost virtual machine because you can. You can mix and match your licenses as you need. Just as importantly, you can deploy Data Protection Advisor at the business layer to provide centralised reporting and monitoring across the entire gamut, and you can take advantage of Data Protection Search to easily find content regardless of whether it was NetWorker or Avamar that protected it.

Data Protection Suite for Backup is licensed – like the NetWorker Capacity model – via FETB. So if you license for say, 500 TB, you can slice and dice that however you need between NetWorker, Avamar and DDBEA, and get CloudBoost, DPA and DP-Search rolled in. Suddenly your backup solution is a much broader data protection solution, just thanks to a license model!

If you’re not an existing NetWorker or Avamar site, but you’re looking for some increased efficiencies in your application backups/backup storage, or a reduction in the capacity licensing for another product, you might instead be interested in DPS for Applications:

DPS for Applications

Like DPS for Backup, DPS for Applications is a FETB capacity license. You get to deploy Boost for Enterprise Apps and/or ProtectPoint to suit your requirements, you get Data Protection Advisor to report on your protection status, and you also get the option to deploy Enterprise Copy Data Management (eCDM). That lets you set policies on application protection – e.g., “There must always be 15 copies of this database”. The application administration team can remain in charge of backups, but to assuage business requirements, policies can be established to ensure systems are still adequately protected. And ProtectPoint: whoa, we’re talking serious speed there. Imagine backing up a 10TB or 50TB database, not 20% faster, but 20 times faster. That’s ProtectPoint – Storage Integrated Data Protection.

Let’s say you’re an ultra-virtualised business. There are few, if any, physical systems left, and you don’t want to think of your data protection licensing in terms of FETB, which might be quite variable – instead, you want to look at a socket based licensing count. If that’s the case, you probably want to look at Data Protection Suite for Virtual Machines:

DPS for Virtual Machines

DPS for Virtual Machines is targeted for the small to medium end of town to meet their data protection requirements in a richly functional way. On a per socket (not per-core) license model, you get to protect your virtual infrastructure (and, if you need to, a few physical servers) with Avamar, using image based and agent-based backups in whatever mix is required. You also get RecoverPoint for Virtual Machines. RecoverPoint gives you DVR-like Continuous Data Protection that’s completely storage independent, since it operates at the hypervisor layer. Via an advanced journalling system, you get to deliver very tight SLAs back to the business with RTOs and RPOs in the seconds or minutes, something that’s almost impossible with just standard backup. (You can literally choose to roll back virtual machines on an IO-by-IO basis. Or spin up testing/DR copies using the same criteria.) You also get DPA and DP-Search, too.

There’s a Data Protection Suite for archive bundle as well if your requirements are purely archiving based. I’m going to skip that for the moment so I can talk about the final licensing bundle that gives you unparalleled flexibility for establishing a full data protection strategy for your business; that’s Data Protection Suite for Enterprise:

DPS for Enterprise

Data Protection Suite for Enterprise returns to the FETB model but it gives you ultimate flexibility. On top of it all you again get Data Protection Advisor and Data Protection Search, but then you get a raft of data protection and archive functionality, all again in a single bundled consumption model: NetWorker, Avamar, DDBEA, CloudBoost, RecoverPoint for Virtual Machines, ProtectPoint, AppSync, eCDM, and all the flavours of SourceOne. In terms of flexibility, you couldn’t ask for more.

It’s easy when we work in backup to think only in terms of the main backup product we’re using, but there are two things that have become urgently apparent:

  • It’s no longer just about backup – To stay relevant, and to deliver value and results back to the business, we need to be thinking about data protection strategies rather than backup and recovery strategies. (If you want proof of that change from my perspective, think of my first book title vs the second – the first was “Enterprise Systems Backup and Recovery”, the second, “Data Protection”.)
  • We need to be more agile than “next budget cycle” – Saying you can’t do anything to protect a newly emerged or altering workload until you get budget next year to do it is just a recipe for disaster. We need, as data protection professionals, to be able to pick the appropriate tool for each workload and get it operational now, not next month or next year.

Licensing: it may at the outset appear to be a boring topic, but I think it’s actually pretty damn exciting in terms of what a flexible licensing policy like the Data Protection Suite allows you to offer back to your business. I hope you do too, now.

Hey, you’ve made it this far, thanks! I’d love it if you bought my book, too! (In Kindle format as well as paperback.)


Mar 22 2017

It’s fair to say I’m a big fan of Queen. They shaped my life – the only band to have even a remotely similar effect on me was ELO. (Yes, I’m an Electric Light Orchestra fan. Seriously, if you haven’t listened to the Eldorado or Time operatic albums in the dark you haven’t lived.)

Queen taught me a lot: the emotional perils of travelling at near-relativistic speeds and returning home, that maybe immortality isn’t what fantasy makes it seem like, and, amongst a great many other things, that you need to take a big leap from time to time to avoid getting stuck in a rut.

But you can find more prosaic meanings in Queen, too, if you want to. One of them deals with long term retention. We get that lesson from one of the choruses for Too much love will kill you:

Too much love will kill you,

Just as sure as none at all

Hang on, you may be asking, what’s that got to do with long term retention?

Replace ‘love’ with ‘data’ and you’ve got it.


I’m a fan of the saying:

It’s always better to backup a bit too much than not quite enough.

In fact, it’s something I mention again in my book, Data Protection: Ensuring Data Availability. Perhaps more than once. (I’ve mentioned my book before, right? If you like my blog or want to know more about data protection, you should buy the book. I highly recommend it…)

That’s something that works quite succinctly for what I’d call operational backups: your short term retention policies. They’re going to be the backups where you’re keeping, say, weekly fulls and daily incrementals for (typically) between 4 and 6 weeks for most businesses. For those sorts of backups, you definitely want to err on the side of caution when choosing what to backup.

Now, that’s not to say you don’t err on the side of caution when you’re thinking about long term retention, but caution definitely becomes a double-edged sword: the caution of making sure you’re backing up what you are required to, but also the caution of making sure you’re not wasting money.

Let’s start with a simpler example: do you backup your non-production systems? For a lot of environments, the answer is ‘yes’ (and that’s good). So if the answer is ‘yes’, let me ask the follow-up: do you apply the same retention policies for your non-production backups as you do for your production backups? And if the answer to that is ‘yes’, then my final question is this: why? Specifically, are you doing it because it’s (a) habit, (b) what you inherited, or (c) because there’s a mandated and sensible reason for doing so? My guess is that in 90% of scenarios, the answer is (a) or (b), not (c). That’s OK, you’re in the same boat as the rest of the industry.

Let’s say you have 10TB of production data, and 5TB of non-production data. Not worrying about deduplication for the moment, if you’re doing weekly fulls and daily incrementals, with a 3.5% daily change rate (because I want to hurt my brain with mathematics tonight – trust me, I still count on my fingers, and 3.5 on your fingers is hard) and a 5 week retention period, then you’re generating:

  • 5 x (10+5) TB in full backups
  • 30 x ((10+5) x 0.035) TB in incremental backups

That’s 75 TB (full) + 15.75 TB (incr) of backups generated for 15TB of data over a 5 week period. Yes, we’ll use deduplication because it’s so popular with NetWorker and shrink that number quite nicely thank-you, but 90.75 TB of logical backups over 5 weeks for 15TB of data is the number we end up with.

But do you really need to generate that many backups? Do you really need to keep five weeks worth of non-production backups? What if instead you’re generating:

  • 5 x 10 TB in full production backups
  • 2 x 5 TB in full non-prod backups
  • 30 x 10 x 0.035 TB in incremental production backups
  • 12 x 5 x 0.035 TB in incremental non-prod backups

That becomes 50TB (full prod) + 10 TB (full non-prod) + 10.5 TB (incr prod) + 2.1 TB (incr non-prod) over any 5 week period, or 72.6 TB instead of 90.75 TB – a saving of 20%.
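If you’d like to check my arithmetic rather than trust my fingers, the same sums can be sketched in a few lines of Python (assuming six incrementals per weekly full):

```python
# Weekly fulls plus daily incrementals at a 3.5% change rate, over a
# given retention window measured in weeks.

def operational_tb(full_tb, weeks, change_rate=0.035):
    fulls = weeks * full_tb                    # one full per week retained
    incrs = weeks * 6 * full_tb * change_rate  # six incrementals per week
    return fulls + incrs

# Scenario 1: prod + non-prod (15 TB), both kept for 5 weeks
scenario_1 = operational_tb(15, 5)                         # ~90.75 TB

# Scenario 2: prod (10 TB) kept 5 weeks, non-prod (5 TB) kept 2 weeks
scenario_2 = operational_tb(10, 5) + operational_tb(5, 2)  # ~72.6 TB

saving = 1 - scenario_2 / scenario_1                       # ~0.2, i.e. 20%
```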

(If you’re still pushing your short-term operational backups to tape, your skin is probably crawling at the above suggestion: “I’ll need more tape drives!” Well, yes you would, because tape is inflexible. So using backup to disk means you can start saving on media, because you don’t need to make sure you have enough tape drives for every potential pool that would be written to at any given time.)

A 20% saving on operational backups for 15TB of data might not sound like a lot, but now let’s start thinking about long term retention (LTR).

There are two particular ways we see long term retention data handled: monthlies kept for the entire LTR period, or keeping monthlies for 12-13 months and just keeping end-of-calendar-year (EoCY) + end-of-financial-year (EoFY) for the LTR period. I’d suggest that the knee-jerk reaction by many businesses is to keep monthlies for the entire time. That doesn’t necessarily have to be the case though – and this is the sort of thing that should also be investigated: do you legally need to keep all your monthly backups for your LTR, or do you just need to keep those EoCY and EoFY backups for that period? That alone might be a huge saving.

Let’s assume though that you’re keeping those monthly backups for your entire LTR period. We’ll assume you’re also not in engineering, where you need to keep records for the lifetime of the product, or biosciences, where you need to keep records for the lifetime of the patient (and longer), and just stick with the tried-and-trusted 7 year retention period seen almost everywhere.

For LTR, we also have to consider yearly growth. I’m going to cheat and assume 10% year on year growth, but the growth only kicks in once a year. (In reality for many businesses it’s more like a true compound annual growth, amortised monthly, which does change things around a bit.)

So let’s go back to those numbers. We’ve already established what we need for operational backups, but what do we need for LTR?

If we’re not differentiating between prod and non-prod (and believe me, that’s common for LTR), then our numbers look like this:

  • Year 1: 12 x 15 TB
  • Year 2: 12 x 16.5 TB
  • Year 3: 12 x 18.15 TB
  • Year 4: 12 x 19.965 TB
  • Year 5: 12 x 21.9615 TB
  • Year 6: 12 x 24.15765 TB
  • Year 7: 12 x 26.573415 TB

Total? 1,707.69 TB of LTR for a 7 year period. (And even as data ages out, that will still grow as the YoY growth continues.)

But again, do you need to keep non-prod backups for LTR? What if we didn’t – what would those numbers look like?

  • Year 1: 12 x 10 TB
  • Year 2: 12 x 11 TB
  • Year 3: 12 x 12.1 TB
  • Year 4: 12 x 13.31 TB
  • Year 5: 12 x 14.641 TB
  • Year 6: 12 x 16.1051 TB
  • Year 7: 12 x 17.71561 TB

That comes down to just 1,138 TB over 7 years – a 33% saving in LTR storage.

We got that saving just by looking at splitting off non-production data from production data for our retention policies. What if we were to do more? Do you really need to keep all of your production data for an entire 7-year LTR period? If we’re talking a typical organisation looking at 7 year retention periods, we’re usually only talking about critical systems that face compliance requirements – maybe some financial databases, one section of a fileserver, and email. What if that was just 1 TB of the production data? (I’d suggest that for many companies, a guesstimate of 10% of production data being the data required – legally required – for compliance retention is pretty accurate.)

Well then your LTR data requirements would be just 113.85 TB over 7 years, and that’s a saving of 93% of LTR storage requirements (pre-deduplication) over a 7 year period for an initial 15 TB of data.
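For those playing along at home, here’s the LTR arithmetic as a small Python sketch – 12 monthly fulls a year, with 10% growth applied once per year over the 7 year retention period:

```python
# Total logical LTR capacity: 12 monthly fulls per year, with the base
# dataset growing 10% year on year (growth applied once per year).

def ltr_total_tb(base_tb, years=7, growth=0.10):
    return sum(12 * base_tb * (1 + growth) ** year for year in range(years))

everything = ltr_total_tb(15)   # prod + non-prod: ~1,707.69 TB
prod_only = ltr_total_tb(10)    # prod only:       ~1,138.46 TB
compliance = ltr_total_tb(1)    # compliance data: ~113.85 TB

# Savings relative to keeping everything: ~33% and ~93% respectively
saving_prod = 1 - prod_only / everything
saving_compliance = 1 - compliance / everything
```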

I’m all for backing up a little bit too much than not enough, but once we start looking at LTR, we have to take that adage with a grain of salt. (I’ll suggest that in my experience, it’s something that locks a lot of companies into using tape for LTR.)

Too much data will kill you,

Just as sure as none at all

That’s the lesson we get from Queen for LTR.

…Now if you’ll excuse me, now I’ve talked a bit about Queen, I need to go and listen to their greatest song of all time, The March of the Black Queen.