Basics: Planning A Recovery Service

Jan 30, 2018

Introduction

In Data Protection: Ensuring Data Availability, I talk quite a lot about what you need to understand and plan as part of a data protection environment. I’m often reminded of the old saying from clothing and carpentry – “measure twice, cut once”. The lesson in that statement, of course, is that rushing headlong into something tends to make your work more problematic, while taking the time to plan properly can, in a lot of instances (and data protection is one such instance), make the entire process easier. This post isn’t meant to be a replacement for the various planning chapters in my book, but I’m sure it’ll have some useful tips regardless.

We don’t back up just as something to do; in fact, we don’t protect data just as something to do, either. We protect data both to shield our applications and services (and therefore our businesses) from failures, and to ensure we can recover it if necessary. So with that in mind, what are some essential activities in planning a recovery service?

First: Do you know what the data is?

Data classification isn’t something that happens automatically as part of a data protection cycle. Maybe one day it will be, when AI and machine learning are sufficiently advanced; in the interim, though, it requires input from people – IT, the business, and so on. Of course, there’s nothing physically preventing you from planning and implementing a recovery service without performing data classification; I’d go so far as to suggest that an easy majority of businesses do exactly that. That doesn’t mean it’s an ideal approach, though.

Data classification is all about understanding the purpose of the data, who cares about it, how it is used, and so on. It’s a collection of seemingly innocuous yet actually highly important questions. It’s something I cover quite a bit in my book, for the very good reason that I honestly believe a recovery service can be made simpler, cheaper and more efficient if it’s complemented by a data classification process within the organisation.
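To make that concrete, here’s a minimal sketch of the sort of record a classification exercise might produce for each dataset. The field names and the example entry are purely illustrative assumptions for this post, not a schema from the book.

    from dataclasses import dataclass, field

    @dataclass
    class DataClassification:
        """Illustrative record of the questions data classification answers."""
        dataset: str                  # e.g. "CRM database", "HR file share"
        owner: str                    # who cares about it and can answer questions about it
        purpose: str                  # why the data exists
        used_by: list = field(default_factory=list)  # who consumes it, and how
        criticality: str = "unknown"  # e.g. "tier 1" through "tier 4"
        contains_pii: bool = False    # drives compliance and retention decisions

    # Example entry, populated with input from the business rather than IT alone:
    crm = DataClassification(
        dataset="CRM database",
        owner="Sales operations",
        purpose="Customer records and sales pipeline",
        used_by=["Sales", "Marketing", "Finance reporting"],
        criticality="tier 1",
        contains_pii=True,
    )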

Second: Does the data need to exist?

That’s right – does it need to exist? This is another essential but oft-overlooked part of achieving a cheaper, simpler and more efficient recovery service: data lifecycle management. Every 1TB you can eliminate from your primary storage systems is, for the average business at least, going to yield anywhere between 10TB and 30TB of savings in protection storage (RAID, replication, snapshots, backup and recovery, long term retention, etc.). For some businesses that number may be smaller, but for the majority of mid-sized and larger businesses the saving is likely to go much, much higher – particularly as the criticality of the data increases.
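As a rough back-of-the-envelope illustration of where that multiplier comes from, consider the sketch below. The individual factors are assumptions I’ve picked for the example; only the combined 10-30x order of magnitude reflects the point being made above.

    # Rough back-of-the-envelope: how 1TB of primary data multiplies into
    # protection storage. The individual multipliers are illustrative
    # assumptions; only the combined 10-30x order of magnitude matters here.
    primary_tb = 1.0

    copies = {
        "RAID overhead":             0.5,   # parity/mirroring on primary storage
        "Array replication":         1.5,   # replica plus its own RAID overhead
        "Snapshots":                 0.5,   # accumulated snapshot deltas
        "Backup (short retention)":  3.0,   # daily/weekly copies on protection storage
        "Backup replica":            3.0,   # replicated copy of those backups
        "Long term retention":       4.0,   # monthly/yearly compliance copies
    }

    total_tb = primary_tb + sum(primary_tb * factor for factor in copies.values())
    print(f"1TB of primary data -> ~{total_tb:.1f}TB of total footprint")
    # Delete 1TB you didn't need, and that entire downstream chain goes with it.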

Without a data lifecycle policy, bad things happen over time:

  • Keeping data becomes habitual rather than based on actual need
  • As ‘owners’ of data disappear (e.g., change roles, leave the company, etc.), reluctance to delete, prune or manage the data tends to increase
  • Apathy or intransigence towards developing a data lifecycle programme increases.

Businesses that avoid data classification and data lifecycle condemn themselves to the torment of Sisyphus – constantly trying to roll a boulder up a hill only to have it fall back down again before they get to the top. This manifests in many ways, of course, but in designing, acquiring and managing a data recovery service it usually hits the hardest.

Third: Does the data need to be protected?

I remain a firm believer that it’s always better to back up too much data than not enough. But that’s a default, catch-all position rather than something that should be the blanket rule within the business. Data classification and data lifecycle will help you determine whether you need to enact specific (or any) data protection models for a dataset. It may be test database instances that can be recovered at any point from production systems; it might be randomly generated data that has no meaning outside of a very specific use case; or it might be transient data merely flowing from one location to another that does not need to be captured and stored.

Remember the lesson from data lifecycle – every 1TB eliminated from primary storage can eliminate 10-30TB of data from protection storage. The next logical step after that is to be able to accurately answer the question, “do we even need to protect this?”
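As a simple illustration, the examples above could be captured as a basic policy check. The attribute names and the logic below are assumptions made up for the sake of the sketch; your own classification process would define its own criteria.

    def needs_backup(dataset: dict) -> bool:
        """Illustrative policy check based on the examples in this section."""
        # Test instances that can be recovered at any point from production
        if dataset.get("rebuildable_from_production"):
            return False
        # Randomly generated data with no meaning outside a specific use case
        if dataset.get("regenerable_on_demand"):
            return False
        # Transient data merely flowing between systems, never the system of record
        if dataset.get("transient") and not dataset.get("system_of_record"):
            return False
        # Default, catch-all position: protect it
        return True

    print(needs_backup({"rebuildable_from_production": True}))  # False
    print(needs_backup({"transient": True}))                    # False
    print(needs_backup({"system_of_record": True}))             # True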

Fourth: What recovery models are required?

At this point, we’ve not talked about technology. This question gets us a little closer to working out what sort of technology we need, because once we have a fair understanding of the data we need to offer recovery services for, we can start thinking about what types of recovery models will be required.

This will essentially involve determining how recoveries are done for the data, such as:

  • Full or image level recoveries?
  • Granular recoveries?
  • Point in time recoveries?

Some data may not need every type of recovery model deployed for it. For some data, granular recoverability is just as important as complete recoverability; for other types of data, the only sensible way to recover it may be image/full, where granular recoveries would simply leave the data corrupted or useless. Does all data require point in time recovery? Much will, but some may not.

The other aspect of recovery models to consider, of course, is how much users will be involved in recoveries. Self-service for admins? Self-service for end-users? All operator-run? Chances are it’ll be a mix, depending on those previous recovery model questions (e.g., you might allow self-service individual email recovery, but a full Exchange recovery is not going to be an end-user initiated task).
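One way to capture those decisions is a simple per-dataset worksheet, sketched below. The datasets, models and initiators shown are illustrative examples only, not recommendations.

    # Illustrative worksheet: which recovery models and initiators each dataset
    # needs. The entries are made-up examples.
    recovery_models = {
        "Exchange mailboxes": {
            "image_level": True,          # full server/database recovery
            "granular": True,             # individual mailbox or message recovery
            "point_in_time": True,
            "self_service": "end-user",   # e.g. single email restores
            "operator_only": ["full Exchange recovery"],
        },
        "Rendered media output": {
            "image_level": True,          # only meaningful as a complete set
            "granular": False,            # partial restores would be useless
            "point_in_time": False,       # only the latest copy matters
            "self_service": None,
            "operator_only": ["full restore"],
        },
    }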

Fifth: What SLOs/SLAs are required?

Regardless of whether your business has Service Level Objectives (SLOs) or Service Level Agreements (SLAs), there’s the potential you’ll have to meet a variety of them depending on the nature of the failure, the criticality and age of the data, and so on. (For the rest of this section, I’ll use ‘SLA’ as a generic term for both SLA and SLO.) In fact, there’ll be up to three different categories of SLAs you have to meet:

  • Online: These types of SLAs are for immediate or near-immediate recoverability from failure; they’re meant to keep the data online rather than having to seek to retrieve it from a copy. This will cover options such as continuous replication (e.g., fully mirrored storage arrays), continuous data protection (CDP), as well as more regular replication and snapshot options.
  • Nearline: This is where backup and recovery, archive, and long term retention (e.g., compliance retention of backups/archives) come into play. Systems in this area are designed to retrieve the data from a copy (or, in the case of archive, a tiered, alternate platform) when required, as opposed to ensuring the original copy remains continuously, or near continuously, available.
  • Disaster: These are your “the chips are down” SLAs, which’ll fall into business continuity and/or isolated recovery. Particularly in the event of business continuity, they may overlap with either online or nearline SLAs – but they can also diverge quite a lot. (For instance, in a business continuity situation, data and systems for ‘tier 3’ and ‘tier 4’ services, which may otherwise require a particular level of online or nearline recoverability during normal operations, might be disregarded entirely until full service levels are restored.)

Not all data will require all three of the above. Even where it does, unless you’re in a converged or hyperconverged environment it’s quite possible that, as a backup administrator, you only need to consider some of the above, with the other aspects being handled by storage teams, etc.
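To summarise the three categories, here’s an illustrative mapping. The example RPO/RTO figures are placeholders rather than recommendations; you’d substitute whatever your business actually agrees to.

    # Illustrative mapping of the three SLA categories to the techniques named
    # above. The example RPO/RTO figures are placeholders, not recommendations.
    sla_categories = {
        "online": {
            "intent": "keep the data available rather than retrieving a copy",
            "techniques": ["continuous replication", "CDP", "replication", "snapshots"],
            "example_rpo": "zero to minutes",
            "example_rto": "seconds to minutes",
        },
        "nearline": {
            "intent": "retrieve the data from a copy (or archive tier) when required",
            "techniques": ["backup and recovery", "archive", "long term retention"],
            "example_rpo": "hours (e.g. since the last backup)",
            "example_rto": "minutes to hours",
        },
        "disaster": {
            "intent": "business continuity and/or isolated recovery",
            "techniques": ["DR replication", "isolated recovery copies"],
            "example_rpo": "minutes to hours",
            "example_rto": "hours or more; tier 3/4 services possibly deferred",
        },
    }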

Now you can plan the recovery service (and conclusion)

And because you’ve gathered the answers to the above, planning and implementing the recovery service is now the easy bit! Trust me on this – working out what a recovery service should look like for the business, once you’ve gathered the above information, is a fraction of the effort compared to when you haven’t. Again: “measure twice, cut once”.

If you want more in-depth information on the above, check out chapters in my book such as “Contextualizing Data Protection”, “Data Life Cycle”, “Business Continuity”, and “Data Discovery” – not to mention the specific chapters on protection methods such as backup and recovery, replication, snapshots, continuous data protection, etc.

Hypervisor Direct – Convergent Data Protection

Oct 10, 2017

At VMworld, DellEMC announced a new backup technology for virtual machines called Hypervisor Direct, which represents a paradigm that I’d refer to as “convergent data protection”, since it mixes layers of data protection to deliver optimal results.

First, I want to get this out of the way: hypervisor direct is not a NetWorker plugin, nor an Avamar plugin. Instead, it’s part of the broader Data Protection Suite package (a good reminder that there are great benefits in the DPS licensing model).

As its name suggests, hypervisor direct is about moving hypervisor backups directly onto protection storage without a primary backup package being involved. This fits under the same model available for Boost Plugins for Databases – centralised protection storage with decentralised access allowing subject matter experts (e.g., database and application administrators) to be in control of their backup processes.

Now, VMware backups are great, but there’s a catch. If you integrate with VMware’s snapshot layer, there’s always a risk of virtual machine stun. The ‘stun’ we refer to there happens when the data logged to the snapshot delta files is applied back to the virtual machine once the snapshot is released. (Hint: if someone tries to tell you otherwise, make like Dorothy in Wizard of Oz and look behind the curtain, because there’s no wizard there.) Within NetWorker and Avamar, we reduce the risk of virtual machine stun significantly by doing optimised backups (there’s a toy sketch of both techniques after the list below):

  • Leveraging changed block tracking so that we only need to read the parts of the virtual machine that have changed since the last backup
  • Using source based deduplication to minimise the amount of data that needs to be sent to protection storage
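Here’s that toy sketch of the two optimisations working together. It’s deliberately simplified and self-contained: the dictionaries and hashing below stand in for the real changed block tracking API and the Boost deduplication transport, so treat it as a conceptual model rather than how the products are implemented.

    import hashlib

    def optimised_vm_backup(changed_blocks: dict, dedup_store: dict) -> dict:
        """Toy model of CBT plus source-based deduplication.

        changed_blocks: block offset -> data, as reported by changed block tracking
        dedup_store:    fingerprint -> data already held on protection storage
        Returns the backup 'index' of block offset -> fingerprint.
        """
        index = {}
        for offset, data in changed_blocks.items():
            digest = hashlib.sha256(data).hexdigest()  # fingerprint at the source
            if digest not in dedup_store:              # only send data the target lacks
                dedup_store[digest] = data
            index[offset] = digest                     # otherwise just reference it
        return index

    # Two blocks changed since the last backup; the protection storage has
    # already seen one of them, so only one block actually travels the wire.
    store = {hashlib.sha256(b"A" * 4096).hexdigest(): b"A" * 4096}
    index = optimised_vm_backup({0: b"A" * 4096, 7: b"B" * 4096}, store)
    print(len(store))  # 2 -> only the genuinely new block was added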

Those two techniques combined will give you seamless virtual machine backups in almost all situations – in fact, 90% or more. But, as the old saying goes (I may be making this saying up, bear with me), it’s that last 10% that’ll really hurt you. In fact, there are two scenarios that’ll cause virtual machine stun:

  • Inadequate storage performance
  • High virtual machine change rates

In the case of the first scenario, it’s possible to run virtual machines on storage that doesn’t meet their performance requirements. This is particularly so when people are pointing older or under-spec NAS appliances at their virtual machine farm. Now, that may not have a significant impact on day to day operations (other than a bit of user grumbling), but it will be noticed during the snapshot processes around virtual machine backup. Ideally, we want to avoid the first scenario by always having appropriately performing storage for a virtual infrastructure.

Now, the second scenario is more interesting. That’s the “10% that’ll really hurt you”: a virtualised Oracle or SQL database that’s 5-10TB in size with a 40-50% daily change rate. That size and that change rate will smash you into virtual machine stun territory every time.
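Some quick arithmetic shows why. The backup window below is an assumption I’ve added for the example; only the database size and change rate come from the scenario above.

    # Rough arithmetic for the "10% that'll hurt you" case. The backup window is
    # an assumed figure; the size and change rate come from the scenario above.
    db_size_tb        = 8      # virtualised Oracle/SQL database
    daily_change_rate = 0.45   # ~45% of the database changes each day
    backup_hours      = 4      # assumed time the VMware snapshot stays open

    delta_tb = db_size_tb * daily_change_rate * (backup_hours / 24)
    print(f"~{delta_tb:.2f}TB written to snapshot delta files during the backup")
    # All of that has to be consolidated back into the virtual machine when the
    # snapshot is released, and that consolidation is where the stun happens.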

Traditionally, the way around that has been one (or both) of two data protection strategies:

  • LUN or array based replication, ignoring the virtual machine layer entirely. That’s good for a secondary copy but it’s going to be at best crash consistent. (It’s also going to be entirely storage dependent – locking you into a vendor and making refreshes more expensive/complex – and will lock you out of technology like vVOL and vSAN.)
  • In-guest agents. That’ll give you your backup, but it’ll be at agent-based performance levels, creating additional workload stress on the virtual machine and the ESX environment. And if we’re talking about a multi-TB database with a high change rate – well, that’s not necessarily a good thing to do.

So what’s the way around it? How can you protect those sorts of environments without locking yourself into a storage platform, or preventing yourself from making architectural changes to your overall environment?

You get around it by being a vendor that has a complete continuum of data protection products and creating a convergent data protection solution. That’s what hypervisor direct does.

Hypervisor Direct

Hypervisor direct merges the Boost-direct technology you get in DDBEA and ProtectPoint with RecoverPoint for Virtual Machines (RP4VM). By integrating the backup process via the Continuous Data Protection (CDP) functionality of RP4VM, we don’t need to take snapshots using VMware at all. That’s right – you can’t get virtual machine stun, even in large virtual machines with high IO, because we don’t work at that layer. Instead, leveraging the ESXi write splitter technology in RP4VM’s CDP, the RecoverPoint journal system is used to allow a virtual machine backup to be taken, direct to Data Domain, without impact to the source virtual machine.
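If you’re not familiar with CDP journaling in general, here’s a deliberately naive conceptual sketch of the underlying idea: a write splitter diverts a copy of every write into a journal, from which a consistent image can be materialised for any captured point, without ever snapshotting the source. This is the generic concept only; it is not RP4VM’s actual implementation, and the names below are invented for the sketch.

    journal = []   # (sequence, offset, data) entries captured by the write splitter
    _sequence = 0

    def split_write(offset: int, data: bytes) -> None:
        """Model the journal copy of a split write; production I/O continues untouched."""
        global _sequence
        _sequence += 1
        journal.append((_sequence, offset, data))

    def image_at(point: int) -> dict:
        """Materialise the blocks as they stood at journal point `point`.

        A real system applies the journal on top of a baseline replica; this toy
        keeps every write in the journal, so replay alone is enough.
        """
        image = {}
        for seq, offset, data in journal:
            if seq <= point:
                image[offset] = data
        return image

    split_write(0, b"old")
    bookmark = _sequence       # a consistent point to take the backup from
    split_write(0, b"new")
    print(image_at(bookmark))  # {0: b'old'} -- the image before the later write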

Do you want to know the really cool feature of this? It’s application consistent, too. That 5-10TB Oracle or SQL database with a high change rate I was talking about earlier? Well, your DBA or Application Administrator gets to run their normal Oracle RMAN backup script for a standard backup, and everything is done at the back-end. That’s right, the Oracle backup or SQL backup (or a host of other databases) triggers the appropriate virtual machine copy functions automatically. (And if a particular database isn’t integrated, there’s still filesystem integration hooks to allow a two-step process.)

This isn’t an incremental improvement to backup options; it’s an absolute leapfrog – it’s about enabling efficient, high performance backups in situations where previously there was no actual option available. And it still lets your subject matter experts be involved in the backup process.

If you do have virtual machines that fall into this category, reach out to your local DellEMC DPS team for more details. You can also check out some of the official details here.
