10 from 10: 10 years of blogging, 10 years of epic data protection changes

Some blogging history

On 25 January 2009, I published the first article on what was then the NetWorker Blog: How old is your media?

So that means this blog is now ten years old. In blog years, that makes it fairly seasoned to still be running and getting updates. So I’m thinking (and please indulge me here) that it’s as good an opportunity as any to reflect on some of the things I’ve covered during that time, and the changes we’ve seen in the industry.

[Image: Celebrating 10 years of blogging]

The NetWorker blog started initially as a means of drawing attention to my first book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy, released in October 2008. EBR (as I think of it in shorthand) was a good book focused pretty much 100% on backup and recovery activities, but it came out at an inflexion point in the data protection market – the emergence of deduplication as a rock solid way of altering and improving a data protection environment. (Tape may still be limping along, but for most of its traditional use cases, and in many organisations, it’s in ongoing decline.)

So it’s no surprise that my first blog article back then was about tracking how old your tapes are. (Astute readers will also have noted that almost any tape-based article I wrote with examples in it included the tape 800840D. That tape barcode ended up burnt into my head from a particularly unpleasant recovery situation, in case you were wondering why it got re-used all the time…)

I blogged a lot originally – lots of tidbits I’d accumulated over the years of using NetWorker; by 2009 I’d already been using NetWorker for 13 years. These days, that’s stretched to 23 years, and it’s fair to say that other than a relatively short period at the start of my career, I’ve spent my entire professional life with backup and data protection as a key function of what I do – and I’ve done all of the roles: implementation, architecture, pre-sales, training, operations/administration, and support. To this day I like to think that it’s the “I’ve done all the roles” aspect which gives me insight into the more complex aspects of data protection: the people and the processes. Technology is always just one piece of the puzzle, after all.

So across the years, there are almost 800 blog posts on this site (one of which was guest-written), and over the decade millions of impressions have passed through the blog across its different forms. I’m not reaching around to pat myself on the back, but I’d like to think the blog has been helpful to a few of those visitors. In case you’re wondering, across all that time, my top 5 most-read articles are:

5. Basics – Listing files in a backup
4. Basics – Changing Saveset Browse/Retention times
3. NetWorker 9: The Future of Backup
2. Basics – Stopping and Starting NetWorker from the Windows Command Line
1. Basics – Fixing “NSR Peer Information” Errors

I guess the common thread there is that people search for backup information a lot. In fact, back in the days when search engines would let blogs see the referring search term, I used to build a lot of my content articles around common incoming queries. (Now you only get that information from some search engines, or where someone has used search on the blog itself.)

My writing hasn’t just been limited to blogging, of course. In 2017 I released Data Protection: Ensuring Data Availability, which has a much broader scope than just backup and recovery. To me, we’re seeing the start of a new inflexion point within the industry, but this time it’s around people and roles rather than technology as such: we’re moving away from people being “backup administrators” and seeing the rise of the data protection architect, and that’s effectively who the new book was targeted at, because it covers the full gamut of data storage protection, from fault tolerance to snapshots and replication, backups, archive (in part), long term retention, and even cyber-recovery. Later this year, a book I collaborated on, Protecting Information Assets and IT Infrastructure in the Cloud, will launch too. (As you can well imagine, I go into quite a lot of detail on cloud data protection best practices and common pitfalls in that.)

I make no joke when I say I’m passionate about data protection. When I’m in a “full solution” meeting with customers (that’s where we’ll be talking the entire stack: servers, storage, networking, etc.), more often than not as the data protection person I’ll be the last on the agenda: not because that’s where the company puts me, but because customers love the rest of the stack and see data protection as a necessary evil. My job is simple, and I start every time by saying how excited I am to get up and talk last, because I see it as a mission to make sure I leave the room with my customers excited about data protection. I’ll never agree with anyone that data protection is a necessary evil: I see it as something that provides almost infinite possibility, and I love getting to spend my day convincing people of that.

That’s my job here, too, and just like my day job, it’s a labour of love rather than something you do for pay.

When I first started blogging, I was working for a relatively small systems integrator (having spent the prior 6 years at another relatively small systems integrator). I’ve looked at a bunch of different backup and data protection technologies over the years, but when the opportunity arose for me to work for EMC (now Dell EMC), I jumped at it because my view of where data protection needed to head was aligned with EMC’s. Or, to be more truthful, it was the other way around: I jumped at the opportunity to work for EMC/Dell EMC precisely because the company’s direction in and focus on data protection aligned with what I truly, passionately believed to be (a) the best in the industry and (b) the best for the industry.

The Ten Biggest Changes in Data Protection Over the Past Decade

I don’t want to make this blog article all about me. So since this is about reflections, I want to reflect on what I see as the 10 biggest changes in data protection over the last 10 years. (In no particular order.)

The Decline And Fall of the Linear Serpentine Empire

“Tape is dead, long live tape” goes the old saying; except, of course, tape keeps being edged further and further away from the use cases it previously dominated. Tape dominated backup, but deduplication drastically curtailed its cost effectiveness, particularly when stacked up against everything that disk-based backup solutions provide. Tape dominated archive, but now object storage (on-premises or in a cloud) is shrinking tape’s appeal there. Quantum, previously a tape powerhouse, was delisted this month from the New York Stock Exchange after circling the edges of the event horizon for some time. While I have respect for long term industry contributors such as Quantum, I think their decline is an apt parallel to the decline of tape.

Will tape die next week? No. It’ll continue for years, if not decades to come, but those use cases are continuing to decline, year in year out. We’ll still be talking about it, but guess what? Some people still ask if they can fax something to you, too. It doesn’t mean that the fax has survived.

NetApp Lost (or, The Continuum Won)

Snap and replicate is all you need for data protection – if you want to find yourself in the most complete and difficult-to-untangle vendor lock-in imaginable. Imagine deciding to replace your primary storage and discovering that you’ve got to keep it for 7 years to age out those snapshots (or worse, recover them all and migrate them).

There’s a time and a place for snapshots: you will never hear me argue otherwise. Never. Snapshots are a valid form of data protection that should be part of your holistic protection approach (or, if you like, part of your data protection continuum). The rest of the industry has accepted that there’s more to data protection than snapshots. (Some might argue that the hyperscaling cloud providers prove otherwise: I disagree. They might fundamentally offer snapshots, but increasingly, as businesses move into public cloud, they’re keeping a continuum model. Sensible business and IT practices have picked the continuum approach.)

Deduplication is King, Long Live The King

Contributing significantly to the decline and fall of the linear serpentine empire, deduplication has, as I mentioned earlier, had a radical impact on the data protection industry. (In fact, it’s had a radical impact on the storage industry as well; primary storage had different performance requirements that initially delayed deduplication entering that arena, but as soon as all-flash became king, deduplication followed there as well.)

Deduplication has fundamentally enabled businesses to keep at least their primary operational recovery retention period fully online. This in turn has enabled the growth of secondary use cases for data protection. My first book was called “Enterprise Systems Backup and Recovery: A Corporate Insurance Policy” because (especially when using tape) backup was an insurance policy: something you paid for month in, month out, and hoped you never had to claim against. By reducing the storage footprint while keeping data instantly available, deduplication has enabled significant development of secondary use cases for all that stored data, including easy repopulation, data mining, simplified testing, and a whole bunch of other activities that we often lump these days into “instant access”.

Deduplication isn’t going away any time soon. In fact, deduplication has become so common that all the “hey we’re different!” backup companies out there now run around trying to tell everyone that deduplication is deduplication is deduplication – i.e., a somewhat crazed notion that getting 4:1 deduplication is as good as getting 55:1 deduplication. (Spoiler alert: it’s not.)
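To put some purely illustrative numbers on that (the 500 TB figure below is my own assumption, just to show the scale of the difference), here’s a quick back-of-the-envelope sketch:

# Back-of-the-envelope sketch (illustrative only): how much physical protection
# storage the same logical backup footprint needs at different deduplication ratios.

def physical_capacity_tb(logical_tb: float, dedupe_ratio: float) -> float:
    """Physical storage required for a given logical backup footprint."""
    return logical_tb / dedupe_ratio

logical_backup_tb = 500.0  # hypothetical logical backup data across the retention period

for ratio in (4, 55):
    required = physical_capacity_tb(logical_backup_tb, ratio)
    print(f"{ratio}:1 deduplication -> {required:.1f} TB physical")

# Output:
# 4:1 deduplication -> 125.0 TB physical
# 55:1 deduplication -> 9.1 TB physical

Same data, same retention – but at 4:1 you’re buying (and powering, and cooling) more than thirteen times as much physical capacity as at 55:1.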

Oracle Killed Unix

Back in 2010, Oracle killed the traditional Unix platform when they purchased Sun. Within six months, every one of my NetWorker customers bar one (that one was running NetWorker on Windows) told me they were looking to transition NetWorker to Windows or Linux. Why? Their businesses had made a strategic decision that Sun systems would only be kept if there was a compelling business case that an application or function could not be ported. It’s what we called the ULNAY scenario: “Uncle Larry Needs Another Yacht”. Oracle grabbed Solaris maintenance prices, rolled them up into a little ball and then threw them as high as they could.

Die-hard AIX and HPUX fans will tell me that Oracle killing Solaris didn’t affect their platforms. I disagree. Solaris, HPUX and AIX outlived all the other Unix platforms: Tru64, Irix, DGUX, MIPS Unix, AT&T Unix, you name them. They all fell by the wayside in one way or another, leaving Solaris, HPUX and AIX collectively leaning against each other for support, sharing the Unix marketplace.

When Solaris fell, AIX and HPUX didn’t have anywhere to go other than their own long, slow topple. Instead, Oracle gifted Linux the *nix side of the infrastructure world. Of course, that made a big impact on data protection. Investment shifted to Windows/Linux, and it’s stayed there in terms of the higher-order functions. (x86 virtualisation, of course, drove traditional Unix further into the woods.)

Cloud Reinvigorated the “Why Do I Need to Backup?” Question

When I first started in backup, I’d get asked at least weekly, “why do I need to backup?” Over time, that died off. Businesses got that they needed to do backup and recovery (for the most part): it became an essential infrastructure activity.

When businesses started moving into public cloud, there were a lot of nasty failures caused by “they take care of backup, don’t they?” To a degree, that story continues to this day. These days, my “why do I need to backup?” questions almost exclusively fall into public cloud scenarios, regardless of whether that’s SaaS, PaaS or IaaS. There are just a lot of assumptions out there about the infrastructure provider automatically taking care of that for you. (Spoiler alert: They don’t.)

Automation, Automation, Automation

Look, I know I keep banging on about this, but I can’t stress enough how big automation is becoming. The datacentre is becoming software defined, and the only way to make that work in a truly efficient way (rather than the “more of the same” approach) is to automate everything in your datacentre. The best data protection deployments I’ve seen in the last decade have been ones where businesses have spent real time and effort making sure that data protection was as automated as possible, regardless of whether that was storage level data protection or backup and recovery services.
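As a purely illustrative sketch of what that can look like (the endpoint, payload and policy name here are hypothetical – not any particular product’s API), folding protection into a provisioning pipeline can be as simple as:

# Hypothetical example of driving a backup platform through a REST API from a
# provisioning pipeline. The URL, payload fields and token handling are assumptions.
import requests

API = "https://backup.example.com/api/v1"  # hypothetical backup server API
TOKEN = "changeme"                         # in practice, pulled from a secrets vault

def add_client_to_policy(hostname: str, policy: str) -> None:
    """Register a host and attach it to an existing protection policy."""
    response = requests.post(
        f"{API}/policies/{policy}/clients",
        json={"hostname": hostname},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()

# Called at provision time, every new host gets protection on day one,
# rather than waiting for someone to remember to add it by hand.
add_client_to_policy("app-server-042.example.com", "Gold-4hr-RPO")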

Governance Grew Up

It used to be that I’d have data protection conversations with customers where ROI meant “return on investment”. In astute businesses now, it’s an overloaded TLA and can equally mean “Risk Of Incarceration”. In many jurisdictions these days, failure to recover data is a serious issue that can have direct financial (and sometimes freedom of movement) implications on senior people in the business.

This past decade (mostly in the last 5 years) we’ve seen a much more heightened awareness within data protection of essential governance capabilities: being able to apply retention lock to backups, being able to audit that a protection environment is working successfully, proving compliance with RPOs and RTOs (not to mention other SLAs), and demonstrating adherence to KPIs within the business. We’ve shifted from needing to report on what’s been backed up to needing to report on what’s not being backed up within an environment. And with ransomware, hacktivism and other seriously challenging attacks growing year on year, there’s growing recognition that a mismanaged backup environment can be the Achilles heel of an organisation. So we’re not only seeing more governance attention to data protection from traditional vectors, we’re also seeing a lot more attention from risk and security officers, too.
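As a minimal illustration of that shift (my sketch, not taken from any particular product – the file names and formats are assumptions), reporting on what’s not protected can start as simply as diffing an asset inventory against the backup server’s client list:

# Sketch: find hosts that exist in the asset inventory but have no backup client.
# "cmdb_hosts.txt" and "backup_clients.txt" are hypothetical exports, one hostname per line.

def load_hosts(path: str) -> set:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

inventory = load_hosts("cmdb_hosts.txt")      # what the business says it owns
protected = load_hosts("backup_clients.txt")  # what the backup server knows about

unprotected = sorted(inventory - protected)
print(f"{len(unprotected)} of {len(inventory)} hosts have no backup client configured:")
for host in unprotected:
    print(f"  {host}")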

Virtualisation Redefined Data Protection

Other than deduplication, nothing made as much of an impact on data protection as the triumphant return of virtualisation within the datacentre. (Let’s be honest, mainframe was doing it a long time ago.)

Virtualisation was, initially at least, inimical to backup and recovery: or perhaps more realistically, the other way around. Everything that you used virtualisation for (resource pooling, cooperative sharing of resources, recognising that not everything would need to run at full pelt at the same time) was thrown out the window when you triggered your overnight backup (particularly your weekly full backup). Hundreds of virtual machines on dozens of hypervisors all suddenly going pell-mell trying to get their backups done as quickly as possible.

Virtualisation forced a rethink of the agent-based backup approach. There were a few false starts, to be sure, but we’re now at a point where image-based backup has become critical to the successful running of anything but the smallest virtualisation environment. In fact, the only thing that tends to mask the impact of agent-based backups across a full virtual server farm is source-based deduplication.

As soon as we stopped thinking of servers as physical boxes and started thinking of them as large container files, we were able to do far more interesting things with data protection, and to rethink other protection mechanisms. Bare Metal Recovery (BMR) went from being an essential feature of a data protection (or at least an infrastructure build) environment to being entirely niche in less than a decade, thanks to virtualisation. (Who needs BMR when you can just reimage from a template and get back up and running faster?) Application clustering still exists, but it’s turning into a niche scenario, too. Equally, Continuous Data Protection (CDP) was a niche function on physical systems, but gained mainstream accessibility and traction when we virtualised.

Now we’re seeing the next levels of virtualisation changing data protection: first, within data protection itself, software defined data protection (e.g., Data Domain Virtual Edition) has become an essential consideration; second, virtualisation is continuing to push new usage paradigms, such as DRaaS.

Data Protection Decentralised

Twenty years ago I would have related to you how important it was for Oracle DBAs to give up control over their backup and recovery processes – that the only way to achieve order and reliable protection within an environment was to fold all backup and recovery functionality into a single enterprise backup product.

I wouldn’t even contemplate making that argument any more. For one, there are too many applications out there. Backup administrators don’t have time to become subject matter experts on every database and application running in a business. Equally though, application and database administrators don’t have time to become subject matter experts on each enterprise backup system they encounter.

We’ve seen a pivot over the last decade in particular to the notion of centralised reporting/provisioning with decentralised access. So DBAs get to back up and recover using their own tools, VMware admins get to protect their systems using their own tools, and so on. In reality, it’s the only way of scaling data protection within an organisation that truly works. It lets DBAs stay in charge of granular recovery within their own tools, and lets VMware administrators do everything from something as simple as adding clients to a protection policy without ever leaving vSphere, all the way up to performing instant access.

Centralised provisioning. Centralised governance. Decentralised control. It’s the bee’s knees when it comes to optimum data protection environments.

The Media Server Lost

The classic three tier backup architecture (Server, Media Server/Storage Node, Client) evolved from a simple need to provide granular administrative control over the funnelling of data from point A (the client) to point C (the protection storage – originally, the tapes). So to fill that gap, a lot of Bs were added: A to C via B, that is.

One of the most profound impacts of deduplication, particularly source-based, inline deduplication, is the elimination of the media server or storage node from architectural considerations. Before you jump up and down and say NetWorker still has storage nodes, I’ll point out that in an optimum NetWorker configuration those storage nodes are all virtualised, and exist just to tell clients where to send their data directly. In an optimum NetWorker environment, your storage nodes are not in the data path.

There are exceptional circumstances where you may still need to consider an intermediary server, but they’ll usually come down to very specific use cases, with NDMP being the primary example. If a backup vendor tells you that you need big, powerful physical media servers as a matter of course for all your backups, these days what they’re really telling you is that they’re still using a 90s architecture – and you get to pay for the privilege of using it.

Wrapping Up

It’s been an exciting ten years writing about data protection, and the industry has seen some epic changes. But here’s the real rub: the next decade is going to be even bigger.
