Hello, and welcome to 2019. I’m still getting my head around that. It seems only a blink of an eye ago that I was sitting in my office on December 31, 1999, waiting to see whether any disasters would happen come the stroke of midnight. Thanks to extensive preparation, nothing did. In addition to the normal patching of operating systems and applications I’d done with my team at the time, I’d also been assigned to an extensive remediation project for an SAP/Oracle instance running on Tru64.
It was no small irony that we found out on Monday, 3 January 2000 that the only system which had crashed as the clock struck midnight was a Microsoft Access database hastily thrown together at the last minute to track any Y2K issues reported.
Measure twice, cut once. Haste makes waste. Name your aphorism and run with it.
Y2K was a valuable lesson for me when I was still a relatively junior system administrator. I’d been working in the team for around four years at that stage, but there were experts in that team who taught me some essential skills. Y2K may have been a good example of how to run (and at times, how not to run) projects, but the day to day operation of the team was exactly what I needed to hone my skills and – I’ll admit – learn the difference between being smart and being wise. (Perhaps, too, learn the difference between being smart and being a smartarse. Ah, the perils of youth.)
I’ve always said the biggest lesson I got from my first job was what I’d call the zeroth rule of system administration. Provocatively, it was couched in terms that always make people cringe when they hear it: “The best system administrator is a lazy system administrator”. I don’t know how many looks of shock I’ve seen when I’ve said that over the years. But here’s what it meant:
“If you have to do something more than once, automate it.”
Scoffa & Fozzy’s Zeroth Rule of System Administration
No more, no less. An administrator (system or otherwise) should not have to repeatedly perform tasks that can be automated. Put in whatever work it takes to automate the repetitive stuff, and you can spend your time focusing on projects and fixing things.
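To make the zeroth rule concrete, here’s a minimal sketch of the kind of chore it applies to: pruning log files past a retention window, the sort of thing an administrator might otherwise do by hand every week. The directory and retention period are hypothetical, invented purely for illustration.

```python
#!/usr/bin/env python3
"""Hypothetical example of the zeroth rule: automate a task you'd
otherwise repeat by hand. Paths and retention values are illustrative."""
import time
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")   # hypothetical application log directory
RETENTION_DAYS = 30                # hypothetical retention window

def prune_old_logs(log_dir: Path, retention_days: int) -> list:
    """Delete *.log files older than retention_days; return what was removed."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in log_dir.glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path)
    return removed

if __name__ == "__main__":
    for path in prune_old_logs(LOG_DIR, RETENTION_DAYS):
        print(f"Removed {path}")
```

Drop something like this into cron once and the task is done forever – which is the whole point of the rule.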
Automation isn’t just important, it’s the secret sauce that makes the difference between infrastructure that works and infrastructure that sings. If you’re prepared to throw enough people at something, you can always make infrastructure work. I recall in 2016 having a senior architect working for a managed service provider-cum-IaaS provider blithely say, “Why would I need to automate it? I’ve got a pool of 70 people I can throw at it.”
Therein dies innovation: people who work on the same thing, day in and day out, do not innovate. It is the challenge of the new, and the ascension of the automated, that frees up staff and gives businesses the potential to gain a competitive edge.
A common talking point with customers is transformation. Some of you are probably tired of hearing about IT transformation and Digital Transformation, but the simple reality is that IT transformation begets digital transformation. You have to keep the lights on, you have to keep infrastructure running, but you have to do it more efficiently, and – no surprises here – you can only do that when you can automate your infrastructure and your processes.
So we’re at the start of 2019 and one of the biggest buzz-words in IT is automation. This is where I sit back as a grizzled ex-Unix administrator (big beard and all) and blithely say “welcome to the party, pal”. (OK, I saw Die Hard at IMAX before xmas and it’s still fresh in my mind.)
Automate! Automate! Automate! (You can almost imagine a Ballmer-esque person prancing around stage shouting it to a crowd of DevOps teams.)
It’s not just a big buzzword for 2019, it’ll drive the next decade of change within IT infrastructure, both on-premises and in the cloud. Everything old is new again, you might say.
So what’s that got to do with backup and recovery – and more broadly, data protection? Well, everything. Expect to see an increasing focus from everyone on what you can automate within your data protection systems. Things like the Data Protection Extension for VMware’s vRealize Suite (available with both NetWorker and Avamar) are going to gain increasing focus. Well defined and well enabled automation has only a single end-goal within an organisation: a private (or hybrid) cloud experience. Those aren’t just buzzwords – that’s about eliminating silos and speeding up time to execution, time to value.
How long does it take you to get a new virtual machine spun up in your environment – particularly, your production environment? I don’t mean end-to-end, to the point where you can start activating a workload or onboarding users onto the system – I’m talking to the point where you can start configuring the system. 10 minutes? 2 hours? 2 months?
Unsurprisingly, without automation, and without a silo-less approach to service delivery, the answer is more often than not closer to 2 months than 2 hours for many businesses. That’s because service delivery and approval models haven’t adapted to a modern, automated approach – and I’m sorry to say that managed service providers who still have the approach of “I’ve got a pool of 70 people I can throw at it” don’t speed that up (usually, the opposite).
Innovation isn’t necessarily about doing new things; if you’re trying to move a one-tonne block by dragging it on the ground someone can innovate your approach by giving you a wheeled platform to sit the block on. That’s not new technology, just a way of tackling the problem from a different angle.
Innovation in data protection is getting it automated, and getting it automatically baked into the systems that are being provisioned in your environment. Years ago that might have meant including the backup software agent in the system deployment image, but that’s only a fraction of the challenge being addressed. Now it’s about someone being able to click through some dialog pages in a web browser to deploy a new virtual machine and have it automatically configured for nightly backups with 30 days retention, with monthlies retained for 12 months, all because they chose, “Silver Protection” from a drop-down list of the mandatory data protection selection for the image.
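As a sketch of what that “Silver Protection” drop-down might resolve to behind the scenes, here’s a minimal mapping from a protection tier to backup policy settings. The tier names other than “Silver Protection”, and all the retention values except the 30 days / 12 monthlies mentioned above, are invented for illustration – this is not a NetWorker or Avamar API.

```python
"""Hypothetical sketch: resolve a provisioning-dialog protection tier to
backup policy settings. Tier names and most values are invented."""
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectionPolicy:
    schedule: str           # how often backups run
    daily_retention: int    # days to keep daily backups
    monthly_retention: int  # months to keep monthly backups

# "Silver Protection" matches the example in the post; the other tiers
# are hypothetical siblings a service catalogue might offer.
TIERS = {
    "Bronze Protection": ProtectionPolicy("nightly", 14, 0),
    "Silver Protection": ProtectionPolicy("nightly", 30, 12),
    "Gold Protection":   ProtectionPolicy("nightly", 60, 84),
}

def policy_for(tier: str) -> ProtectionPolicy:
    """Look up the policy for the tier chosen in the provisioning dialog."""
    try:
        return TIERS[tier]
    except KeyError:
        raise ValueError(f"Unknown protection tier: {tier!r}")
```

The provisioning workflow would call something like `policy_for("Silver Protection")` and feed the result into whatever actually configures the backup software – the point is that the human never touches the backup configuration directly.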
There was a lot of buzz a few years ago about “By 2020 there’ll be 44 zettabytes of data”, but to me that wasn’t really the challenge. 2020 is only a year away now, and while globally there may be tonnes more data today than there was this time last year, locally we only have to deal with the data growth we see within our own organisations.
But are you getting more staff added to your teams? Does the data growth come with Full Time Employee (FTE) growth? The days of seeing a parallel between data growth and FTE growth are long gone: administrators are expected to handle significantly larger volumes of data each year without needing additional team members to help.
There’s only one way to handle that ongoing data growth, not only within your infrastructure as a whole, but in a way to ensure your job doesn’t drive you bonkers: automation.
The best data protection administrator is a lazy data protection administrator. Make that your rule for 2019.