“And lo, as the skies darkened and the earth trembled, I saw them riding their pale steeds — The Four Horsemen of the Appocalypse: Apathy, Procrastination, Budget and Emulation.”
Etched onto the tombstone of the last COBOL programmer
“Legacy.”
The word sends chills down the spines of IT infrastructure teams.
We don’t keep legacy equipment because it’s fun. Well, not when we’re referring to business purposes, at least. Yes, I’ll admit to wanting some legacy equipment myself, such as an Amiga 2000 and a Commodore 128, but personal nostalgia is not typically seen as a business process, and I wouldn’t run any production workloads on either of them.
Legacy systems are retained because they’re running legacy workloads. Let’s step back and think about what we mean when we say workload. Effectively, it means:
An application (or collection of applications) using some set of data, which combined, provides a service for the business.
That service might be internal (e.g., timesheets), or it might be external (customer point of sale). Combined, the application(s), the data and the process create a workload.
We used to worry about what might happen when the last COBOL programmer retires (or passes away). What would happen to all those legacy Mainframe applications that haven’t been updated?
For many businesses, legacy applications are an everyday part of life, and we’re not just talking OpenVMS, Tandem or Mainframe systems. In their defense, at least, those are all still actively maintained platforms. Sure, there may be a little life-support going on, but there’s still a support hotline you can actually call.
There’s a plethora of scary legacy applications that are, for want of a better term, ticking time-bombs in the heart of business.
Windows 2003. Windows 2000. Windows NT 4. Solaris 8. Unjustifiably ancient versions of Oracle, Sybase and Ingres. “You must upgrade your browser to Internet Explorer 6 to access this site.” “This service requires Silverlight.” “Adobe Flash required!”
Each is a stake through the collective hearts of an infrastructure team, and each acts as a deep-sea anchor, pulling on the ship of business as it tries to reach its destination.
These legacy apps – running on systems and platforms that are years, sometimes decades, past their end of service life – keep being kept around. As an outside observer, it often leaves me scratching my head, wondering why internal risk teams haven’t stormed offices with blazing torches and pitchforks, demanding remediation. There’s an answer, of course. In fact, there are usually four common answers:
- Apathy
- Procrastination
- Budget
- Emulation
Each answer practically represents a different level of maturity within a business when it comes to dealing with legacy apps – though the best solution to legacy apps is, of course, true migration: uplift the application and systems, refactor if you’re going to move it into the cloud, or at least properly update it so it runs on modern equipment as a modern program. Yes, it’s easier said than done, but it’s arguably better than the alternatives.
Apathy: It doesn’t matter if this application is old; we’ll just let the requirements of the legacy application define what we can or can’t do within the business. I perhaps saw this best when new Windows laptops had to be forcibly downgraded to 32-bit versions of the OS because a critical business desktop application was 16-bit and incompatible with 64-bit Windows. This caused a plethora of other challenges, but hey, the 16-bit application kept on ticking!
Procrastination: Let’s leave it another year and see if the problem goes away. (Narrator’s voice: It won’t.) This is the “kicking the can down the road” solution. It’s not quite the same as apathy: whereas apathy results in delays to new system upgrades or functions, procrastination leads to some upgrade, at some point, actually being done – and chaos erupting as the legacy application ceases to be accessible.
Budget: We’d get to this if we had the money. But it’ll cost $X to fix it, and we don’t have that money. Meanwhile, flagellate the IT department for having to pay a 30% increase in maintenance fees on end-of-service-life equipment, and ignore the soft costs of people spending endless amounts of time keeping equipment and systems on life-support.
Emulation: Virtualise it and make it go away. More recently, that can also mean containerise it and make it go away. (Maybe the next step will be legacy production applications running in WINE?)
Look, I appreciate that refactoring or redeveloping legacy applications is hard, I really do. But I think every business that avoids the problem edges itself that little bit closer, every year, to an appocalypse:
- There’ll come a time when that apathy approach brings the business to a screeching halt, and it’ll cost an order of magnitude more to fix, because all the prioritisation of legacy over modern that kept that legacy app running has just created more legacy problems to fix.
- Procrastination only has one outcome. Helen, that stalwart of the IT department who has warned successive managers for years that the application will stop working one day – when some project does an upgrade without taking it into consideration – will be proved right. But no-one will stop to thank her for her foresight while everyone is running around screaming, trying to work out how to urgently downgrade a host of systems.
- The business may put off that legacy application upgrade because of budget, but there are plenty of budgetary soft costs associated with doing so, and what’s usually not factored in is the risk to the business if that legacy app causes a failure and it turns out there are no more spare parts to be purchased on eBay.
- Emulation (whether it’s literal emulation, or something simpler, like virtualisation or containerisation) may work to begin with, but eventually even this approach will get you bogged down in legacy considerations. (It’s been a long time, for instance, since a version of vSphere supported an NT 4 guest OS.) And these legacy applications have been left as legacy because they’re supposedly curmudgeonly … will that emulation layer wrapped in a virtualisation environment promising to run code a vendor has long left behind provide a rock-solid guarantee that it’ll run exactly like it’s meant to?
Every one of these approaches creates data protection problems; in fact, all three kinds of data protection problem are likely to manifest. Legacy doesn’t just mean legacy applications: it also means legacy approaches to privacy, legacy approaches to security, and legacy approaches to backup and recovery (and all other forms of data storage protection).
The appocalypse is unlikely to come in one huge deluge. It’s not going to be like the threat Y2K posed had it not been mitigated: there’s no special time when suddenly everything will stop working (although I remain somewhat fearful of the Y2038 problem). In fact, it’s already upon us. It comes in dribs and drabs; Apathy, Procrastination, Budget and Emulation ride in on pale horses and cause the most esoteric of outages and delays, and you can see their shadow in a myriad of ways wherever companies resist new ways of doing things and new, enhanced consumer services. Not because they don’t want to do something, but because the legacy app has drawn a line in the sand they’re not yet able to cross.
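(As an aside, for anyone unfamiliar with Y2038: many legacy systems store Unix time as a signed 32-bit integer, which runs out of room on 19 January 2038 and wraps around to 1901. A quick illustrative sketch in Python – modern Python itself isn’t affected, we’re just doing the 32-bit arithmetic by hand:)

```python
# Sketch of the Y2038 problem: Unix time held in a signed 32-bit
# integer (as on many legacy systems) overflows in January 2038.
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

# The final representable moment for a 32-bit time_t:
last_moment = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last_moment.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later, two's-complement arithmetic wraps the counter to a
# large negative number, which a 32-bit system reads as a date in 1901.
wrapped = (INT32_MAX + 1) - 2**32  # simulated 32-bit wraparound
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).isoformat())
```

That’s exactly the sort of esoteric, long-fused failure a legacy application quietly carries with it.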
We were worried about what would happen when the last COBOL programmer died, never quite realising that was a small problem compared to the real appocalypse.