Backup Servers and Malicious Attacks

Many years ago, when I was still a Unix system administrator, some midrange systems manager got the bright idea of standardising on a single monitoring and control package for our Unix platforms across the country. This software would give a single dashboard view of the health of all the Unix systems in the business in Australia, regardless of which team administered them.

It was, in theory, a grand idea. In practice, I recall my team had a few security concerns about it – concerns that became rather more real when it came time for the rollout. The rollout would be managed by one of the other teams, and we had to hand over the root passwords for all servers under our control to that team.

Then, a day or two after the rollout started, we found that at least fifty per cent of the time the rollout clobbered the /etc/shadow file on the target server, rendering logins inoperable.

As you can imagine, this was classified as A Very Bad Error.

[Image: Data Portal]

As the NetWorker (aka Solstice Backup) administrator, I had a novel solution: I’d log onto the backup server and run a directed recovery of the appropriate /etc/shadow file from each affected server’s backups straight back to that server. Thus A Very Bad Error became An Annoying Inconvenience.
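For anyone who hasn’t done one, a directed recovery is simply a recovery initiated on one host (here, the backup server) with the data landing on another. For illustration only – the server name and client list below are invented, and the exact flags are from old memory, so treat recover(1m) on your own installation as the authority – a loop over affected clients might be scripted along these lines:

```python
#!/usr/bin/env python3
# Rough sketch only: re-running a directed recovery of /etc/shadow for a list
# of affected clients via NetWorker's command-line 'recover' utility. The
# server name and client list are hypothetical, and the flags are from (old)
# memory -- verify against recover(1m) on your NetWorker release.
import subprocess

BACKUP_SERVER = "nsrserver.example.com"     # hypothetical NetWorker server
AFFECTED_CLIENTS = ["unix01", "unix02"]     # hosts whose /etc/shadow was clobbered

for client in AFFECTED_CLIENTS:
    cmd = [
        "recover",
        "-s", BACKUP_SERVER,    # NetWorker server to use
        "-c", client,           # client whose backups we browse
        "-R", client,           # directed recovery destination (assumed flag)
        "-iY",                  # overwrite the damaged file without prompting
        "-a", "/etc/shadow",    # recover this path non-interactively
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```

The loop is only there because the clobbering hit roughly every second rollout; the interesting part is the directed recovery itself.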

The right recovery can get you out of one hell of a pickle.

I’ve had some Very Bad Recoveries, too – like accidentally recovering /dev from a RedHat Linux laptop (with forced overwrite) over /dev on a Solaris NetWorker server. (Getting back from that was an exercise in horror, due to cascading failures.) Backup solutions are great for solving problems within your environment, but of course, if you recover the wrong thing to the wrong location by mistake, all sorts of problems can occur.

This is a topic I’ve been thinking of more regularly with the growing sophistication of insider and ransomware attacks, and an article I read over the weekend gave me more food for thought on it:

When Hillary Clinton stumbled and coughed through public appearances during her 2016 presidential run, she faced critics who said that she might not be well enough to perform the top job in the country. To quell rumors about her medical condition, her doctor revealed that a CT scan of her lungs showed that she just had pneumonia.

But what if the scan had shown faked cancerous nodules, placed there by malware exploiting vulnerabilities in widely used CT and MRI scanning equipment? Researchers in Israel say they have developed such malware to draw attention to serious security weaknesses in critical medical imaging equipment used for diagnosing conditions and the networks that transmit those images — vulnerabilities that could have potentially life-altering consequences if unaddressed.

The malware they created would let attackers automatically add realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them. Or it could remove real cancerous nodules and lesions without detection, leading to misdiagnosis and possibly a failure to treat patients who need critical and timely care.

Hospital viruses: Fake cancerous nodes in CT scans, created by malware, tricks radiologists. Kim Zetter, April 3 2019, Washington Post.

The article goes on to describe that the malware isn’t just theoretical: the researchers developed it and then, with the approval of a hospital, tested deploying it. Radiologists really were fooled most of the time – they diagnosed the inserted growths as tumours, and failed to notice when real tumours had been removed.

Advanced medical systems such as MRI scanners, CT scanners, X-Ray systems and so on should be appropriately air-gapped, of course. Yet they’re not. My father had a pacemaker installed last year, and there is a succession of articles (e.g., this) talking about how vulnerable common pacemakers are, putting patients’ lives directly at risk. My father also has terminal cancer, so the idea that the medical systems used to identify tumours could be tampered with is something that speaks to me personally.

In fact, last week I had a ‘delightful’ (for actually negative values of ‘delightful’) accident; the smoke alarms in our house went off at 23.30 or thereabouts on Tuesday night, and coming back downstairs from turning off one of the smoke alarms, I slipped on the stairs, became slightly airborne, landed hard on my rear and slid down the remaining half flight of stairs. I have massive corking of muscles around my thigh, hips and right buttock, and the bruising continues to spread out – it’s easily larger than an A3 sheet of paper turned in almost any direction. But I didn’t break anything: I know this, because I went and got X-Rays done the morning after the accident.

I’m not going to name the hospital involved, but since I couldn’t exactly sit comfortably, I was pacing a bit while waiting my turn for the X-Ray. It was a small imaging lab, and there was at least 10 metres separating the X-Ray room from the imaging lab’s office (which was around a corner from the room, making it further out of sight), and … surprise surprise, each time I walked past the imaging lab’s office, their main (only) desktop computer was sitting unlocked and accessible. It’s a hospital, it’s secure, right?

Except, if you read through the article linked above, it took about 30 seconds for someone to wander into the hospital in question and infect a device with the malware.

Had I had a USB key laden with malware (and the intent to use it), it would have been trivial last Wednesday to infect the medical imaging system’s computer in much the same way, since the office was unattended for 10+ minutes at a time – sure, it was probably just local patient records and administration, since the actual computers controlling the X-Rays were in the room with the X-Ray system, but it still would have been nasty.

We might also think back to Ken Thompson’s Reflections on Trusting Trust, where he outlined embedding a login backdoor for Unix systems into the C compiler – but only into the compiler’s binary, making it effectively undetectable without a byte-by-byte decompile of the compiled C compiler or the login utility it generated. (I honestly think it’s one of the most important documents anyone in the IT industry can ever read.)
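To make the shape of that trick concrete, here’s a toy sketch – in Python rather than C, and nothing like Thompson’s actual code – of a compromised build step that backdoors a ‘login’ routine and would re-infect its own clean source, which is exactly why source inspection alone doesn’t save you:

```python
# Toy illustration of the "trusting trust" pattern only -- not Thompson's code.
# A compromised "compiler" that (a) backdoors anything that looks like the
# login program and (b) would re-insert this logic when compiling its own
# clean source, so no source file ever shows the backdoor.

BACKDOOR = '    if password == "skeleton-key": return True  # injected\n'

def compromised_build(source: str) -> str:
    """Pretend compiler: returns the code it would actually emit."""
    if "def check_password(" in source:
        # Target 1: the login utility -- splice in a skeleton key.
        source = source.replace(
            "def check_password(user, password):\n",
            "def check_password(user, password):\n" + BACKDOOR,
        )
    if "def compromised_build(" in source:
        # Target 2: the compiler's own (clean) source -- this is where the
        # whole injection routine would be re-inserted; elided in this sketch.
        pass
    return source

clean_login = (
    "def check_password(user, password):\n"
    "    return stored_hash(user) == hash_of(password)\n"
)
print(compromised_build(clean_login))   # comes out with the backdoor baked in
```

The point, of course, is that the backdoor only ever exists in the compiled artefact – never in anything you’d normally inspect.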

Security is hard. Just the same way that backup is hard. Or, to put it another way to avoid shrieking from wonks in startups who would have you believe otherwise: good security is hard; good backup is hard. Guess what? Good, secure backup is harder still.

There have already been reported instances of hackers taking out backup servers before going on a data destruction spree. There have already been reports of ransomware deliberately targeting backups when it detects it’s running on a backup server.

Here’s my take: this is something we all need to be prepared for, and then some. Ransomware is most effective when you have to pay the ransom. After all, if you’ve got viable snapshots (ones that haven’t been overrun by the amount of change generated by the cryptoware), or if you’ve got backups, you can get the data back a lot faster by recovering it than you will by paying the ransom.

This is not me talking about cyber-recovery (aka “IRS”). I’ve covered that topic before. I’ll of course mention that, unlike in the backup deletion story linked above, NetWorker combined with Data Domain will give you significantly better protection, since the Boost protocol doesn’t present a ‘mount point’ and so can’t be readily accessed, and retention lock stops someone from being able to delete your backups.

Deleting your backups is one thing.

Maliciously using your backups is another thing altogether. I think that’s something we need to be more aware of, because it will happen. Cyber-attacks are a massive business for criminals and yes, nation-states, too. The script-kiddies play with the discarded toys, but the (dare I say) professional criminals and nation states are prepared to put in the work to build a more sophisticated attack vector. (After all, think of the work that went into Stuxnet.)

Your backup server (or services) represents a significant, and currently (thankfully) under-utilised, attack vector for your organisation. There’s an old saying in backup: nothing touches more of your environment than backup (other than the network itself). So ask yourself: just what sort of damage could malicious software that understands your backup server do if it were given 24, or even just 12, hours of access to your environment?

  • What if randomly, directories on a server were reverted to their state from a week ago? What if that happened to dozens, or hundreds, of servers in your environment?
  • What if SQL databases, picked at random, were recovered to their state from two weeks ago?
  • What if data from multiple servers were recovered onto a single server, all into the same directory, making it a pig’s breakfast to sort out what it should look like?
  • What if virtual machine images were repeatedly recovered to alternate locations in a vCenter environment until the storage pool was exhausted? (Isn’t thin provisioning cool?)
  • What if backup jobs were just randomly aborted?
  • What if clone jobs were all disabled?
  • What if recovery jobs were randomly aborted?

Technically, they’re all annoyances that are (ahem) recoverable from. But how much time, effort and short-term data loss might you incur in the process? (One way you might at least notice this sort of activity is sketched below.)
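As a purely illustrative example – the audit log format, field names and thresholds below are all invented, not anything a particular backup product actually emits – here’s the kind of trivial anomaly check a backup and security team might script together to flag recovery bursts or mass job aborts:

```python
# Illustrative only: a trivial anomaly check over a *hypothetical* backup
# audit log (CSV of timestamp,action,client,detail). Real products have their
# own job/audit records; the thresholds here are arbitrary assumptions.
import csv
from collections import Counter
from datetime import datetime

RECOVERY_BURST_THRESHOLD = 20   # assumed: >20 recoveries to one host in an hour is odd
ABORT_THRESHOLD = 10            # assumed: >10 aborted jobs in an hour is odd

def scan_audit_log(path: str) -> None:
    recoveries = Counter()   # (hour, client) -> count of recover operations
    aborts = Counter()       # hour -> count of aborted jobs
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            hour = datetime.fromisoformat(row["timestamp"]).strftime("%Y-%m-%d %H:00")
            if row["action"] == "recover":
                recoveries[(hour, row["client"])] += 1
            elif row["action"] == "abort":
                aborts[hour] += 1
    for (hour, client), count in recoveries.items():
        if count > RECOVERY_BURST_THRESHOLD:
            print(f"ALERT: {count} recoveries directed at {client} during {hour}")
    for hour, count in aborts.items():
        if count > ABORT_THRESHOLD:
            print(f"ALERT: {count} aborted jobs during {hour}")

if __name__ == "__main__":
    scan_audit_log("backup-audit.csv")   # hypothetical export from your backup server
```

It’s not threat detection by any stretch; the point is simply that the backup server’s own activity deserves watching, not just everything it watches.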

There are only two reasons not to be scared about this: either you don’t appreciate just how much damage malicious software that understands backup could do in your environment, or you’ve already worked with your security teams to give your backup environment the same sort of security levels as are given to your most mission-critical systems.

I’m not aware of any malicious software that does the sorts of things I’ve mentioned above, yet. Here’s my prediction: it will eventuate. We’re seeing increasing sophistication in malicious software, and now is your window of opportunity, your window of need, to get very tight with your security team and work closely with them to prevent your backup server from being a security threat to your environment. Good, secure backup is hard.

I’m not going to sit here and pretend I know exactly how to secure your backup services. I have ideas, of course: limiting remote access, limiting software installation options, OS hardening, jump boxes and a raft of other activities – everything that you would do for the sort of database which, if stolen, would see your company fall over in a heap and perhaps never recover. If you’re not applying that level of security to your backup servers, you’re not thinking of the future. Obviously, that’s going to be a challenge: you might protect your most mission-critical database by only allowing a single server to talk to it, but backup still has to fulfil its day-to-day functions. So it’s a fine line you’re going to have to tread.
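Even something as small as scripting your hardening checks helps keep the backup server honest over time. As a minimal sketch – the options and ‘wanted’ values here are assumptions standing in for whatever baseline your security team actually mandates – this just verifies a few common OpenSSH hardening settings on the backup host:

```python
# Minimal sketch: verify a few common OpenSSH hardening settings on a backup
# server. The options and 'wanted' values are assumptions -- your security
# team's baseline is the real authority.

WANTED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",   # i.e. key-based logins only
    "x11forwarding": "no",
}

def check_sshd_config(path: str = "/etc/ssh/sshd_config") -> None:
    found = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                # naive parse: last occurrence wins here, whereas sshd uses the first
                found[parts[0].lower()] = parts[1].lower()
    for option, wanted in WANTED.items():
        actual = found.get(option, "<unset / default>")
        status = "OK" if actual == wanted else "REVIEW"
        print(f"{status}: {option} = {actual} (want {wanted})")

if __name__ == "__main__":
    check_sshd_config()
```

The same idea extends to whatever else your baseline covers – listening services, installed packages, and who can reach the server from where.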

Better to start treading that line now than the day after we hear of the first piece of malicious software designed to roll Oracle databases back to yesterday’s backup.
