Back when I first started doing enterprise backup, DLT 7000 had just been introduced. There were a few systems I had to administer that still had DLT 4000 drives attached, but DLT 7000 was rapidly becoming the standard.
With DLT 7000 came a batch of additional headaches, most notably: how do I keep the damn thing streaming? With a 5MB/s write speed, and at least half of the servers in my environment still connected by 10Mbit rather than 100Mbit ethernet, keeping a drive of that speed streaming was a challenge that involved careful juggling of backup timings and parallelism.
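To give a feel for the juggling involved, here's a back-of-the-envelope sketch of that old problem. The 70% link efficiency figure is my assumption for illustration – real-world 10Mbit throughput varied a lot – but the drive and link speeds are as above.

```python
import math

# Back-of-the-envelope check on keeping a DLT 7000 fed over 10Mbit
# ethernet. The efficiency figure is an assumption; real-world 10Mbit
# throughput was usually well below the theoretical maximum.
DRIVE_MBS = 5.0            # DLT 7000 native write speed, MB/s
LINK_MBS = 10 / 8          # a 10Mbit/s link expressed in MB/s (1.25)
EFFICIENCY = 0.7           # assumed usable fraction of the link

clients_needed = math.ceil(DRIVE_MBS / (LINK_MBS * EFFICIENCY))
print(clients_needed)      # parallel 10Mbit clients needed to stream
```

In other words, even back then you needed at least half a dozen clients backing up in parallel just to keep one drive busy – hence the juggling.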
Fast forward 13 years, and we’ve come full circle. For a while systems and networks leapfrogged tape, or at least mostly kept up with it, but with high speed tape like LTO-4 we’re now back to a situation where the average site will struggle to keep tape streaming.
First, I guess I should qualify – what’s this streaming that I refer to? If you want to get down to the utter nuts and bolts of it, it refers to keeping the tape running through the drive mechanism at a consistent (and high) number of metres per second. (For instance, several LTO-4 drives are rated at 7 metres per second.) In backup terms, what we’re talking about is keeping a consistently high number of MB/s running to the drive.
When we’re unable to keep a consistently high number of MB/s running to the drive, one of two things will typically happen. If the drive is able to (and it depends entirely on the manufacturer and tape format), it may “step down” its streaming speed to a number that is more suitable to the environment. This has variable success. You might be able to argue it’s like only ever going up to 3rd gear in a Ferrari, but I don’t know cars so that’s likely to be a terrible analogy for a whole suite of reasons I don’t understand … 🙂
The second thing that may happen is that the drive will start to shoe-shine. Shoe-shining occurs when the minimum throughput required for streaming can’t be achieved: the drive’s buffer empties, so it stops, repositions the tape, and restarts once the buffer refills. This stop/start cycling slows the backup down even further, and creates additional wear and tear on both drives and media.
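A toy model makes the mechanics clearer. All the figures below – buffer size, reposition delay, rates – are illustrative assumptions of mine, not vendor specs; the point is simply that whenever the feed rate falls below the drive rate, the buffer must eventually empty and the stop/start cycling begins.

```python
def shoeshine_cycles(feed_mbs, drive_mbs, buffer_mb=256,
                     duration_s=600, reposition_s=3.0):
    """Count stop/start (shoe-shine) cycles over duration_s seconds.

    Toy model: the drive drains its buffer at drive_mbs while backups
    refill it at feed_mbs. When the buffer empties, the drive stops
    and repositions for reposition_s seconds while the buffer refills.
    """
    if feed_mbs >= drive_mbs:
        return 0               # data arrives fast enough: drive streams
    level = buffer_mb          # start with a full buffer
    t, cycles = 0.0, 0
    while t < duration_s:
        # time until the buffer empties at the net drain rate
        t += level / (drive_mbs - feed_mbs)
        if t >= duration_s:
            break
        cycles += 1
        # drive stops and repositions while the buffer refills
        t += reposition_s
        level = min(buffer_mb, reposition_s * feed_mbs)
    return cycles

print(shoeshine_cycles(feed_mbs=60, drive_mbs=120))   # slow feed: many cycles
print(shoeshine_cycles(feed_mbs=130, drive_mbs=120))  # fast feed: streams
```

Crude as it is, the model shows why a feed rate just under the drive’s threshold is so punishing: every cycle adds a reposition delay on top of the throughput you were already losing.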
To be blunt – the minimum goal of any backup administrator when it comes to performance tuning an environment should be to eliminate shoe-shining wherever possible.
So, back to that “full circle” – just as in those early DLT 7000 days, we’re now at the point again where keeping media streaming is a real challenge.
One problem that frequently occurs at new sites is that when evaluating tape formats for purchase, administrators look at that magic “bang for buck” number – the size of the media, in GB. For this reason, LTO-4 looks appealing to a large number of sites – 800 GB native, 1.6TB compressed (assuming 2:1 compression); it just seems like a great media format.
The problem is that the streaming speed frequently isn’t taken into consideration. LTO-4 has a native (uncompressed) streaming speed of around 120MB/s. This is not easy to achieve, and as you can imagine, feeding the drive even faster to sustain compression is more challenging still.
Now, there are undoubtedly big environments that can easily keep LTO-4 streaming with direct backups from client to tape. But these aren’t your average environments. Look at the speed – 120MB/s – that’s faster than gigabit ethernet. We’re immediately talking either large trunked environments at both the server and the clients, or stepping up to 10 gigabit ethernet. We’re talking lots of spindles on high speed disk. Or to be perhaps a little crass, we’re talking buckets of $$$.
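The arithmetic behind that claim is worth spelling out. The 85% link efficiency figure is my assumption for protocol and stack overhead – your mileage will vary – but even on the most generous reading, a single gigabit link can’t keep LTO-4 streaming natively, let alone with compression.

```python
import math

# Why a single gigabit link can't keep LTO-4 streaming: compare the
# drive's native rate with practical gigabit throughput. The 85%
# efficiency figure is an assumption for protocol/stack overhead.
LTO4_NATIVE_MBS = 120
GIGE_THEORETICAL_MBS = 1000 / 8                     # 125 MB/s on the wire
GIGE_PRACTICAL_MBS = GIGE_THEORETICAL_MBS * 0.85    # ~106 MB/s usable

print(LTO4_NATIVE_MBS > GIGE_PRACTICAL_MBS)         # drive outruns the link

# With 2:1 compression the drive wants roughly double the client data:
links = math.ceil(2 * LTO4_NATIVE_MBS / GIGE_PRACTICAL_MBS)
print(links)   # trunked gigabit links needed, under these assumptions
```

Under those (hedged) assumptions, sustaining a compressed LTO-4 stream wants three trunked gigabit links feeding a single drive – and that’s before you account for the source disks being able to read that fast.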
To me, then, the primary impact of high speed tape is that it forces organisations to rethink their backup architecture. Even with LTO-3, it was possible for a gigabit based environment to achieve a modicum of tape streaming just by using higher levels of parallelism, etc. However, once your average streaming speed for native/uncompressed backups exceeds your average network speed, you must adjust the backup architecture.
The most common, and most appropriate, way to achieve this is to move to a 2-tier storage system, comprising a layer of disk and then a layer of tape.
Within NetWorker, there are two ways to achieve this:
- First backup to disk backup units (ADV_FILE devices), then clone/stage to tape.
- First backup to virtual tape libraries (VTLs), then clone/stage to tape.
The purpose of either mechanism is to land all the overnight backups in a single location, so that when the data is later streamed to tape, the network is no longer a factor.
So, if we go down the disk backup unit option, this means attaching some high speed storage to the backup server (or a storage node – let’s assume in this instance that every time I say “backup server”, I could equally mean “storage node”), and attaching the LTO-4 drives to the backup server as well. The backup is initially run across the network to the backup server’s disk backup units. Once the backup completes, the backup server first runs cloning operations to write tape copies – with the network no longer in play, and assuming suitable hardware connectivity, we should easily be able to keep LTO-4 streaming with one consistent, uninterrupted read from high speed disk. At a later point, we then stage that data – writing a second copy which, on completion, removes the copy from the disk backup unit.
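With the network removed from the clone step, the timing becomes a straightforward throughput calculation. The figures below are illustrative (a sustained 120MB/s native rate is assumed, not measured), but they give a feel for how long the disk-to-tape window needs to be.

```python
# Rough timing for the clone step: with the network out of the picture,
# filling one LTO-4 tape from local high speed disk is a simple
# throughput calculation. Figures are illustrative, not measured.
TAPE_NATIVE_GB = 800
STREAM_MBS = 120                  # assumed sustained LTO-4 native rate

hours = TAPE_NATIVE_GB * 1024 / STREAM_MBS / 3600
print(round(hours, 1))            # roughly 1.9 hours per full tape
```

Call it about two hours per full tape, per drive – which is why the disk tier needs to hold a complete night’s backups, and why the cloning window has to be scheduled with that in mind.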
(I should note, there’s a raft of other options that can be deployed to assist with getting high speed tape streaming, many of which I discuss in the performance tuning section of my book. I’ve just picked the most common scenario here.)
If we go down the VTL path, we’re still relying on essentially the same mechanism, just in a different format: once all the data destined for physical tape sits on one “chunk” of high speed disk, we can transfer it out at streaming speed.
My first recommendation then to any site that is using LTO-4* in a direct-to-tape scheme, and can’t get drives streaming, is that they need to rethink their backup architecture. In the end it doesn’t matter how much time you spend tweaking software settings here and there – if the hardware can’t cut it, you won’t get there.
—
* More generally, as you may have imagined, this can apply to any tape format where, as I mentioned earlier in the article, the native streaming speed exceeds the native network speed.