Nov 30 2009
 

With their recent acquisition of Data Domain, some people at EMC have become table thumping experts overnight on why it’s absolutely imperative that you back up to Data Domain boxes as disk backup over NAS, rather than to a fibre-channel connected VTL.

Their argument seems to come from the numbers – the wrong numbers.

The numbers constantly quoted are the number of sales of disk backup Data Domain vs VTL Data Domain. That is, some EMC and Data Domain reps will confidently assert that by the numbers, a significantly higher percentage of Data Domain for Disk Backup has been sold than Data Domain with VTL. That’s like saying that Windows is superior to Mac OS X because it sells more. Or to perhaps pick a little less controversial topic, it’s like saying that DDS is better than LTO because there have been more DDS drives and tapes sold than there have ever been LTO drives and tapes.

I.e., an argument by those numbers doesn’t wash. It rarely has, it rarely will, nor should it. (Otherwise we’d all be afraid of sailing too far from shore because that’s how it had always been done before…)

Let’s look at the reality of how disk backup currently stacks up in NetWorker. And let’s preface this by saying that if backup products actually started using disk backup properly tomorrow, I would be the first to shout “Don’t let the door hit your butt on the way out” to every VTL on the planet. As a concept, I wish VTLs didn’t have to exist, but in the practical real world, I recognise their need and their current ascendancy over ADV_FILE. I have, almost literally at times, been dragged kicking and screaming to that conclusion.

Disk Backup, using ADV_FILE type devices in NetWorker:

  • Can’t move a saveset from a full disk backup unit to a non-full one; you have to clear the space first.
  • Can’t simultaneously clone from, stage from, back up to and recover from a disk backup unit. No, you can’t do that with tape either, but when disk backup units are typically in the order of several terabytes, and virtual tapes are in the order of maybe 50-200 GB, that’s a heck of a lot less contention time for any one backup.
  • Use tape/tape drive selection algorithms for deciding which disk backup unit gets used in which order, resulting in worst-case capacity usage in almost all instances.
  • Can’t accept a saveset bigger than the disk backup unit. (It’s like, “Hello, AMANDA, I borrowed some ideas from you!”)
  • Can’t be part-replicated between sites. If you’ve got two VTLs and you really need to do back-end replication, you can replicate individual pieces of media between sites – again, significantly smaller than entire disk backup units. When you define disk backup units in NetWorker, that’s the “smallest” media you get.
  • Are traditionally space wasteful. NetWorker’s limited staging routines encourage clumps of disk backup space by destination pool – e.g., “here’s my daily disk backup units, I use them 30 days out of 31, and those over there that occupy the same amount of space (practically) are my monthly disk backup units, I use them 1 day out of 31. The rest of the time they sit idle.”
  • Have poor staging options (I’ll do another post this week on one way to improve on this; a rough sketch of the sort of scripting this currently forces on administrators follows below).
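To give a flavour of the manual juggling this forces on administrators, here’s a rough sketch – Python, purely illustrative, with made-up volume and pool names – of the sort of thing you currently end up scripting: query mminfo for the oldest savesets on a nearly-full disk backup unit, then push them off with nsrstage. Test anything like this in a lab before pointing it at production.

```python
#!/usr/bin/env python
"""Rough sketch only: stage the oldest savesets off a nearly-full AFTD.

Assumes the standard NetWorker CLI tools (mminfo, nsrstage) are in the
PATH. The volume name, destination pool and saveset count are made up.
"""
import subprocess

VOLUME = "DiskBackup.001"      # hypothetical AFTD volume name
DEST_POOL = "Staging Tape"     # hypothetical destination pool
SAVESETS_TO_STAGE = 5          # how many of the oldest savesets to move

def oldest_savesets(volume, count):
    # Ask mminfo for the saveset IDs on the volume, oldest first (-ot).
    out = subprocess.run(
        ["mminfo", "-q", "volume=%s" % volume, "-r", "ssid", "-ot"],
        capture_output=True, text=True, check=True,
    ).stdout
    ssids = [line.strip() for line in out.splitlines() if line.strip().isdigit()]
    return ssids[:count]

def stage(ssids, pool):
    # nsrstage -m migrates the nominated savesets to the destination pool,
    # freeing their space on the disk backup unit once safely written.
    subprocess.run(["nsrstage", "-b", pool, "-m", "-S"] + ssids, check=True)

if __name__ == "__main__":
    candidates = oldest_savesets(VOLUME, SAVESETS_TO_STAGE)
    if candidates:
        stage(candidates, DEST_POOL)
```

The point isn’t that this is hard to script. The point is that you shouldn’t have to script it at all – NetWorker should handle a full ADV_FILE device gracefully on its own.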

If you get a table thumping sales person trying to tell you that you should buy Data Domain for Disk Backup for NetWorker, I’d suggest thumping the table back – you want the VTL option instead, and you want EMC to fix ADV_FILE.

Honestly EMC, I’ll lead the charge once ADV_FILE is fixed. I’ll champion it until I’m blue in the face, then suck from an oxygen tank and keep going – like I used to, before the inadequacies got too much. Until then though, I’ll keep skewering that argument of superiority by sales numbers.

The lighter side…

Nov 28 2009
 

In my spare time I tend to either write, watch a bit of TV or whip out my camera and take some photos. When I bought my 50D, we were lucky enough to also snag a nice 50mm macro lens, which does beautiful depth of field as well as actual macro photography.

My cats are particularly enjoying the 50D – while it’s not as good as my partner’s 5D, it does have one advantage over older cameras – the ability to take decent shots in lower light situations without needing a flash. Since the cats are often the subject of my photography, they’re at least starting to put up with a camera that doesn’t half blind them every time the shutter goes.

Even a very average amateur photographer like myself can manage to get great shots with a great camera, thus proving that if you want to take good photos, step yourself up to a digital SLR – even on full program mode, it’ll outstrip the best of the “prosumer” point and shoot cameras.

I’m particularly pleased with this morning’s efforts:

Stitch

In case you’re not familiar, that’s a Burmese. They’re identifiable almost instantly by their yellow to golden eyes. They’re amongst the most intelligent cats you can get whilst simultaneously being the most loving, frequently putting dogs to shame. And yes, they fetch too.

Nov 28 2009
 

My partner for the last 13 years is a graphic designer (and an excellent photographer to boot). As you may imagine, over the years, I’ve watched him use a succession of Adobe products. I even periodically got cast-offs myself – e.g., floating around on my system I’ve got a first generation copy of InDesign that I worked on my book in for a while.

One thing I’ve noticed over the years is that it doesn’t matter how fast his systems are, the one thing that will generally slow them down is Adobe products. The only time this isn’t the case is when he has 10GB of RAM or more.

But this isn’t just limited to the “pro” Adobe products. So when John Nack over at Adobe had the gall to say that Adobe are “sensitive to bloat”, I couldn’t believe that anyone from Adobe could say that with a straight face.

For goodness sakes, even Acrobat Reader, the absolute (should be minimalist) stable horse of Adobe is bloated and slow. It’s practically zombie-esque. You launch it and it staggers out of its directory/crypt, shakes off the spider webs, looks around for some fresh brains to snack on, begrudgingly brings up a window, freezes for a little while in case there’s a zombie hunter around, then eventually opens the requested document.

Then there’s Flash. That travesty of an animation product that’s actually electronic tar. You know what I do when I see a vendor using Flash for something I need to do? I clear my calendar. I quit every app I have to give the memory leaking sucker enough room to work for a while before crashing, then I sit and wait for it to do its best at trashing my systems. Honestly, the best speed up I got with web browsing was when I installed ClickToFlash for Safari. It makes web browsing a dream, and means I can actually go to the Sydney Morning Herald without 6-10 flash apps starting every time I go to the front page*. It’s amazing the number of sites you can go to and have a smooth clean web browsing experience when you’ve got Flash turned off.

There’s no excuse for such tired and frumpy software. Anyone with a Mac for instance will tell you how quickly Apple’s Preview launches and starts displaying PDFs. On the rare occasions where you have to, for some esoteric compatibility reason, open Acrobat Reader instead, well, you want to go fix yourself a coffee while you wait for it to load.

Honestly, Adobe should maybe spend a year or two demanding that developers stop adding code and functionality, and instead learn some lessons and start subtracting code.


* SMH would undoubtedly argue that it’s a “Mac problem”. Honestly, much as I love that paper, it’s so anti-Apple that it’s a wonder someone hasn’t founded a Journalistic Bias Awards just to give them the inaugural golden bong to celebrate whatever crap they smoke before they write stories about Apple. They make that insidious piece of online garbage, The Inquirer, look fair and balanced a lot of the time!

Nov 27 2009
 

As an employee of an EMC partner, I periodically get access to nifty demos as VMs. Unfortunately these are usually heavily geared towards running within a VMware hosted environment, and rarely if ever port across to Parallels.

While this wasn’t previously an issue having an ESX server in my lab, I’ve slowly become less tolerant of noisy computers and so it’s been less desirable to have on – part of the reason why I went out and bought a Mac Pro. (Honestly, PC server manufacturers just don’t even try to make their systems quiet. How Dull.)

With the recent upgrade to Parallels v5 being a mixed bag (much better performance, Coherence broken for 3+ weeks whenever multiple monitors are attached), on Thursday I decided I’d had enough and felt it was time to start at least trying VMware Fusion. As I only have one VM on my Mac Book Pro, as opposed to 34 on my Mac Pro, I felt that testing Fusion out on my Mac Book Pro to start with would be a good idea.

[Edit 2009-12-08 – Parallels tech support came through, the solution is to decrease the amount of VRAM available to a virtual machine. Having more than 64MB of VRAM assigned in v5 currently prevents Parallels from entering Coherence mode.]

So, what are my thoughts of it so far after a day of running with it?

Advantages over Parallels Desktop:

  • VMware’s Unity feature in v3 isn’t broken (as opposed to Coherence with dual monitors currently being dead).
  • VMware’s Unity feature actually merges Coherence and Crystal without needing to just drop all barriers between the VM and the host.
  • VMware Fusion will happily install ESX as a guest machine.
  • (For the above reason, I suspect, though I’ve not yet had time to test, that I’ll be able to install all the other cool demos I’ve got sitting on a spare drive)
  • VMware’s Unity feature extends across multiple monitors in a way that doesn’t suck. Coherence, when it extends across multiple monitors, extends the Windows Task Bar across multiple monitors in the same position. This means that it can run across the middle of the secondary monitor, depending on how your monitors are laid out. (Maybe Coherence in v5 works better … oops, no, wait, it doesn’t work at all for multiple monitors so I can’t even begin to think that.)

Areas where Parallels kicks Fusion’s Butt:

  • Even under Parallels Desktop v4, Coherence mode was significantly faster than Unity. I’m talking seamless window movement in Coherence, with noticeable ghosting in Unity. It’s distracting and I can live with it, but it’s pretty shoddy.
  • For standard Linux and Windows guests, I’ve imported at least 30 different machines from VMware ESX and VMware Server hosted environments into Parallels Desktop. Not once did I have a problem with “standard” machines. I tried to use VMware’s import utility this morning on both a Windows 2003 guest and a Linux guest and both were completely unusable. The Windows 2003 guest went through a non-stop boot cycle where after 5 seconds or so of booting it would reset. The Linux guest wouldn’t even get past the LILO prompt. Bad VMware, very Bad.
  • When creating pre-allocated disks, Parallels is at least twice as fast as Fusion. Creating a pre-allocated 60GB disk this morning took almost an hour. That’s someone’s idea of a bad joke. Testing creating a few other drives all exhibited similarly terrible performance.
  • Interface (subjective): Parallels Desktop v5 is beautiful – it’s crisp and clean. VMware Fusion’s interface looks like it’s been cobbled together with sticks and duct tape.

Areas where Desktop Virtualisation continues to suck, no matter what product you use:

  • Why do I have to buy a server class virtualisation product to simulate turning the monitor off and putting the keyboard away? That’s not minimising the window, it’s closing the window while the VM keeps running, and I should be able to do that regardless of what virtualisation software I’m running.
  • Why does the default for new drives remain splitting them into 2GB chunks? Honestly, I have no sympathy for anyone still running an OS old enough that it can’t (as the virtual machine host) support files bigger than 2GB. At least give me a preference to turn the damn behaviour off.

I’ll be continuing to trial Fusion for the next few weeks before I decide whether I want to transition my Mac Pro from Parallels Desktop to Fusion. The big factor will be whether I think the advantages of running more interesting operating systems (e.g., ESX) within the virtualisation system are worth the potential hassle of having to recreate all my VMs, given how terribly VMware’s Fusion import routine works…


Nybbles

Nov 26 2009
 

If you thought in the storage blogosphere that this week had seen the second coming, you’d be right. Well, the second coming of Drobo, that is. With new connectivity options and capacity for more drives, Drobo has had so many reviews this week I’ve struggled to find non-Drobo things to read at times. (That being said, the new versions do look nifty, and with my power bills shortly cutting over to high on-peak costs and high “not quite on-peak” costs, one or two Drobos may just do the trick as far as reducing the high number of external drives I have at any given point in time.)

Tearing myself away from non-Drobo news, over at Going Virtual, Brian has an excellent overview of NetWorker v7.6 Virtualisation Features. (I’m starting to think that the main reason why I don’t get into VCBs much though is the ongoing limited support for anything other than Windows.)

At The Backup Blog, Scott asks the perennial question, Do You Need Backup? The answer, unsurprisingly, is yes – that was a given. What remains depressing is that backup consultants such as Scott and myself still need to answer that question!

StorageZilla has started Parting Shot, a fairly rapid fire mini-blogging area with frequent updates that are often great to read, so it’s worth bookmarking and returning to it frequently.

Over at PenguinPunk, Dan has been having such a hard time with Mozy that it’s making me question my continued use of them – particularly when bandwidth in Australia is often index-locked to the price of gold. [Edit: Make that has convinced me to cancel my use of them, particularly in light of a couple of recent glitches I've had myself with it.]

Palm continues to demonstrate why it’s a dead company walking with the latest mobile backup scare coming from their department. I’d have prepared a blog entry about it, but I don’t like blogging about dead things.

Grumpy Storage asks for comments and feedback on storage LUN sizings and standards. I think a lot of it is governed by the question “how long is a piece of string”, but there are some interesting points regarding procurement and performance that are useful to have stuck in your head next time you go to bind a LUN or plan a SAN.

Finally, the Buzzword Compliance (aka “Yawn”) award goes to this quote from Lauren Whitehouse of the “Enterprise Strategy Group” that got quoted on half a zillion websites covering EMC’s release of Avamar v5:

“Data deduplicated on tape can expire at different rates — CommVault and [IBM] TSM have a pretty good handle on that,” she said. “EMC Avamar positions the feature for very long retention, but as far as a long-term repository, it would seem to be easy for them to implement a cloud connection for EMC Avamar, given their other products like Mozy, rather than the whole dedupe-on-tape thing.”

(That Lauren quote, for the record, came from Search Storage – but it reads the same pretty much anywhere you find it.)

Honestly, Cloud Cloud Cloud. Cloud Cloud Cloud Cloud Cloud. Look, there’s a product! Why doesn’t it have Cloud!? As you can tell, my Cloud Filter is getting a little strained these days.

Don’t even get me started on the crazy assumption that just because a company owns A and B they can merge A and B with the wave of a magic wand. Getting two disparate development teams to merge disparate code in a rush, rather than as a gradual evolution, is usually akin to seeing if you can merge a face and a brick together. Sure, it’ll work, but it won’t be pretty.

Nov 25 2009
 

Everyone who has worked with ADV_FILE devices knows this situation: a disk backup unit fills, and the saveset(s) being written hang until you clear up space because, as we know, savesets in progress can’t be moved from one device to another:

Savesets hung on full ADV_FILE device until space is cleared

Honestly, what makes me really angry (I’m talking Marvin the Martian really angry here) is that if a tape device fills and another tape of the same pool is currently mounted, NetWorker will continue to write the saveset on the next available device:

Saveset moving from one tape device to another

What’s more, if it fills and there’s a drive that doesn’t currently have a tape mounted, NetWorker will mount a new tape in that drive and continue the backup in preference to dismounting the full tape and reloading a volume in the current drive.

There’s an expression for the behavioural discrepancy here: That sucks.

If anyone wonders why I say VTLs shouldn’t need to exist, but I still go and recommend them and use them, that’s your number one reason.
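Until that discrepancy is fixed, the only practical mitigation is to make sure ADV_FILE devices never actually hit the wall – watch the free space and stage (or at least alert) before they fill. A trivial watchdog sketch, in Python with made-up paths and a made-up 90% threshold, would be something like:

```python
#!/usr/bin/env python
"""Sketch: warn before an ADV_FILE device fills and savesets hang.

Pure standard library; the device paths and 90% threshold are made up.
"""
import os

AFTD_PATHS = ["/backup/d_daily", "/backup/d_monthly"]  # hypothetical mount points
THRESHOLD = 0.90                                       # warn at 90% used

def used_fraction(path):
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    avail = st.f_bavail * st.f_frsize
    return 1.0 - (float(avail) / total)

for path in AFTD_PATHS:
    used = used_fraction(path)
    if used >= THRESHOLD:
        # This is the point at which you'd kick off staging (e.g. via
        # nsrstage) rather than waiting for savesets to hang on a full device.
        print("WARNING: %s is %.0f%% used - stage now" % (path, used * 100))
```

Crude, but it beats walking in to find an entire group hung against a full disk backup unit.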

Nov 24 2009
 

Over at StorageNerve, and on Twitter, Devang Panchigar has been asking Is Storage Tiering ILM or a subset of ILM, but where is ILM? I think it’s an important question with some interesting answers.

Devang starts with defining ILM from a storage perspective:

1) A user or an application creates data and possibly over time that data is modified.
2) The data needs to be stored and possibly be protected through RAID, snaps, clones, replication and backups.
3) The data now needs to be archived as it gets old, and retention policies & laws kick in.
4) The data needs to be search-able and retrievable NOW.
5) Finally the data needs to be deleted.

I agree with items 1, 3, 4 and 5 – as per previous posts, for what it’s worth, I believe that 2 belongs to a sister activity which I define as Information Lifecycle Protection (ILP) – something that Devang acknowledges as an alternative theory. (I liken the logic of separating ILM and ILP to that of separating operational production servers and support production servers.)

The above list, for what it’s worth, is actually a fairly astute/accurate summary of the involvement of the storage industry thus far in ILM. Devang rightly points out that Storage Tiering (migrating data between different speed/capacity/cost storage based on usage, etc.), doesn’t address all of the above points – in particular, data creation and data deletion. That’s certainly true.

What’s missing from ILM from a storage perspective are the components that storage can only peripherally control. Perhaps that’s not entirely accurate – the storage industry can certainly participate in the remaining components (indeed, particularly in NAS systems it’s absolutely necessary, as a prime example) – but it’s more than just the storage industry. It’s operating system vendors. It’s application vendors. It’s database vendors. It is, quite frankly, the whole kit and caboodle.

What’s missing in the storage-centric approach to ILM is identity management – or to be more accurate in this context, identity management systems. The brief outline of identity management is that it’s about moving access control and content control out of the hands of the system, application and database administrators, and into the hands of human resources/corporate management. So a system administrator could have total systems access over an entire host and all its data but not be able to open files that (from a corporate management perspective) they have no right to access. A database administrator can fully control the corporate database, but can’t access commercially sensitive or staff salary details, etc.

Most typically though, it’s about corporate roles, as defined in human resources, being reflected from the ground up in system access options. That is, when human resources set up a new employee as having a particular role within the organisation (e.g., “personal assistant”), that triggers the appropriate workflows to set up that person’s accounts and access privileges for IT systems as well.
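To make that concrete, here’s a toy sketch in Python – every role and entitlement in it is invented – of the kind of mapping an identity management system maintains. The roles and their entitlements are owned by HR and management, and assigning a role is what drives the provisioning, rather than a sysadmin hand-creating accounts:

```python
"""Toy illustration of HR-driven provisioning; every name here is invented."""

# The entitlements attached to each corporate role are owned by HR and
# management, not by system, application or database administrators.
ROLE_ENTITLEMENTS = {
    "personal assistant":     {"email", "calendar", "shared_docs_read"},
    "database administrator": {"email", "db_admin"},   # note: no "salary_data"
    "hr officer":             {"email", "salary_data"},
}

def on_role_assigned(employee, role, grant):
    """Called by the HR workflow when a role is assigned.

    grant(employee, entitlement) stands in for whatever actually creates
    the accounts and ACLs on each target system.
    """
    for entitlement in sorted(ROLE_ENTITLEMENTS[role]):
        grant(employee, entitlement)

# Example: HR hires a new personal assistant, and provisioning follows.
if __name__ == "__main__":
    on_role_assigned(
        "j.smith", "personal assistant",
        grant=lambda emp, ent: print("grant %s -> %s" % (ent, emp)),
    )
```

Real identity management suites are of course vastly more involved than this; the point is simply where the authority sits.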

If you think that’s insane, you probably don’t appreciate the purpose of it. System/app/database administrators I talk to about identity management frequently raise trust (or the perceived lack thereof) involved in such systems. I.e., they think that if the company they work for wants to implement identity management they don’t trust the people who are tasked with protecting the systems. I won’t lie, I think in a very small number of instances, this may be the case. Maybe 1%, maybe as high as 2%. But let’s look at the bigger picture here – we, as system/application/database administrators currently have access to such data not because we should have access to such data but because until recently there’s been very few options in place to limit data access to only those who, from a corporate governance perspective, should have access to that data. As such, most system/app/database administrators are highly ethical – they know that being able to access data doesn’t equate to actually accessing that data. (Case in point: as the engineering manager and sysadmin at my last job, if I’d been less ethical, I would have seen the writing on the wall long before the company fell down under financial stresses around my ears!)

Trust doesn’t wash in legal proceedings. Trust doesn’t wash in financial auditing. Particularly in situations where accurate logs aren’t maintained in an appropriately secured manner to prove that person A didn’t access data X. The fact that the system was designed to permit A to access X (even as part of A’s job) is in some financial, legal and data sensitivity areas, significant cause for concern.

Returning to the primary point though, it’s about ensuring that the people who have authority over someone’s role within a company (human resources/management) have control over the processes that configure the access permissions that person has. It’s also about making sure that those workflows are properly configured and automated so there’s no room for error.

So what’s missing – or what’s only at the barest starting point, is the integration of identity/access control with ILM (including storage tiering) and ILP. This, as you can imagine, is not an easy task. Hell, it’s not even a hard task – it’s a monumentally difficult task. It involves a level of cooperation and coordination between different technical tiers (storage, backup, operating systems, applications) that we rarely, if ever see beyond the basic “must all work together or else it will just spend all the time crashing” perspective.

That’s the bit that gives the extra components – control over content creation and destruction. The storage industry on its own does not have the correct levels of exposure to an organisation in order to provide this functionality of ILM. Nor do the operating system vendors. Nor do the database vendors or the application vendors – they all have to work together to provide a total solution on this front.

I think this answers (indirectly) Devang’s question/comment on why storage vendors, and indeed, most of the storage industry, has stopped talking about ILM – the easy parts are well established, but the hard parts are only in their infancy. We are after all seeing some very early processes around integrating identity management and ILM/ILP. For instance, key management on backups, if handled correctly, can allow for situations where backup administrators can’t by themselves perform the recovery of sensitive systems or data – it requires corporate permissions (e.g., the input of a data access key by someone in HR, etc.) Various operating systems and databases/applications are now providing hooks for identity management (to name just one, here’s Oracle’s details on it.)

So no, I think we can confidently say that storage tiering in and of itself is not the answer to ILM. As to why the storage industry has for the most part stopped talking about ILM, we’re left with one of two choices – it’s hard enough that they don’t want to progress it further, or it’s sufficiently commercially sensitive that it’s not something discussed without the strongest of NDAs.

We’ve seen in the past that the storage industry can cooperate on shared formats and standards. We wouldn’t be in the era of pervasive storage we currently are without that cooperation. Fibre-channel, SCSI, iSCSI, FCoE, NDMP, etc., are proof positive that cooperation is possible. What’s different this time is the cooperation extends over a much larger realm to also encompass operating systems, applications, databases, etc., as well as all the storage components in ILM and ILP. (It makes backups seem to have a small footprint, and backups are amongst the most pervasive of technologies you can deploy within an enterprise environment.)

So we can hope that the reason we’re not hearing a lot of talk about ILM any more is that all the interested parties are either working on this level of integration, or even making the appropriate preparations themselves in order to start working together on this level of integration.

Fingers crossed people, but don’t hold your breath – no matter how closely they’re talking, it’s a long way off.

Nov 23 2009
 

While still not appearing on the NetWorker compatibility lists, it would certainly appear from testing that NetWorker 7.6 plays a whole lot nicer with Mac OS X 10.6 (Snow Leopard) than its predecessors did.

When Snow Leopard came out, I posted that NetWorker 7.5 was able to work with it, but did qualify that if you were backing up laptops or other hosts that periodically changed location, NetWorker would get itself into a sufficient knot that it would become necessary to reinstall and/or reboot in order to get a successful backup. (In actual fact, it was rare that a reboot would be sufficient.) Fixed-point machines – e.g., servers and desktops – typically did not experience this problem.

It would seem that 7.6 handles location changes considerably better.

If you’re experiencing intermittent problems with NetWorker 7.5 not wanting to work properly on Mac OS X 10.6 hosts, I’d suggest upgrading to NetWorker 7.6 as a test to see whether that resolves your problems.

The usual qualifiers remain – read the release notes before doing an upgrade.

Nov 21 2009
 

One of the features most missed by NetWorker users since the introduction of NetWorker Management Console has been the consolidated view of NetWorker activities that had previously been always available.

For the last few releases, this has really only been available via nsrwatch, the utility available in NetWorker on Unix platforms, but sadly missing in NetWorker on Windows systems. If you’re not familiar with nsrwatch (which is possible if you’ve only recently been working with NetWorker, or mainly come from a Windows approach), it gives you a view like the following:

nsrwatch

This style of view used to be available in the old “nwadmin” program for both Unix and Windows, and administrators that came from historical releases which supported nwadmin have sorely missed that overview-at-a-glance monitoring, as opposed to wading through separate tabs to see glimpses of activities via NMC. It’s sort of like the difference between looking into a building where the entire front wall is made of glass, and looking into a building where there are four windows but, to open one, you have to close the other three.

With NetWorker 7.6, you can kiss goodbye to that blinkered approach. In all its glory, we have the overview-at-a-glance monitoring back:

Consolidated monitoring in 7.6

Is this compelling enough reason to run out and immediately upgrade to 7.6? Probably not – you need to upgrade based on site requirements, existing patches, known issues and compatibilities, etc. I.e., you need to read the release notes and decide what to do. Preferably, you should have a test environment you can run it up in – or at least develop a back-out plan should the upgrade not work entirely well for you.

Is it a compelling enough reason to at least upgrade your NMC packages to 7.6, or install a dedicated NMC server running 7.6 instead of a pre-7.6 release?

Hell yes.

Nov 21 2009
 

While NetWorker 7.6 is not available for download as of the time I write this, the documentation is available on PowerLink. For those of you chomping at the bit to at least read up on NetWorker 7.6, now is the time to wander over to PowerLink and delve into the documentation.

The last couple of releases of NetWorker have been interesting for me when it comes to beta testing. In particular, I’ve let colleagues delve into VCB functionality, etc., and I’ve stuck to “niggly” things – e.g., checking for bugs that have caused us and our customers problems in earlier versions, focusing on the command line, etc.

For 7.6 I also decided to revisit the documentation, particularly in light of some of the comments that regularly appear on the NetWorker mailing list about the sorry state of the Performance Tuning and Optimisation Guide.

It’s pleasing, now that the documentation is out, to read the revised and up to date version of the Performance Tuning Guide. Regular critics of the guide, for instance, will be pleased to note that FDDI does not appear once. Not once.

Does it contain every possible useful piece of information that you might use when trying to optimise your environment? No, of course not – nor should it. Everyone’s environment will differ in a multitude of ways. Any random system patch can affect performance. A single dodgy NIC can affect performance. A single misconfigured LUN or SAN port can affect performance.

Instead, the document now focuses on providing a high level overview of performance optimisation techniques.

Additionally, recommendations and figures have been updated to support current technology. For instance:

  • There’s a plethora of information on PCI-X vs PCI Express.
  • RAM guidelines for the server based on the number of clients has been updated.
  • NMC finally gets a mention as a resource hog! (Obviously, that’s not the words used, but it’s the implication for larger environments. I’ve been increasingly encouraging larger customers to put NMC on a separate host for this reason.)
  • There’s a whole chunk on client parallelism optimisation, both for the clients and the backup server itself.

I don’t think this document is perfect, but if we’re looking at the old document vs the new, and the old document scored a 1 out of 10 on the relevancy front, this at least scores a 7 or so, which is a vast improvement.

Oh, one final point – with the documentation now explicitly stating:

The best approach for client parallelism values is:

– For regular clients, use the lowest possible parallelism settings to best balance between the number of save sets and throughput.

– For the backup server, set highest possible client parallelism to ensure that index backups are not delayed. This ensures that groups complete as they should.

Often backup delays occur when client parallelism is set too low for the NetWorker server. The best approach to optimize NetWorker client performance is to eliminate client parallelism, reduce it to 1, and increase the parallelism based on client hardware and data configuration.

(My emphasis)

Isn’t it time that the default client parallelism value were decreased from the ridiculously high 12 to 1, and we got everyone to actually think about performance tuning? I was overjoyed when I’d originally heard that the (previous) default parallelism value of 4 was going to be changed, then horrified when I found out it was being revised up, to 12, rather than down to 1.
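For what it’s worth, winding client parallelism back doesn’t have to be a client-by-client exercise in NMC either – nsradmin in non-interactive mode will do it in bulk. Here’s a rough sketch (a Python wrapper around nsradmin -i, with hypothetical client names; as always, test against a lab server before touching production):

```python
#!/usr/bin/env python
"""Sketch: wind client parallelism back to 1 for a list of regular clients.

Uses NetWorker's nsradmin in non-interactive mode (nsradmin -i file).
The client names are made up; test against a lab server before touching
production, and add "-s servername" to target a remote NetWorker server.
"""
import subprocess
import tempfile

CLIENTS = ["client-a.example.com", "client-b.example.com"]  # hypothetical

commands = []
for client in CLIENTS:
    commands.append(". type: NSR client; name: %s" % client)  # select the client
    commands.append("update parallelism: 1")                  # and update it
commands.append("")

with tempfile.NamedTemporaryFile("w", suffix=".nsradmin", delete=False) as script:
    script.write("\n".join(commands))
    script_path = script.name

subprocess.run(["nsradmin", "-i", script_path], check=True)
```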

Anyway, if you’ve previously dismissed the Performance Tuning Guide as being hopelessly out of date, it’s time to go back and re-read it. You might like the changes.