Considerations for client parallelism for NetWorker server

While doing a few tests for this blog on a lab server, I noticed what looked like odd behaviour. I had started a manual save on the NetWorker server for local data. That backup was writing to tape, and while it was going I kicked off a group for an altogether different client.

The backup for the client ran, but then seemed to hang on completion. As the backup-to-tape was merely a test of filling a tape, and could therefore be restarted at any time, I cancelled it on a hunch, and the savegroup completed almost immediately.

It was “hung” waiting for a free unit of parallelism for the NetWorker server in order to write the client indices. It turned out that I’d forgotten a change I’d made on Friday to test some other settings – that change being to reduce the parallelism of the client instance of the NetWorker server to 1.

With this in place, the backup server couldn’t complete the savegroup because it couldn’t write its indices, and it couldn’t write its indices because it was only allowed a client parallelism of 1, and that unit of parallelism was occupied writing to tape.
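The blocking behaviour described above can be sketched with a toy simulation (illustrative only; this is not NetWorker code). The server's client parallelism is modelled as a semaphore with one unit: the long tape save holds the only unit, so the index backup requested when the group finishes must wait until the tape save releases it.

```python
import threading
import time

# Hypothetical sketch: with client parallelism 1, the NetWorker server has a
# single "unit" of parallelism. The tape save occupies it, so the index
# backup cannot start until the tape save completes (or is cancelled).
client_parallelism = threading.Semaphore(1)  # server's client parallelism = 1
events = []

def tape_save():
    with client_parallelism:
        events.append("tape save started")
        time.sleep(0.2)  # long-running backup-to-tape holds the unit
        events.append("tape save finished")

def index_backup():
    time.sleep(0.05)  # group completes mid-save; index write is requested
    events.append("index backup requested")
    with client_parallelism:  # blocks until the tape save releases the unit
        events.append("index backup started")

t1 = threading.Thread(target=tape_save)
t2 = threading.Thread(target=index_backup)
t1.start(); t2.start()
t1.join(); t2.join()

# The index backup is requested while the tape save runs, but can only
# start once the tape save has released the single unit of parallelism.
print(events)
```

Bumping the semaphore to 2 or more lets the index backup start immediately, which is exactly the effect of raising the server's client parallelism.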

That led me to wonder: how easy would it be, given this, for companies to experience delays in their backups due to too low a client parallelism setting on the NetWorker server? The answer – quite easy. After all, the first, most golden rule of client performance tuning in NetWorker is to minimise client parallelism: reduce it to 1, then work your way up based on client hardware and data configuration.

This means that it’s actually fairly critical that the NetWorker server have sufficient parallelism to ensure that index backups do not become an impediment to groups finishing. Based on this, I’d recommend that the client parallelism for the NetWorker server should:

  • Never be set to 1.
  • For small environments (under 30 servers) be set to at least 4.
  • For medium environments (say, 31-100) be set to at least 8.
  • For larger environments (100+), be set to at least 8, but preferably one of:
    • The same as the actual server parallelism, or
    • The same as the highest group parallelism, if group parallelism is used.
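Those rules of thumb can be expressed as a small helper function (a hypothetical sketch of the guidance above, not anything official to NetWorker; the function name and parameters are my own):

```python
def recommended_server_client_parallelism(num_clients,
                                          server_parallelism=None,
                                          max_group_parallelism=None):
    """Sketch of the rules of thumb above for the NetWorker server's
    client parallelism. Illustrative only.

    num_clients          -- number of clients in the environment
    server_parallelism   -- the server's overall parallelism, if known
    max_group_parallelism -- highest group parallelism, if groups use it
    """
    if num_clients <= 30:
        return 4            # small environment: at least 4, never 1
    if num_clients <= 100:
        return 8            # medium environment: at least 8
    # Large environment: at least 8, preferably matching the server
    # parallelism or the highest group parallelism.
    candidates = [8]
    if server_parallelism is not None:
        candidates.append(server_parallelism)
    if max_group_parallelism is not None:
        candidates.append(max_group_parallelism)
    return max(candidates)

print(recommended_server_client_parallelism(20))                          # → 4
print(recommended_server_client_parallelism(60))                          # → 8
print(recommended_server_client_parallelism(250, server_parallelism=32))  # → 32
```

The point of the floor of 4 even for tiny sites is the scenario above: one unit can always be tied up by a long-running save, so a single unit is never enough.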

Note that the above entirely assumes that the backup server is a dedicated backup server. If the backup server is also, say, a file server*, then obviously different settings will need to be considered to avoid swamping the system.

In essence, while the main goal for regular clients is to achieve as low a client parallelism as possible – i.e., to optimise the balance between the number of savesets and throughput – for the backup server the goal should be to have as high a client parallelism as necessary to ensure that index backups are not delayed, so that groups finish when they are ready to finish.


* For what it’s worth, my recommendation is that in 100% of times, a backup server should be dedicated. That is, the primary and sole function of the server is to act as a backup server.

4 thoughts on “Considerations for client parallelism for NetWorker server”

  1. Many NetWorker shops also use savegrp -o to run their index backups. If that’s the case, the server’s client parallelism isn’t that important, as the indexes aren’t all being saved during backup prime time.

    What is your opinion on using savegrp -o for large NetWorker installations?

    1. “In essence, while the main goal for regular clients is to achieve as low a client parallelism as possible – i.e., to optimise the balance between number of savesets and throughput, for the backup server the goal should be to have as high a client parallelism as necessary to ensure that index backups are not delayed, so as to ensure that groups finish when they are ready to finish.”

      The reason behind this is that index backups use the backup server as a client.

  2. I assume you’re actually referring to places where they exclusively run savegrp -O instead of allowing index backups to run with group backups?

    Overall, I don’t actually have any problem with savegrp -O, and in fact use it from time to time myself. However, I do generally have concerns with leaving index backups to “some other time” as opposed to “when NetWorker would like to do them”. They are, after all, reasonably important when it comes to rapid recovery of data – and particularly recent data, which for most sites is the data that’s most likely to be required for recovery.

    My personal take is that shops that turn off automated index backups and instead only do them via savegrp -O introduce additional risk into their site that can usually be avoided by a (typically small) change to the architecture. I.e., I’m all about designing risk out of, rather than into, the solution, so I can’t say I’m a big fan of deferred index backups.
