Seven enhancements I’d like to see in NetWorker

Much as I love using NetWorker, I do periodically have my gripes with it. As a long-term user (I go back to 1996), there's a bunch of smaller things that I'd love to see fixed. These aren't the earth-shattering changes that we see slide decks for, but rather the sort of changes that make NetWorker an easier fit for more environments.

7 quibbles

Bootstrap to a Storage Node

The mmrecov utility needs to be run on the backup server, referencing a volume on the backup server.

Why?

Historically this made sense, but storage nodes have been ingrained in how we use NetWorker for so long that forcing device requirements onto the backup server is an anachronism. Indeed, it's getting increasingly common to see NetWorker environments where the server runs in what I call director mode – it doesn't actually do any backups itself; they're all done by storage nodes. Well, except for the bootstrap.

It’s time to see NetWorker support there being no devices on the backup server.
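
In the meantime, the usual workaround in a director mode environment is to keep one small disk device on the server purely for bootstrap backups. A minimal sketch via nsradmin, assuming an AFTD at an illustrative path:

# nsradmin
nsradmin> create type: NSR device; name: /nsr/bootstrap_aftd; media type: adv_file

It works, but it's exactly the sort of anachronism I'd like to see removed.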

Fix Peer-Information Wizard

In all the years I've been running the NetWorker blog, the one article that gets hit more than anything else is Fixing NSR Peer Information. There are two aspects to this:

  1. NMC alerts (when you first log in) should show peer information errors detected in the logs.
  2. There should be an option to right-click on a client and choose to fix the peer information error. In some cases this can't be corrected remotely; where that's the case, the server should direct the user to a new utility installed as part of the client install process, along with the exact command to run the fix. That command would basically do the nsradmin run against the client's NSRLA resource.

Sure, it might be a bit fiddly, but it’s important.
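
For reference, the fix such a utility would wrap is today done by hand with nsradmin against nsrexecd – something along these lines, where clienthost and peerhost are placeholders for the affected pair:

# nsradmin -p nsrexec -s clienthost
nsradmin> print type: NSR peer information; name: peerhost
nsradmin> delete type: NSR peer information; name: peerhost

On the next connection the peer information is re-negotiated automatically.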

Bring Back Cross Platform Recoveries

Supposedly cross-platform recoveries were never supported. That may be the case, but they certainly worked throughout all of NetWorker version six. Let’s see them come back.

Give Schedules to Scheduled Cloning

Scheduled cloning is a great feature, but the scheduling mechanism is just a bit too basic for my preferences. Being able to use an override of "full last Friday every month" in standard schedules is incredibly useful – it makes configuration of backups highly flexible. Why isn't there a similar option ("clone last Friday every month") for scheduled clones?
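
To put it concretely, a hypothetical clone schedule might look like this in nsradmin terms – neither the resource type nor the clone action exists today, which is precisely the gap:

# nsradmin
nsradmin> create type: NSR clone schedule; name: Monthly DR Clone; period: Month; override: clone last friday every month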

Let’s go back to the future

Go to your backup server. Create a single file, identify your standard backup pool and run the command:

# save -b pool -e "+40 years" filename
save: invalid expiration time: +40 years

NetWorker still can’t see past 2038. Ten years ago that was a long time away. Not quite so far away now.
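
The culprit is a signed 32-bit time_t, which runs out 2^31 - 1 seconds past the Unix epoch. GNU date shows the boundary (output format may vary by platform):

# date -u -d @2147483647
Tue Jan 19 03:14:07 UTC 2038

Until that's fixed, any expiration handed to save -e has to resolve to a date before then.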

Let me name a jukebox when I create it in NMC

I love the utility jbconfig, but when I’ve got a jukebox with drive sharing, it’s much more convenient to create it in NMC. Except:

  1. Scan for devices
  2. Configure library
  3. Disable library
  4. Edit library properties
  5. Put in the name I want
  6. Enable library

Steps 3-6 shouldn’t exist. When I create a library, I should be prompted for the library name. Just like jbconfig has been doing since … forever.
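
Failing that, even a documented nsradmin shortcut would do. Assuming the name attribute of the jukebox resource can be updated outside NMC – which I haven't verified – it might look like:

# nsradmin
nsradmin> . type: NSR jukebox; name: autodetected-name
nsradmin> update name: LIB01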

Nested directives

As the number of clients managed by NetWorker increases, so too does the challenge of keeping the configuration under control. It's very easy to end up in situations where dozens of hosts will share 90% of their directive requirements, but have a small variance based on install locations or system purpose. Not being able to include one set of directives in another has been a perennial bugbear for me.
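
To illustrate: today the shared rules have to be copied in full into every variant directive resource. With a hypothetical include keyword (not valid NetWorker directive syntax today), the common 90% could live in one place and each variant would shrink to something like:

include: Standard-Unix-Directives
<< /opt/app1 >>
skip: logs
+skip: *.scratch

Here Standard-Unix-Directives is an assumed name for the shared directive resource holding the common skip rules.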

So what about you?

So those are the seven enhancements I'd really like to see in NetWorker at a more basic level. There's nothing earth-shattering in there, but I think it's easy to get so focused on the major enhancements that the little ones we need get neglected.

So what about you? What quibbles do you have with NetWorker that you’d like to see changed?

8 thoughts on “Seven enhancements I’d like to see in NetWorker”

  1. Hello Preston,

    What about regexp support in directives and/or saveset lists, for dynamic file and directory selection (to skip or to include)?
    On big volumes like file servers, this would allow easier setup of parallel save streams from the same volume without losing newly added data.

    Great list, by the way!

    Cheers

    Th

    1. Hi Thierry,

      The multi-streaming options for Unix savesets go some of the distance – but yes, I agree regexp directives would be great, too!

      Cheers.

  2. I would like to see NetWorker savesets able to back up all files matching a pattern, e.g. *.backup.gz

    1. Me too – at the very least, the platforms should behave the same.
      "C:\a*" works, and we used it for a long time to create parallel streams.
      The problem is that you need too many savesets to cover all the files,
      and every savestream creates a new snapshot.

      But this is old style; the future is BBB – who wants file based backup? (ironic)

  3. I'd like to see conditional group management: "if group A is running, then do not run group B". It's easy to do with a script – see the sketch below – but many sites don't support scripts. It would be so useful with database servers, where you want a filesystem backup not to crash a DB backup.
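
    Something along these lines, with savegrp and the group names as placeholders for your environment:

    #!/bin/sh
    # Run GroupB only if savegrp isn't already running GroupA.
    if pgrep -f "savegrp.*GroupA" >/dev/null 2>&1; then
        echo "GroupA still running; deferring GroupB"
    else
        savegrp GroupB
    fi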

  4. 1) Email or SMS alerts once ANY manual or GUI restore completes.

    2) Data Domain reports to accurately forecast when space will be used up. In a dedupe environment it's hard to calculate when it will run out, so it's very hard to tell a department: "hey, go buy more or it will run out by xxyy".

  5. Regarding the “Let’s go back to the future” section:

    Official answer from Mandic Vladimir, Sr. Technical Director & Chief Software Architect, below:

    “The NetWorker product depends on the operating system for the year 2038 limitation (http://en.wikipedia.org/wiki/Year_2038_problem).
    The issue will be addressed in the future, but in the meantime customers should use retention policies below 2038; if needed, customers can always extend them later.”

  6. Limitations in staging.

    I'd like to see the ability to set a maximum amount of data for one staging session, with another staging session started if there is still data in the AFTD. Maybe a limit on the number of concurrent staging sessions would be helpful as well.
