This is not necessarily reflective of my customers, though several of them have expressed a desire for at least one or two features in the following list. Nevertheless, there are features not currently in NetWorker that I feel would significantly enhance it.
(I should point out that even missing these features, I still think it’s superior to other backup products.)
So, here’s my personal wish-list for NetWorker (in no particular order), coming from a continuous use of the product since 1996:
- Enhancements to ADV_FILE backup:
  - The nsrmmd service needs to support some form of proxying, so that savesets being written to a disk backup unit that fills can be “moved” to another nsrmmd service for completion on another ADV_FILE unit.
  - When backing up to ADV_FILE units, NetWorker needs to pick the next volume to write to by capacity, not age. This would prevent the same ADV_FILE unit(s) from filling repeatedly, and would stagger writes better across all devices.
- Enhancements to backup, more generally:
  - Simultaneously generate a backup and one or more clones, with an option controlling whether the failure of one, a particular number, or all of them constitutes a failure of the job.
  - Support for expiration/browse dates beyond 2038.
- Enhancements to recovery:
  - A new recovery log that is automatically populated with details of who recovered what. This should be kept at the NetWorker server level, rather than the NMC level.
  - Administrators should be able to optionally allow cross-platform directed recoveries. (This will become particularly pertinent as companies complete migration activities from Novell NetWare to Novell Open Enterprise Server. As one platform is NetWare and the other Linux, recovering old NetWare data to an OES server is not supported.)
  - Preferred recovery pools – currently NetWorker picks savesets for recovery based on nsavetime. In disk backup environments where data has been cloned and then staged, this can result in NetWorker requesting recovery from a clone rather than the staged volume. While this can be ameliorated by individually setting the “offsite” flag for each volume, it would be preferable to nominate “priorities” for pools, so that NetWorker recommends volumes from the highest-priority pool when a recovery is required and no volumes are online.
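To illustrate the capacity-based volume selection suggested above, here's a minimal sketch. This is hypothetical Python, not NetWorker code; the class and function names are invented for illustration only:

```python
# Hypothetical sketch: choosing the next ADV_FILE volume by free
# capacity rather than by age. All names are illustrative and do
# not correspond to actual NetWorker internals.

from dataclasses import dataclass

@dataclass
class AdvFileVolume:
    name: str
    capacity_gb: float   # total size of the disk backup unit
    used_gb: float       # space already consumed
    labelled_at: int     # epoch time the volume was labelled

    @property
    def free_gb(self) -> float:
        return self.capacity_gb - self.used_gb

def pick_by_age(volumes):
    """Age-based selection: the oldest volume wins every time,
    so the same units fill over and over."""
    return min(volumes, key=lambda v: v.labelled_at)

def pick_by_capacity(volumes):
    """Capacity-based selection: the volume with the most free
    space wins, staggering writes across all devices."""
    return max(volumes, key=lambda v: v.free_gb)

volumes = [
    AdvFileVolume("ADV.001", 500, 480, labelled_at=1000),
    AdvFileVolume("ADV.002", 500, 120, labelled_at=2000),
    AdvFileVolume("ADV.003", 500, 300, labelled_at=3000),
]

print(pick_by_age(volumes).name)       # ADV.001 - nearly full, keeps filling
print(pick_by_capacity(volumes).name)  # ADV.002 - most headroom
```

The difference is a one-line change of sort key, but operationally it spreads load across all ADV_FILE units instead of hammering the oldest one until it fills.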
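The failure semantics I'd want for simultaneous backup-plus-clones can be sketched as a threshold: the job succeeds if at least some required number of the streams complete. Again, this is a hypothetical Python illustration, not anything resembling NetWorker's actual job logic:

```python
# Hypothetical sketch of "backup plus inline clones" failure
# semantics: the overall job succeeds if at least `required` of
# the streams (1 backup + N clones) complete successfully.

from concurrent.futures import ThreadPoolExecutor

def run_streams(streams, required):
    """streams: dict of name -> callable returning True on success.
    required: how many streams must succeed. 'All must succeed'
    is required == len(streams); 'any one is enough' is 1."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in streams.items()}
    succeeded = [name for name, f in futures.items() if f.result()]
    return len(succeeded) >= required, succeeded

# Backup and one clone succeed, a second clone fails; with a
# threshold of 2 the job as a whole still counts as a success.
ok, succeeded = run_streams(
    {"backup": lambda: True, "clone1": lambda: True, "clone2": lambda: False},
    required=2,
)
print(ok, sorted(succeeded))  # True ['backup', 'clone1']
```

Exposing `required` as a policy knob covers all three cases in the wish-list item (one failure fails the job, a particular number, or only total failure).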
The wish list is by no means complete, but these are the things I tend to chafe against more frequently than others.
Since I don’t want it said that I just come up with a wish-list without any idea of how its items could be accomplished, I’ll focus on what would (theoretically) need to be done to fix up disk backup, accomplish simultaneous (inline) cloning, and so on. (Admittedly, this is my “I don’t have access to the NetWorker source code and never have, but architecturally I think EMC should do this” train of thought, but that doesn’t invalidate the theoretical architecture.)
The current “tiering” used to get data to a volume needs to be extended by one layer. Currently the (general) process is that the client save process (or other nominated process) sends data to the nominated nsrmmd process for writing directly to a volume.
It’s this model that needs to change. Rather than the client sending directly to the nsrmmd process, it would instead send to a proxy process – let’s call that the fictional nsrmmpd process. This process would act as a “broker”: it would receive the backup stream from each client and determine which nsrmmd should facilitate the backup. The important feature, however, is that there would not be a 1-to-1 relationship between nsrmmds and nsrmmpds – rather, there would be a 1-to-1 relationship between client backups and nsrmmpds, and the nsrmmpd would be able to redirect the client data stream to another nsrmmd on an as-needed basis. This would resemble the following:
The advantage of this style of daemon layout is that client backup processes would not have to re-negotiate with nsrmmd processes; rather, they would be passed through by the proxy process to whichever nsrmmd process was most suitable at any given time. In theory this would be completely seamless and undetectable to the client.
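The redirect-on-full behaviour described above can be sketched in a few lines. This is a toy Python model of the idea, assuming invented `Writer` and `Proxy` classes standing in for nsrmmd and the hypothetical nsrmmpd; nothing here reflects actual NetWorker internals:

```python
# Hypothetical sketch of the nsrmmpd "broker" idea: the client keeps
# a single connection to its proxy, and the proxy transparently fails
# over to another writer (nsrmmd) when the current volume fills.

class Writer:
    """Stands in for an nsrmmd writing to one ADV_FILE volume."""
    def __init__(self, name, capacity):
        self.name, self.capacity, self.written = name, capacity, 0

    def write(self, nbytes):
        if self.written + nbytes > self.capacity:
            raise IOError(f"{self.name}: volume full")
        self.written += nbytes

class Proxy:
    """Stands in for nsrmmpd: one per client backup, many writers."""
    def __init__(self, writers):
        self.writers = list(writers)
        self.current = self.writers[0]
        self.log = []

    def write(self, nbytes):
        while True:
            try:
                self.current.write(nbytes)
                return
            except IOError:
                # Volume full: redirect to the writer with the most
                # headroom. The client never sees the renegotiation.
                others = [w for w in self.writers if w is not self.current]
                self.current = max(others, key=lambda w: w.capacity - w.written)
                self.log.append(f"redirected to {self.current.name}")

proxy = Proxy([Writer("nsrmmd.1", capacity=100), Writer("nsrmmd.2", capacity=500)])
for _ in range(20):   # the client just keeps sending 10-unit chunks
    proxy.write(10)
print(proxy.log)      # ['redirected to nsrmmd.2'] - seamless to the client
```

The key design point is that the failover loop lives entirely inside the proxy: the client's `write` call simply blocks slightly longer during a redirect, which is why the model avoids any client-side renegotiation.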
Hi Preston.
This is number one on my wishlist.
I wish it were possible to control access to the information in the media DB more granularly. It’s actually possible for any client to run an mminfo query and retrieve information about all the other clients.
That’s OK in an environment that’s rather small and not distributed. The command line guide says this about privilege requirements for retrieving data from the media DB (page 179, at the bottom):
“PRIVILEGE REQUIREMENTS
A User with “Recover Local Data” privilege is allowed to query the media database for save set information only for the client where mminfo command is invoked”
So evidently, out of the box, all clients have more privilege than “Recover Local Data”. Should that be necessary?
Best regards,
Johannes
I’d not noticed that before – I’ll have to look at the documentation closely to confirm. If that’s the case, then it can be filed as a bug (rather than an RFE), as mminfo would not be working as its feature set is advertised. I’ll have a closer look, and may file a bug report over it – it’s a lot easier to get attention on bugs than RFEs 🙂
Johannes
Had a discussion with some people in EMC about this one overnight. It turns out the permissions are working – there’s just an option there that a lot of people (myself included) don’t immediately notice.
The ability to see savesets from other systems comes from the “Monitor NetWorker” privilege which is assigned to the Users user group by default.
However, if you take this privilege away from the “Users” user group, end users can’t see backup details from other clients.
If you then have mid-level users whom you want to be able to do local backup and recovery, but also to see backups for other systems (e.g., help desk/operations staff), you’d create a new user group that has all three privileges again.
I’ll do a blog posting about this tomorrow.
Cheers,
Preston.