Feb 07 2018
 

The world is changing, and data protection is changing with it. (OK, that sounds like an ad for Veridian Dynamics, but I promise I’m serious.)

One of the areas in which data protection is changing is that backup environments are growing in terms of deployments. It’s quite common these days to see multiple backup servers deployed in an environment – whether that’s due to acquisitions, required functionality, network topology or physical locations, the reason doesn’t really matter. What does matter is that as you increase the number of systems providing protection within an environment, you want to be able to still manage and monitor those systems centrally.

Data Protection Central (DPC) was released earlier this month, and it’s designed from the ground up as a modern, HTML5 web-based system to allow you to monitor your Avamar, NetWorker and Data Domain environments, providing health and capacity reporting on systems and backup. (It also builds on the Multi Systems Manager for Avamar to allow you to perform administrative functions within Avamar without leaving the DPC console – and, well, more is to come on that front over time.)

I’ve been excited about DPC for some time. You may remember a recent post of mine talking about Data Domain Management Center (DDMC); DPC isn’t (at the moment at least) a replacement for DDMC, but it’s built in the same spirit of letting administrators have easy visibility over their entire backup and recovery environment.

So, what’s involved?

Well, let’s start with the price. DPC is $0 for NetWorker and Avamar customers. That’s a pretty good price, right? (If you’re looking for the product page on the support website by the way, it’s here.)

You can deploy it in one of two ways. If you’ve got a SLES server deployed within your environment that meets the requirements, you can download a .bin installer to drop DPC onto that system. The other way – and quite a simple way, really – is to download a VMware OVA file and deploy it within your virtual infrastructure. (Remember, one of the ongoing themes of DellEMC Data Protection is to allow easy virtual deployment wherever possible.)

So yesterday I downloaded the OVA file and today I did a deployment. From start to finish, including gathering screenshots of its operation, the deployment, configuration and use took me about an hour.

When you deploy the OVA file, you’ll get prompted for configuration details so that there’s no post-deployment configuration you have to muck around with:

Deploying DPC as an OVA - Part 1

At this point in the deployment, I’ve already selected where the virtual machine will deploy, and what the disk format is. (If you are deploying into a production environment with a number of systems to manage, you’ll likely want to follow the recommendations for thick provisioning. I chose thin, since I was deploying it into my lab.)

You fill in standard networking properties – IP address, gateway, DNS, etc. Additionally, per the screenshot below, you can also immediately attach DPC to your AD/LDAP environment for enterprise authentication:

DPC Deployment, LDAP

I get into enough trouble at home over IT complexity as it is, so I don’t run LDAP any more – which meant there wasn’t anything else for me to do here.

The deployment is quite quick, and after you’re done, you’re ready to power on the virtual machine.

DPC Deployment, ready to power on

One thing you’ll want to be aware of is that the initial power-on and configuration is remarkably quick. (After power-on, the system was ready to let me log on within 5 minutes or so.)

It’s an HTML5 interface – that means there’s no Java Web Start or anything like that; you simply point your web browser at the FQDN or IP address of the DPC server, and you can log in and access the system. (The documentation also includes details for changing the SSL certificate.)

DPC Login Screen

DPC follows Dell’s interface guidelines, so it’s a crisp, easy-to-navigate interface. The documentation includes details of your initial login ID and password, and of course, following security best practices, you’re prompted to change that default password on first login:

DPC Changing the Default Password

After you’ve logged in, you get to see the initial, default dashboard for DPC:

DPC First Login

Of course, at this point, it looks a wee bit blank. That makes sense – we haven’t added any systems to the environment yet. But that’s easily fixed, by going to System Management in the left-hand column.

DPC System Management

System management is quite straightforward – the icons directly under “Systems” and “Groups” are for add, edit and delete, respectively. (Delete simply removes a system from DPC; it doesn’t un-deploy the system, of course.)

When you click the add button, you’re prompted for the type of system you want to add to DPC. (Make sure you check the version requirements in the documentation, available on the support page.) Adding systems is a very straightforward operation as well. For instance, for Data Domain:

DPC Adding a Data Domain

Adding an Avamar server is likewise quite simple:

DPC Adding an Avamar Server

And finally, adding a NetWorker server:

DPC Adding a NetWorker Server

Now, you’ll notice here, DPC prompts you that there’s some added configuration to do on the NetWorker server; it’s about configuring the NetWorker rabbitmq system to be able to communicate with DPC. For now, that’s a manual process. After following the instructions in the documentation, I also added the following to my /etc/rc.d/rc.local file on my Linux-based NetWorker/NMC server to ensure it happened on every reboot, too:

# Re-register the DPC host with the NetWorker rabbitmq monitor on boot
/bin/cat <<EOF | /opt/nsr/nsrmq/bin/nsrmqctl
monitor andoria.turbamentis.int
quit
EOF
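On systemd-based distributions where rc.local isn’t enabled by default, a small oneshot unit can do the same job. This is a sketch only – the unit name, the dependency ordering and the assumption that NetWorker’s startup is exposed as networker.service are all mine; verify them against your own install:

```ini
# /etc/systemd/system/dpc-nsrmq-monitor.service (hypothetical name)
[Unit]
Description=Re-register DPC monitoring with the NetWorker rabbitmq service
# Assumes NetWorker startup is exposed as networker.service on this host
After=networker.service network-online.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'printf "monitor andoria.turbamentis.int\nquit\n" | /opt/nsr/nsrmq/bin/nsrmqctl'

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable dpc-nsrmq-monitor.service and it’ll run on every boot, just like the rc.local approach.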

It’s not just NetWorker, Avamar and Data Domain you can add – check out the list here:

DPC Systems you can add

Once I added all my systems, I went over to look at the Activities > Audit pane, which showed me:

DPC Activity Audit

Look at those times there – it took me all of 8 minutes to change the password on first login, then add 3 Data Domains, an Avamar Server and a NetWorker server to DPC. DPC has been excellently designed to enable rapid deployment and time to readiness. And guess how many times I’d used DPC before? None.

Once systems have been added to DPC and it’s had time to poll the various servers you’re monitoring, the dashboards start to populate. For instance, shortly after their addition, my lab DDVE systems were getting capacity reporting:

DPC Capacity Reporting (DD)

You can drill into capacity reporting by clicking on the capacity report dashboard element to get a tabular view covering Data Domain and Avamar systems:

DPC Detailed Capacity Reporting

On that detailed capacity view, you see basic capacity details for Data Domains and, down the right-hand side, details of each MTree on the Data Domain as well. (My Avamar server is reported there too.)

Under Health, you’ll see a quick view of all the systems you have configured and DPC’s assessment of their current status:

DPC System Health

In this case, I had two systems reported as unhealthy – one of my DDVEs had an email configuration problem I lazily had not gotten around to fixing, and likewise, my NetWorker server had a licensing error I hadn’t bothered to investigate and fix. Shamed by DPC, I jumped onto both and fixed them, pronto! That meant when I went back to the dashboards, I got an all clear for system health:

DPC Detailed Dashboard

I wanted to correct those 0s, so I fired off a backup in NetWorker, which resulted in DPC updating pretty damn quickly to show something was happening:

DPC Dashboard Backup Running

Likewise, when the backup completed and cloning started, the dashboard was updated quite promptly:

DPC Detailed Dashboard, Clone Running

You can also see details of what’s been going on via the Activities > System view:

DPC Activities - Systems

Then, with a couple of backup and clone jobs run, the Detailed Dashboard was updated a little more:

DPC, Detailed Dashboard More Use

Now, I mentioned before that DPC takes on some Multi Systems Manager functionality for Avamar, viz.:

DPC, Avamar Systems Management

So that’s back in the Systems Management view. Clicking the horizontal ‘…’ item next to a system lets you launch the individual system management interface, or in the case of Avamar, also manage policy configuration.

DPC, Avamar Policy View

In that policy view, you can create new policies, initiate jobs, and edit existing configuration details – all without having to go into the traditional Avamar interface:

DPC, Avamar Schedule Configuration

DPC, Avamar Retention Configuration

DPC, Avamar Policy Editing

That’s pretty much all I’ve got to say about DPC at this point in time – other than to highlight the groups function in System Management. By defining groups of resources (grouped however you want), you can filter dashboard views not only for individual systems but for groups too, allowing quick and easy review of very specific hosts:

DPC System Management - Groups

In my configuration I’ve grouped systems by whether they’re associated with an Avamar backup environment or a NetWorker backup environment, but you can configure groups however you need. Maybe you have services broken up by state or country, or maybe you have them distributed by customer or by the service you’re providing. Regardless of how you’d like to group them, you can filter through to them easily in DPC dashboards.

So there you go – that’s DPC v1.0.1. It’s honestly taken me more time to get this blog article written than it took me to deploy and configure DPC.

Note: Things I didn’t show in this article:

  • Search and Recovery – That’s where you’d add a DP Search system (I don’t have DP-Search deployed in my lab)
  • Reports – That’s where you’d add a DPA server, which I don’t have deployed in my lab either.

Search and Recovery lets you springboard into the awesome DP-Search web interface, and Reports will drill into DPA and extract the most popular reports people tend to access in DPA, all within DPC.

I’m excited about DPC and the potential it holds over time. And if you’ve got an environment with multiple backup servers and Data Domains, you’ll get value out of it very quickly.

Jan 26 2018
 

When NetWorker and Data Domain are working together, some operations can be done as a virtual synthetic full. It sounds like a tautology – virtual synthetic. In this basics post, I want to explain the difference between a synthetic full and a virtual synthetic full, so you can understand why this is actually a very important function in a modernised data protection environment.

The difference between the two operations is actually quite simple, and best explained through comparative diagrams. Let’s look at the process of creating a synthetic full, from the perspective of working with AFTDs (still less challenging than synthetic fulls from tape), and working with Data Domain Boost devices.

Synthetic Full vs Virtual Synthetic Full

On the left, we have the process of creating a synthetic full when backups are stored on a regular AFTD device. I’ve simplified the operation, since it happens in memory rather than requiring staging, etc. Effectively, the NetWorker server (or storage node) reads the various backups that need to be reconstituted into a new, synthetic full up into memory, and as chunks of the new backup are constructed, they’re written back down onto the AFTD device as a new saveset.

When a Data Domain is involved, though, the server gets a little lazier – instead, it simply has the Data Domain virtually construct the synthetic full. Remember, at the back end on the Data Domain, it’s all deduplicated segments of data, along with metadata maps that define what a complete ‘file’ sent to the system is. (In the case of NetWorker, by ‘file’ I’m referring to a saveset.) So the Data Domain assembles details of a new full without any data being sent over the network.

The difference is simple, but profound. In a traditional synthetic full, the NetWorker server (or storage node) is doing all the grunt work. It’s reading all the data up into itself, combining it appropriately and writing it back down. If you’ve got a 1TB full backup and 6 incremental backups, it’s having to read all that data – 1TB or more – up from disk storage, process it, and write another ~1TB backup back down to disk. With a virtual synthetic full, the Data Domain is doing all the heavy lifting. It’s being told what it needs to do, but it does the reading and processing itself, and does it more efficiently than a traditional data read.
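To make that concrete, here’s a toy Python model of a content-addressed segment store – purely illustrative, and nothing like the real Data Domain internals – showing that a virtual synthetic full is just a new metadata map over segments the store already holds:

```python
import hashlib

class ToyDedupStore:
    """Content-addressed store: savesets are just lists of segment hashes."""
    def __init__(self):
        self.segments = {}   # hash -> bytes (the deduplicated pool)
        self.savesets = {}   # saveset name -> ordered list of hashes

    def ingest(self, name, chunks):
        """Backing up: only previously unseen segments consume space."""
        refs = []
        for chunk in chunks:
            h = hashlib.sha256(chunk).hexdigest()
            self.segments.setdefault(h, chunk)
            refs.append(h)
        self.savesets[name] = refs

    def virtual_synthetic_full(self, new_name, source_names):
        """A new 'full' is just a new metadata map over existing segments:
        no segment data is read, moved or rewritten."""
        refs = []
        for src in source_names:
            refs.extend(self.savesets[src])
        self.savesets[new_name] = refs

store = ToyDedupStore()
store.ingest("full-monday", [b"base-block-1", b"base-block-2"])
store.ingest("incr-tuesday", [b"changed-block"])
segments_before = len(store.segments)
store.virtual_synthetic_full("synthetic-full", ["full-monday", "incr-tuesday"])
# The synthesised full added metadata only -- the segment pool is unchanged.
assert len(store.segments) == segments_before
```

In reality the synthesis merges and supersedes overlapping blocks rather than naively concatenating saveset maps, but the zero-data-movement property is the point.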

So, there’s actually a big difference between synthetic full and virtual synthetic full, and virtual synthetic full is anything but a tautology.

Dec 30 2017
 

With just a few more days of 2017 left, I thought it opportune to make the last post of the year a summary of some of what we’ve seen in the field of data protection in 2017.

2017 Summary

It’s been a big year, in a lot of ways, particularly at DellEMC.

Towards the end of 2016, but definitely leading into 2017, NetWorker 9.1 was released. That meant 2017 started with a bang, courtesy of the new NetWorker Virtual Proxy (NVP, or vProxy) backup system. This replaced VBA, allowing substantial performance improvements, and some architectural simplification as well. I was able to generate some great stats right out of the gate with NVP under NetWorker 9.1, and that applied not just to Windows virtual machines but also to Linux ones, too. NetWorker 9.1 with NVP allows you to recover tens of thousands or more files from image level backup in just a few minutes.

In March I released the NetWorker 2016 usage survey report – the survey ran from December 1, 2016 to January 31, 2017. That reminds me – the 2017 Usage Survey is still running, so you’ve still got time to provide data to the report. I’ve been compiling these reports now for 7 years, so there’s a lot of really useful trends building up. (The 2016 report itself was a little delayed in 2017; I normally aim for it to be available in February, and I’ll do my best to ensure the 2017 report is out in February 2018.)

Ransomware and data destruction made some big headlines in 2017 – repeatedly. Gitlab hit 2017 running with a massive data loss in January, which they subsequently blamed on a backup failure, when in actual fact it was a staggering process and people failure. It reminds one of the old Management 101 credo, “If you ASSuME, you make an ASS out of U and ME”. Gitlab’s issue may, at a very small level, have been a ‘backup failure’ – but only in the same way that running out of petrol because everyone in the house thought it was someone else’s turn to fill the tank is a ‘car failure’.

But it wasn’t just Gitlab. Next generation database users around the world – specifically, MongoDB – learnt the hard way that security isn’t properly, automatically enabled out of the box. Large numbers of MongoDB administrators around the world found their databases encrypted or lost as default security configurations were exploited on databases left accessible in the wild.

In fact, Ransomware became such a common headache in 2017 that it fell prey to IT’s biggest meme – the infographic. Do a quick Google search for “Ransomware Timeline” for instance, and you’ll find a plethora of options around infographics about Ransomware. (And who said Ransomware couldn’t get any worse?)

Appearing in February 2017 was Data Protection: Ensuring Data Availability. Yes, that’s right, I’m calling the release of my second book on data protection a big event in the realm of data storage protection in 2017. Why? This is a topic which is insanely critical to business success. If you don’t have a good data protection process and strategy within your business, you could literally lose everything that defines the operational existence of your business. There are three defining aspects I see in data protection considerations now:

  • Data is still growing
  • Product capability is still expanding to meet that growth
  • Too many businesses see data protection as a series of silos, unconnected – storage, virtualisation, databases, backup, cloud, etc. (Hint: They’re all connected.)

So on that basis, I do think a new book whose focus is to give a complete picture of the data storage protection landscape is important to anyone working in infrastructure.

And on the topic of stripping the silos away from data protection, 2017 well and truly saw DellEMC cement its lead in what I refer to as convergent data protection. That’s the notion of combining data protection techniques from across the continuum to provide new methods of ensuring SLAs are met, impact is eliminated, and data hops are minimised. ProtectPoint was first introduced to the world in 2015, and has evolved considerably since then. ProtectPoint allows primary storage arrays to integrate with data protection storage (e.g., VMAX3 to Data Domain) so that those really huge databases (think 10TB as a typical starting point) can have instantaneous, incremental-forever backups performed – all application integrated, but no impact on the database server itself. ProtectPoint though was just the starting position. In 2017 we saw the release of Hypervisor Direct, which draws a line in the sand on what Convergent Data Protection should be and do. Hypervisor direct is there for your big, virtualised systems with big databases, eliminating any risk of VM-stun during a backup (an architectural constraint of VMware itself) by integrating RecoverPoint for Virtual Machines with Data Domain Boost, all while still being fully application integrated. (Mark my words – hypervisor direct is a game changer.)

Ironically, in a world where target-based deduplication should be a “last resort”, we saw tech journalists get irrationally excited about a company heavy on marketing but light on functionality promote their exclusively target-deduplication data protection technology as somehow novel or innovative. Apparently, combining target based deduplication and needing to scale to potentially hundreds of 10Gbit ethernet ports is both! (In the same way that releasing a 3-wheeled Toyota Corolla for use by the trucking industry would be both ‘novel’ and ‘innovative’.)

Between VMworld and DellEMC World, there were some huge new releases by DellEMC this year though, by comparison. The Integrated Data Protection Appliance (IDPA) was announced at DellEMC world. IDPA is a hyperconverged backup environment – you get delivered to your datacentre a combined unit with data protection storage, control, reporting, monitoring, search and analytics that can be stood up and ready to start protecting your workloads in just a few hours. As part of the support programme you don’t have to worry about upgrades – it’s done as an atomic function of the system. And there’s no need to worry about software licensing vs hardware capacity: it’s all handled as a single, atomic function, too. For sure, you can still build your own backup systems, and many people will – but for businesses who want to hit the ground running in a new office or datacentre, or maybe replace some legacy three-tier backup architecture that’s limping along and costing hundreds of thousands a year just in servicing media servers (AKA “data funnel$”), IDPA is an ideal fit.

At DellEMC World, VMware running in AWS was announced – imagine that, just seamlessly moving virtual machines from your on-premises environment out to the world’s biggest public cloud as a simple operation, and managing the two seamlessly. That became a reality later in the year, and NetWorker and Avamar were the first products to support actual hypervisor level backup of VMware virtual machines running in a public cloud.

Thinking about public cloud, Data Domain Virtual Edition (DDVE) became available in both the Azure and AWS marketplaces for easy deployment. Just spin up a machine and get started with your protection. That being said, if you’re wanting to deploy backup in public cloud, make sure you check out my two-part article on why Architecture Matters: Part 1, and Part 2.

And still thinking about cloud – this time specifically about cloud object storage, you’ll want to remember the difference between Cloud Boost and Cloud Tier. Both can deliver exceptional capabilities to your backup environment, but they have different use cases. That’s something I covered off in this article.

There were some great announcements at re:Invent, AWS’s yearly conference, as well. Cloud Snapshot Manager was released, providing enterprise grade control over AWS snapshot policies. (Check out what I had to say about CSM here.) Also released in 2017 was DellEMC’s Data Domain Cloud Disaster Recovery, something I need to blog about ASAP in 2018 – that’s where you can actually have your on-premises virtual machine backups replicated out into a public cloud and instantiate them as a DR copy with minimal resources running in the cloud (e.g., no in-Cloud DDVE required).

2017 also saw the release of Enterprise Copy Data Analytics – imagine having a single portal that tracks your Data Domain fleet world wide, and provides predictive analysis to you about system health, capacity trending and insights into how your business is going with data protection. That’s what eCDA is.

NetWorker 9.2 and 9.2.1 came out as well during 2017 – that saw functionality such as integration with Data Domain Retention Lock, database integrated virtual machine image level backups, enhancements to the REST API, and a raft of other updates. Tighter integration with vRealize Automation, support for VMware image level backup in AWS, optimised object storage functionality and improved directives – the list goes on and on.

I’d be remiss if I didn’t mention a little bit of politics before I wrap up. Australia got marriage equality – I, myself, am finally now blessed with the challenge of working out how to plan a wedding (my boyfriend and I are intending to marry on our 22nd anniversary in late 2018 – assuming we can agree on wedding rings, of course), and more broadly, politics again around the world managed to remind us of the truth to that saying by the French Philosopher, Albert Camus: “A man without ethics is a wild beast loosed upon this world.” (OK, I might be having a pointed glance at Donald Trump over in America when I say that, but it’s still a pertinent thing to keep in mind across the political and geographic spectrums.)

2017 wasn’t just about introducing converged data protection appliances and convergent data protection, but it was also a year where more businesses started to look at hyperconverged administration teams as well. That’s a topic that will only get bigger in 2018.

The DellEMC data protection family got a lot of updates across the board that I haven’t had time to cover this year – Avamar 7.5, Boost for Enterprise Applications 4.5, Enterprise Copy Data Management (eCDM) 2, and DDOS 6.1! Now that I sit back and think about it, my January could be very busy just catching up on things I haven’t had a chance to blog about this year.

I saw some great success stories with NetWorker in 2017, something I hope to cover in more detail into 2018 and beyond. You can see some examples of great success stories here.

I also started my next pet project – reviewing ethical considerations in technology. It’s certainly not going to be just about backup. You’ll see the start of the project over at Fools Rush In.

And that’s where I’m going to leave 2017. It’s been a big year and I hope, for all of you, a successful year. 2018, I believe, will be even bigger again.

Basics – Prior Recovery Details

 Basics, NetWorker, Recovery
Dec 19 2017
 

If you need to find out details about what has recently been recovered with a NetWorker server, there’s a few different ways to achieve it.

NMC, of course, offers recovery reports. These are particularly good if you’ve got admins (e.g., security/audit people) who only access NetWorker via NMC – and good as a baseline report for auditing teams. Remember that NMC will retain records for reporting for a user defined period of time, via the Reports | Reports | Data Retention setting:

Reports Data Retention Menu

Data Retention Setting

The actual report you’ll usually want to run for recovery details is the ‘Recover Details’ report:

NMC Recover Details Report

The other way you can go about it is to use the command nsrreccomp, which retrieves details about completed recoveries from the NetWorker jobs database. Now, the jobs database is typically pruned every 72 hours in a default install (you can change how long the jobs database is retained). Getting a list of completed recoveries (successful or otherwise) is as simple as running the command nsrreccomp -L:

[root@orilla tmp]# nsrreccomp -L
name, start time, job id, completion status
recover, Tue Dec 19 10:28:25 2017(1513639705), 838396, succeeded
DNS_Serv_Rec_20171219, Tue Dec 19 10:39:19 2017(1513640359), 838404, failed
DNS_Server_Rec_20171219_2, Tue Dec 19 10:41:48 2017(1513640508), 838410, failed
DNS_Server_Rec_20171219_3, Tue Dec 19 10:43:55 2017(1513640635), 838418, failed
DNS_Server_Rec_20171219, Tue Dec 19 10:47:43 2017(1513640863), 838424, succeeded

You can see in the above list that I made a few recovery attempts that generated failures (deliberately picking a standalone ESX server that didn’t have a vProxy to service it as a recovery target, then twice forgetting to change my recovery destination) so that we could see the list includes successful and failed jobs.
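Since the nsrreccomp -L output is a simple comma-separated listing, it’s also easy to script against – for example, to pull out just the failures. This is a hedged sketch assuming the four-column format shown above holds on your release:

```python
import re

# A trimmed sample of the nsrreccomp -L listing shown above.
SAMPLE = """\
recover, Tue Dec 19 10:28:25 2017(1513639705), 838396, succeeded
DNS_Serv_Rec_20171219, Tue Dec 19 10:39:19 2017(1513640359), 838404, failed
DNS_Server_Rec_20171219, Tue Dec 19 10:47:43 2017(1513640863), 838424, succeeded"""

def parse_reccomp(text):
    """Split each 'name, start time, job id, status' line, pulling the
    epoch timestamp out of the parenthesised part of the start time."""
    jobs = []
    for line in text.splitlines():
        name, start, job_id, status = (f.strip() for f in line.split(","))
        epoch = int(re.search(r"\((\d+)\)", start).group(1))
        jobs.append({"name": name, "epoch": epoch,
                     "job_id": int(job_id), "status": status})
    return jobs

failed = [j["name"] for j in parse_reccomp(SAMPLE) if j["status"] == "failed"]
print(failed)  # ['DNS_Serv_Rec_20171219']
```

In a real script you’d feed it the live output of nsrreccomp -L (e.g. via subprocess) rather than a canned sample.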

You’ll note the second-last field in each line of output is the Job ID associated with the recovery. You can use this with nsrreccomp to retrieve the full output of an individual job, viz.:

[root@orilla tmp]# nsrreccomp -R 838396
 Recovering 1 file from /tmp/ into /tmp/recovery
 Volumes needed (all on-line):
 Backup.01 at Backup_01
 Total estimated disk space needed for recover is 8 KB.
 Requesting 1 file(s), this may take a while...
 Recover start time: Tue 19 Dec 2017 10:28:25 AEDT
 Requesting 1 recover session(s) from server.
 129290:recover: Successfully established direct file retrieve session for save-set ID '1345653443' with adv_file volume 'Backup.01'.
 ./tostage.txt
 Received 1 file(s) from NSR server `orilla'
 Recover completion time: Tue 19 Dec 2017 10:28:25 AEDT

This option can be used for any recovery performed. For a virtual machine recovery it’ll be quite verbose – a snippet is as follows:

[root@orilla tmp]# nsrreccomp -R 838424
Initiating virtual machine 'krell_20171219' recovery on vCenter caprica.turbamentis.int using vProxy langara.turbamentis.int.
152791:nsrvproxy_recover: vProxy Log Begins ===============================================
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] Logging to '/opt/emc/vproxy/runtime/logs/vrecoverd/RecoverVM-5e9719e5-61bf-4e56-9b68-b4931e2af5b2.log' on host 'langara'.
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] Release: '2.1.0-17_1', Build number: '1', Build date: '2017-06-21T21:02:28Z'
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] Changing log level from INFO to TRACE.
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 INFO: [@(#) Build number: 67] Created RecoverVM session for "krell_20171219", logging to "/opt/emc/vproxy/runtime/logs/vrecoverd/RecoverVM-5e9719e5-61bf-4e56-9b68-b4931e2af5b2.log"...
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] Starting restore of VM "krell_20171219". Logging at level "TRACE" ...
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] RecoverVmSessions supplied by client:
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] {
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] "Config": {
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] "SessionId": "5e9719e5-61bf-4e56-9b68-b4931e2af5b2",
159373:nsrvproxy_recover: vProxy Log: 2017/12/19 10:47:19 NOTICE: [@(#) Build number: 67] "LogTag": "@(#) Build number: 67",

Now, if you’d like more than 72 hours of retention in your jobs database, you can set a longer period via the server properties (though don’t get too excessive – you’re better off capturing jobs database details periodically and saving them to an alternate location than trying to get the NetWorker server to retain a huge jobs database). This can be done using nsradmin or NMC – NMC shown below for reference:

NMC Server Properties

NMC Server Properties JobsDB Retention
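If you do go the route of capturing job details periodically, a simple scheduled capture works well. A hypothetical cron fragment – the log path is my choice, and you should confirm where nsrreccomp lives on your platform:

```
# crontab fragment: snapshot recovery completion records daily at 06:00,
# before the jobs database pruning window can discard them
0 6 * * * /usr/sbin/nsrreccomp -L >> /var/log/nsr-recoveries.log 2>&1
```

Rotate or date-stamp that log however suits your auditing requirements.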

There’s not much else to grabbing recovery results out of NetWorker – it’s certainly useful to know of both the NMC and command line options available to you. (Of course, if you want maximum flexibility on recovery reporting, you should definitely check out Data Protection Advisor (DPA), which is available automatically under any of the Data Protection Suite licensing models.)

 

NetWorker 2017 Usage Survey

 NetWorker, Site
Dec 01 2017
 
Survey Image

It seems like only a few weeks ago that 2017 was starting. But here we are again, and it’s time for another NetWorker usage survey. If you’re a recent blog subscriber, you may not have seen previous surveys, so here’s how it works:

Every year a survey is run on the NetWorker blog to capture data on how businesses are using NetWorker within their environment. As per previous years, the survey runs from December 1 to January 31. At the end of the survey, I analyse the data, crunch the numbers, sacrifice a tape to the data protection deities and generate a report about how NetWorker is being used in the community.

My goal isn’t just for the report to be a simple regurgitation of the data input by respondents. It’s good to understand the patterns that emerge, too. Is deduplication more heavily used in the Americas, or APJ? Who keeps data for the longest? Is there any correlation between the longevity of NetWorker use and the number of systems being protected? You can see last year’s survey results here.

To that end, it’s time to run the 2017 NetWorker survey. Once again, I’m also going to give away a copy of my latest book, Data Protection: Ensuring Data Availability. All you have to do in order to be in the running is to be sure to include your email address in the survey. Your email address will only be used to contact you if you win.

The survey should hopefully only take you 5-10 minutes.

The survey has now closed. Results will be published mid-late February 2018.

What’s new in NetWorker 9.2.1?

 Features, NetWorker
Nov 23 2017
 

NetWorker 9.2.1 has just been released, and the engineering and product management teams have delivered a great collection of features, in addition to the usual roll-up of prior cumulative patch releases.

Cloud Transfer

There are some great cloud features in 9.2.1, regardless of whether you’re working with a private cloud in your datacentre, consuming VMware in AWS, or using CloudBoost to get to object storage.

VMware’s move into the AWS cloud environment has opened up new avenues for customers seeking a seamless hybrid cloud model, and that wouldn’t be complete without being able to extend an on-premises data protection policy into the VMware environment running in AWS. So, NetWorker 9.2.1 includes the option to perform image based backups of VMware virtual machines in AWS. This is round one of support for VMware backups in AWS: as is always the case (and why, for traditional AWS virtual machine backups, you usually have to deploy an agent), what you can or can’t do depends on what level of access you have to the hypervisor. In this case, VMware has focused its first round of data protection enablement on image based backup and recovery. If you’ve got VMware systems in AWS that regularly require highly granular file level recovery, you may want to install an agent on those; but for ‘regular’ VMware virtual machines in AWS, where the focus is high speed image level backups and recoveries, NetWorker 9.2.1 has you covered.

NetWorker 9.2.1 also supports vRealize Data Protection Extension 4.0.2, so if you’re building a private cloud experience within your environment, NetWorker will be up to date on that front for you, as well.

Finishing up on cloud support, CloudBoost has also been updated in this release, with enhancements to efficiency and reporting that make this update a must if you’re using it to get data onto public object storage.

Regular VMware users aren’t left out either – there’s now a new option to recover virtual machine configuration metadata as well as virtual machine data, which can be a particularly useful option when you’re doing an in-place recovery of a virtual machine. Also, if you’ve got an ESXi server or servers within your environment that aren’t under vCenter control, you’re now covered by NetWorker as well – virtual machines on these systems can also be backed up.

9.2.1 also lets you use unserved NetWorker licenses. In prior 9.x releases, it was necessary to use a license server – either one for your entire company, or potentially more if you had different security zones. Now you can associate issued licenses directly with NetWorker servers if you need to, rather than serving them from the license server – a very handy option for dark or secured sites.

Directives. With 9.2.1 (and rolling back to the 9.2 clients, too), you can now use wildcards in directive directory paths. This is going to be really useful for database servers. For instance, it has been quite common to see directives such as:

<< /d/u01/database >>
+skip: *.dbf

<< /d/u02/database >>
+skip: *.dbf

<< /d/u03/database >>
+skip: *.dbf

Under earlier versions of NetWorker, if a new mount point, /d/u04, were added, you had to remember to update the directives to exclude (in this example) database files on that filesystem from your backup. Now you can instead have a single catch-all directive of:

<< /d/*/database >>
+skip: *.dbf

Or, if you wanted to be more specific, and only apply it to mount points that start with a ‘u’ followed by exactly two characters:

<< /d/u??/database >>
+skip: *.dbf

Check out the wildcard support in the administrator’s guide for 9.2.1 (around p339).
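If you want a quick feel for how those patterns behave, here’s a little illustration using Python’s fnmatch module. This is just a stand-in for intuition, of course – it’s not NetWorker’s actual matching code, and the exact semantics NetWorker applies are documented in the administrator’s guide:

```python
from fnmatch import fnmatch

# Candidate mount points, matching the directive examples above.
paths = [
    "/d/u01/database",
    "/d/u02/database",
    "/d/u04/database",   # a newly added mount point
    "/d/logs/database",  # not a u-prefixed mount point
]

# A '*' pattern catches everything under /d with a database directory.
# (Note: unlike shell globbing, Python's fnmatch lets '*' cross '/' too.)
print([p for p in paths if fnmatch(p, "/d/*/database")])
# all four paths match

# 'u??' only matches 'u' followed by exactly two characters.
print([p for p in paths if fnmatch(p, "/d/u??/database")])
# only the three uNN mount points match
```

Either way, the new /d/u04 mount point is picked up automatically – no directive edits required.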

From an availability perspective, you can now do a NetWorker disaster recovery straight from a Data Domain Cloud Tier device if you so need to. You also now get the option of using RecoverPoint for Virtual Machines (RP4VM) to replicate your NetWorker server between sites for seamless failover capabilities. If you’re using RP4VM at the moment, this really boosts your capability when it comes to protecting your backup environment.

There are also some command line enhancements, such as being able to force an ad-hoc backup for a workflow, even if the workflow’s policy is set to do a skip on the particular day you’re running it.

Security isn’t missed, either. In situations where you have to back up via a storage node (it happens, even to the best of us – particularly in secure zones), you’ve now got the option to enable in-flight encryption of data between the client and the storage node. This allows a totally encrypted datapath of, say:

Client <–Encrypted–> Storage Node <–DDBoost Encrypted–> Data Domain

or

Client <–Encrypted–> Storage Node <–Tape FC Encryption–> Tape Drive

Obviously, the preference is always to go direct from the client to protection storage, but if you do have to go via a storage node, you’re covered now.

Logging gets enhanced, too. There’s now automatic numbering of lines in the log files to make it easier to trace messages for support and self-support.

Finally, NAS isn’t left out, either. There’s NetWorker Snapshot Management support now for XtremIO X2 and NetApp ONTAP 9.2. (On the NDMP front – and I know snapshots are different – check out the insane levels of performance we can now get from dense filesystems on Isilon.)

So, I’d planned to have a pretty quiet weekend of sitting under a freezing air-conditioner, out of the heat, and doing as little as possible. It seems instead I’ll be downloading NetWorker 9.2.1 and getting it up and running in my lab. (Lucky there’s room under the air-conditioner for my laptop, too.)

NetWorker Success Stories

NetWorker
Nov 06 2017

Last month I ran a birthday giveaway competition – tell me a NetWorker success story, and go in the running for a signed copy of my book, Data Protection: Ensuring Data Availability. Since then, it’s been a bit quiet on the NetWorker Hub, and I apologise for that: my time has been considerably occupied with either work or much needed downtime of late. Sometimes it really does seem that every month gets busier for me than the last. (And by “sometimes”, I sort of mean “every month”).

Knight in shining armour

One of the original symbols for NetWorker was a knight in shining armour – very much reflective of its purpose to protect the most valuable asset in your castle: your data. So it seems fitting that as I share some of the success stories I received, I use a knight as the image for the post. So let’s have at it.

Success Story #1:

With the book and blog it make me clear where lots of thing confusing on the Data Protection and helps me to present buying Data protection suite over TSM.

Hey, it may not specifically be a NetWorker success story, but I’m chuffed, regardless!

Success Story #2:

NetWorker gave me a career to be honest. I have come across multiple situations where a critical recovery or ad-hoc backup has saved someone’s job.

This is a story I can really identify with – NetWorker definitely gave me a career, too!

Success Story #3:

Had the experience recently where senior management was amazed with the fact that we managed to recover data up to last 24 hours with no loss otherwise for 7 file servers that were part of a BCP triggered to recover from bad weather in Houston. Came down to the team sharing with management on how the environment is backed up and how validation is done as a check and balance. Awesome experience when you realise that the investment on a good backup strategy and the governed implementation of the same does pay off during business continuity efforts.

Backup is good, but recovery is great. Being able to pull backup systems in to help provide business continuity is a great example of planning.

Success Story #4:

Saved my customers a lot of times when files has been deleted.

Again, I can agree with this one. That’s why NetWorker has been so important to me over the years – it’s helped so many customers in so many challenging situations.

Success Story #5:

Working with NetWorker since 7.6, I would say NetWorker and I are growing up together. I’m getting a better engineer year by year and NetWorker did the same. Today I’m doing things (like cluster backups and VM backups) I couldn’t imagine years ago.

My first NetWorker server really was called mars (you’ll get what I mean if you read enough NetWorker man pages), and we’ve both grown a lot since my earlier career as a Unix system administrator. My first server was v4.1, and I had v3 clients back then on a variety of systems. (I think the last time I used a v3 client was in 2000 to backup Banyan Vines systems.) File type devices, advanced file type devices, storage nodes, cluster support, Windows support, Linux support … the list goes on for things I’ve seen added to NetWorker over the years!

Success Story #6:

It does what it says on the tin.

Backs up and recovers servers.

What more can you ask for?

Succinct and true.

Success Story #7:

BMR recovery during a virus attack in environment really helped to tackle and restore multiple servers quickly.

(I hear great stories regularly about backups saving businesses during virus and ransomware attacks. Snapshots can help in those situations, of course, too, but the problem with snapshots is that a potent virus or ransomware attack can overwhelm your snapshot storage space, making a bad situation worse.)

Success Story #8:

When looking for a suitable replacement for IBM TSM 5.5/DataDomain VTL. We started to look Networker 8/DataDomain. We were blown away how it’s was so flexible and a powerfull integration with ESX.  We have better backup performance/restore and VM backup was so easy that management couldn’t believe I could backup 800 VM without deploying an agent on each server.

Here’s the thing: Data Domain will boost (no pun intended) a lot of average backup products, but you get the true power of that platform when you’re using a fully integrated product like NetWorker or Avamar.

Success Story #9:

We do BAU backup and restore with Networker and not much surprises there, but one capability/feature that saved us a lot of time/money was migrating from legacy DataDomain VTLs to NEW Datadomain Boost Target by just Cloning legacy VTLs.That gave us the opportunity to de-comm old system and still have access to legacy backups without requiring keeping the old devices and servers.

This is a great architectural story. Data Domain is by far the best VTL you can get on the market, but if you want to switch from VTL into true disk based backups, you can handle that easily with NetWorker. NetWorker makes moving backup data around supremely easy – and it’s great at ‘set and forget’ cloning or migration operations, too.

Success Story #10:

Restoring an entire environment of servers with Windows BMR special ISO.

I don’t see much call for BMR these days given the rise of virtualisation in the midrange market, but it’s still an option if you really need it.

Success Story #11:

I was able to take our backup tapes to a remote site in a different city and was able to recover the production servers, including the database servers, in less time than was planned for, thus proving that DR is possible using NetWorker.

NetWorker isn’t all about deduplication. It started at a time when deduplication didn’t exist, and it can still solve problems when you don’t have deduplication in your environment.

Success Story #12:

There are many however let me speak about latest. Guest level backups would put hell lot of load on underlying hypervisor on VM infrastructure. So we deployed NVP and moved all our file systems to it . The blazing speed and FLR helped us to achieve our SLA. Integration with NVP was seamless with 98% deduplication.

NVP really is an awesome success story. The centres of excellence have run high scale backups showing thousands of virtual machines backed up per hour. It really is game changing for most businesses. (Check at the end of the blog article for a fantastic real customer success story that one of the global data protection presales team shared recently.)

Success Story #13:

Have worked on multiple NMDA, NMSAP and DDBEA cases and have resolved them and the customer appreciates the DELL EMC support team.

Success stories come from customers and the people sitting on the other side of the fence, too. There are some amazingly dedicated people in the DellEMC NetWorker (and, more broadly, data protection) support teams – some of them I’ve known for over 15 years, in fact. These are people who take the call when you’re having a bad day, and they’re determined to make sure your day improves.

Success Story #14:

I believe to understand the difference between Networking and Networker was the biggest challenge as I was completely from the networking background.

There are a lot of success stories but I think to state or iterarte success in terms of networker is something which has been set by you and the bench mark for which is very high, so no success stories.

Hopefully I can replicate 5% of your success then probably I would be successful in terms of me.

I remember after I’d been using NetWorker for about 3 years, I managed to get into my first NetWorker training course. There was someone in the course who thought he was going into a generic networking course. And any enterprise backup product like NetWorker really will help you understand your business network a lot more, so this is a pretty accurate story, I think.

Success Story #15:

My success story is simple … every time I restore data for the company/users. Either it may be whole NetWorker server restore or Database (SAP,SQL,ORACLE etc) or file/folder or maybe a BMR.

Every “Thank You” Message I receive from end user gives me immense happiness when I restore data and I am privileged to help others by doing Data Protection. Highly satisfied with my work as its like a game for me. every time I  restore Something i treat it as win (Winning the Game).

Big or small, every recovery is important!

Success Story #16:

This story comes from Daniel Itzhak in the DPS Presales team. Dan recently shared a fantastic overview of a customer who’d made the switch to NVP backups with NetWorker. Dan didn’t share it for the competition, but it’s such a great example that I wanted to include it here anyway. Here are the numbers:

  • 1,124 Virtual Machines across multiple sites and vCenter clusters
  • 30 days of backups – Average 350 TB per day front end data being protected, 10.2PB logical data protected after 30 days.
  • Largest client in the environment – 302 TB. (That is one seriously big virtual machine!)
  • Overall deduplication ratio: 35x (to put that in perspective, 350TB per day at 35x deduplication ratio would mean on average 10TB stored per day)
  • More than 34,700 jobs processed in that time (VM environments tend to have lower job counts) … 99% of backups finish in under 2 hours every day.
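If you want to sanity-check those figures, the arithmetic is simple enough (this is just my own back-of-envelope illustration of the numbers above):

```python
# Rough sanity check on the quoted figures - illustrative arithmetic only.
front_end_tb_per_day = 350   # average front end data protected per day
dedup_ratio = 35             # overall deduplication ratio

# At 35x deduplication, 350 TB of front end data lands as ~10 TB stored.
stored_tb_per_day = front_end_tb_per_day / dedup_ratio
print(f"Stored per day: ~{stored_tb_per_day:.0f} TB")  # ~10 TB

# 30 days of protection at that average rate:
logical_pb_30_days = front_end_tb_per_day * 30 / 1000
print(f"Logical protected over 30 days: ~{logical_pb_30_days:.1f} PB")
# ~10.5 PB - in the same ballpark as the quoted 10.2 PB, since
# 350 TB/day is itself an average.
```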

That sounds impressive, right? Well, that’s not the only thing that’s impressive about it. Let’s think back to the NetWorker and Data Domain architecture … optimised data path, source based deduplication, minimum data hops, and storage nodes relegated to device access negotiation only. Competitive products would require big, expensive physical storage nodes/media servers to process that sort of data – I know, I’ve seen those environments. Instead, what did Dan’s customer need to run their environment? Let’s review:

  • 1 x RHEL v7.3 NetWorker Server, 9.2.0.3 – 4 vCPUs with 16GB of RAM
  • 3 x Storage Nodes (1 remote, 2 local), each with: 4 vCPU and 32GB of RAM
  • 2 x NVP – Which you might recall, requires 8 GB of RAM and 4 vCPU

You want to back up 1,000+ VMs in under 2 hours every night at 35x deduplication? Look no further than NetWorker and Data Domain.

I’ve contacted the winner – thanks to everyone who entered!

Basics – device `X’ is marked as suspect

Basics, Data Domain, NetWorker
Sep 28 2017

So I got myself into a bit of a kerfuffle today when I was doing some reboots in my home lab. When one of my DDVE systems came back up and I attempted to re-mount the volume hosted on that Data Domain in NetWorker, I got an odd error:

device `X’ is marked as suspect

Now, that’s odd, because NetWorker marks savesets as suspect, not volumes.

Trying it out on the command line still got me the same results:

[root@orilla ~]# nsrmm -mv -f adamantium.turbamentis.int_BoostClone
155485:nsrd: device `adamantium.turbamentis.int_BoostClone' is marked as suspect

Curiouser and curiouser, I thought. I did briefly try to mark the volume as not suspect, but this didn’t make a difference, of course – since suspect applies to savesets, not volumes:

[root@orilla ~]# nsrmm -o notsuspect BoostClone.002
6291:nsrmm: Volume is invalid with -o [not]suspect

I could see the volume was not marked as scan needed, and even explicitly re-marking the volume as not requiring a scan didn’t change anything.

Within NMC I’d been trying to mount the Boost volume under Devices > Devices. I viewed the properties of the relevant device and couldn’t see anything about it being suspect, so I thought I’d pop into Devices > Data Domain Devices and view the device details there. Nothing different there either – but when I attempted to mount the device from that view, it instead told me that the ‘ddboost’ user associated with the Data Domain didn’t have the rights required to access the device.

Insufficient Rights

That was my “aha!” moment. To test my theory I tried to login as the ddboost user onto the Data Domain:

[Thu Sep 28 10:15:15]
[• ~ •]
pmdg@rama 
$ ssh ddboost@adamantium
EMC Data Domain Virtual Edition
Password: 
You are required to change your password immediately (password aged)
Changing password for ddboost.
(current) UNIX password:

Eureka!


I knew I’d set up that particular Data Domain device in a hurry to do some testing, and I’d forgotten to disable password ageing. Sure enough, when I logged into the Data Domain Management Console, under Administration > Access > Local Users, the ‘ddboost’ account was showing as locked.

Solution: edit the account properties for the ‘ddboost’ user and give it a 9999 day ageing policy.

Huzzah! Now the volume would mount on the device.

There’s a lesson here – in fact, a couple:

  1. Being in a rush to do something and not doing it properly usually catches you later on.
  2. Don’t stop at your first error message – try operations in other ways: command line, different parts of the GUI, etc., just in case you get that extra clue you need.

Hope that helps!


Oh, don’t forget – it was my birthday recently and I’m giving away a copy of my book. To enter the competition, click here.

Birthday give-away competition

NetWorker
Sep 27 2017
iStock Balloons

Towards the end of September each year, I get to celebrate another solar peregrination, and this year I’m celebrating it with my blog readers, too.


Here’s how it works: I’ve been blogging about NetWorker on nsrd.info since late 2009. In that time I’ve chalked up almost 700 articles and significantly more than a million visitors. I’ve had feedback from people over the years saying how useful the blog has been to them – so, running from today until October 15, I’m asking readers to tell me one of their success stories using NetWorker.

I’ll be giving away a prize to a randomly selected entrant – a signed copy of my book, Data Protection: Ensuring Data Availability.

The competition is open to everyone, but here’s the catch: I do intend to share the submitted stories. I take privacy seriously: no contact details will be shared with anyone, and success stories will be anonymised, too. If you want to be in the running for the book, you’ll need to supply your email address so I can get in contact with the winner!

The competition has closed.


Oh, don’t forget I’ve got a new project running over at Fools Rush In, about Ethics in Technology.

Basics – Understanding NetWorker Dependency Tracking

Backup theory, NetWorker
Sep 16 2017

Dependency tracking is an absolutely essential feature within a backup product. It’s there to ensure you can recover data through the entire specified retention period for your backups, regardless of what mix of full, differential and/or incremental backups you do. It’s staggering to think there are some backup products out there (*cough* net *cough* ‘backup’), that treat backup retention with such contempt that they don’t bother to enforce dependency preservation.

Without dependency tracking, you’ve always got the risk that a recovery you want to do on the edge of your specified retention period might fail.

NetWorker does dependency tracking by default. In fact, it only does dependency tracking. To understand how dependency tracking works, and what that means for protecting your backups, check out my video below. (Make sure to switch it into High Definition – it’s not about being able to see more of my beard, but it is to make sure you can see all the screen content!)
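To make the idea concrete before you watch, here’s a toy sketch – my own illustration, emphatically not NetWorker’s actual implementation – of why dependency tracking matters: an incremental is only recoverable if every backup it depends on, all the way back to its full, is still retained.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Backup:
    name: str
    level: str                              # "full" or "incr"
    depends_on: Optional["Backup"] = None   # previous backup in the chain

def required_for_recovery(b: Backup) -> List[str]:
    """Walk the dependency chain back to the most recent full backup."""
    chain = []
    cur: Optional[Backup] = b
    while cur is not None:
        chain.append(cur.name)
        if cur.level == "full":
            break
        cur = cur.depends_on
    return list(reversed(chain))

# A simple full + incremental chain:
full = Backup("sun-full", "full")
mon = Backup("mon-incr", "incr", depends_on=full)
tue = Backup("tue-incr", "incr", depends_on=mon)

# To recover Tuesday's backup, everything back to Sunday's full must
# still exist - so a dependency-tracking product won't expire the full
# early, even if its nominal retention date has passed.
print(required_for_recovery(tue))  # ['sun-full', 'mon-incr', 'tue-incr']
```

A product without dependency tracking might expire “sun-full” on its retention date alone, silently breaking the recoverability of every incremental that depends on it.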


Dependency tracking is such an important feature in data protection that you’ll find it’s also covered in my book, Data Protection: Ensuring Data Availability.


On another note, I’m starting a new project. I may work in IT, but I’ve always been a fan of philosophy, too. The new project is called Fools Rush In, and it’s going to be an ongoing weekly exploration of topics relating to ethics in IT and modern technology. It’s going to be long-form in its approach – the perfect thing to sit down and read over a cup of coffee or tea. This’ll be an exciting journey, and I’d love it if you joined me on it. The introductory article is …where angels fear to tread, and the latest post, What is Ethics? gives a bit of a primer on schools of ethical thought and how we can start approaching ethics in IT/technology.
