Dec 30 2017

With just a few more days of 2017 left, I thought it opportune to make the last post of the year a summary of some of what we’ve seen in the field of data protection in 2017.

2017 Summary

It’s been a big year, in a lot of ways, particularly at DellEMC.

Towards the end of 2016, but definitely leading into 2017, NetWorker 9.1 was released. That meant 2017 started with a bang, courtesy of the new NetWorker Virtual Proxy (NVP, or vProxy) backup system. This replaced VBA, allowing substantial performance improvements and some architectural simplification as well. I was able to generate some great stats right out of the gate with NVP under NetWorker 9.1, and that applied not just to Windows virtual machines but to Linux ones, too. NetWorker 9.1 with NVP allows you to recover tens of thousands of files or more from an image level backup in just a few minutes.

In March I released the NetWorker 2016 usage survey report – the survey ran from December 1, 2016 to January 31, 2017. That reminds me – the 2017 Usage Survey is still running, so you’ve still got time to provide data to the report. I’ve been compiling these reports now for 7 years, so there are a lot of really useful trends building up. (The 2016 report itself was a little delayed in 2017; I normally aim for it to be available in February, and I’ll do my best to ensure the 2017 report is out in February 2018.)

Ransomware and data destruction made some big headlines in 2017 – repeatedly. Gitlab hit 2017 running with a massive data loss in January, which they subsequently blamed on a backup failure, when in actual fact it was a staggering process and people failure. It reminds one of the old manager #101 credo, “If you ASSuME, you make an ASS out of U and ME”. Gitlab’s issue may, at a very small level, have been a ‘backup failure’, but only in so much as everyone in the house thinking it was someone else’s turn to fill the tank of the car, and running out of petrol, is a ‘car failure’.

But it wasn’t just Gitlab. Next generation database users around the world – specifically, MongoDB – learnt the hard way that security isn’t properly, automatically enabled out of the box. Large numbers of MongoDB administrators around the world found their databases encrypted or lost as default security configurations were exploited on databases left accessible in the wild.

In fact, Ransomware became such a common headache in 2017 that it fell prey to IT’s biggest meme – the infographic. Do a quick Google search for “Ransomware Timeline” for instance, and you’ll find a plethora of options around infographics about Ransomware. (And who said Ransomware couldn’t get any worse?)

Appearing in February 2017 was Data Protection: Ensuring Data Availability. Yes, that’s right, I’m calling the release of my second book on data protection a big event in the realm of data storage protection in 2017. Why? This is a topic which is insanely critical to business success. If you don’t have a good data protection process and strategy within your business, you could literally lose everything that defines the operational existence of your business. There are three defining aspects I see in data protection considerations now:

  • Data is still growing
  • Product capability is still expanding to meet that growth
  • Too many businesses see data protection as a series of silos, unconnected – storage, virtualisation, databases, backup, cloud, etc. (Hint: They’re all connected.)

So on that basis, I do think a new book whose focus is to give a complete picture of the data storage protection landscape is important to anyone working in infrastructure.

And on the topic of stripping the silos away from data protection, 2017 well and truly saw DellEMC cement its lead in what I refer to as convergent data protection. That’s the notion of combining data protection techniques from across the continuum to provide new methods of ensuring SLAs are met, impact is eliminated, and data hops are minimised. ProtectPoint was first introduced to the world in 2015, and has evolved considerably since then. ProtectPoint allows primary storage arrays to integrate with data protection storage (e.g., VMAX3 to Data Domain) so that those really huge databases (think 10TB as a typical starting point) can have instantaneous, incremental-forever backups performed – all application integrated, but with no impact on the database server itself. ProtectPoint, though, was just the starting point. In 2017 we saw the release of Hypervisor Direct, which draws a line in the sand on what convergent data protection should be and do. Hypervisor Direct is there for your big, virtualised systems with big databases, eliminating any risk of VM-stun during a backup (an architectural constraint of VMware itself) by integrating RecoverPoint for Virtual Machines with Data Domain Boost, all while still being fully application integrated. (Mark my words – Hypervisor Direct is a game changer.)

Ironically, in a world where target-based deduplication should be a “last resort”, we saw tech journalists get irrationally excited about a company heavy on marketing but light on functionality promote their exclusively target-deduplication data protection technology as somehow novel or innovative. Apparently, combining target-based deduplication with the need to scale to potentially hundreds of 10Gbit Ethernet ports is both! (In the same way that releasing a 3-wheeled Toyota Corolla for use by the trucking industry would be both ‘novel’ and ‘innovative’.)

By comparison, between VMworld and DellEMC World there were some huge new releases by DellEMC this year. The Integrated Data Protection Appliance (IDPA) was announced at DellEMC World. IDPA is a hyperconverged backup environment – a combined unit delivered to your datacentre with data protection storage, control, reporting, monitoring, search and analytics, which can be stood up and ready to start protecting your workloads in just a few hours. As part of the support programme you don’t have to worry about upgrades – they’re done as an atomic function of the system. And there’s no need to worry about software licensing vs hardware capacity: it’s all handled as a single, atomic function, too. For sure, you can still build your own backup systems, and many people will – but for businesses who want to hit the ground running in a new office or datacentre, or maybe replace some legacy three-tier backup architecture that’s limping along and costing hundreds of thousands a year just in servicing media servers (AKA “data funnel$”), IDPA is an ideal fit.

At DellEMC World, VMware running in AWS was announced – imagine that: seamlessly moving virtual machines from your on-premises environment out to the world’s biggest public cloud as a simple operation, and managing the two as one. That became a reality later in the year, and NetWorker and Avamar were the first products to support actual hypervisor level backup of VMware virtual machines running in a public cloud.

Thinking about public cloud, Data Domain Virtual Edition (DDVE) became available in both the Azure and AWS marketplaces for easy deployment. Just spin up a machine and get started with your protection. That being said, if you’re wanting to deploy backup in public cloud, make sure you check out my two-part article on why Architecture Matters: Part 1, and Part 2.

And still thinking about cloud – this time specifically about cloud object storage, you’ll want to remember the difference between Cloud Boost and Cloud Tier. Both can deliver exceptional capabilities to your backup environment, but they have different use cases. That’s something I covered off in this article.

There were some great announcements at re:Invent, AWS’s yearly conference, as well. Cloud Snapshot Manager was released, providing enterprise grade control over AWS snapshot policies. (Check out what I had to say about CSM here.) Also released in 2017 was DellEMC’s Data Domain Cloud Disaster Recovery, something I need to blog about ASAP in 2018 – that’s where you can actually have your on-premises virtual machine backups replicated out into a public cloud and instantiate them as a DR copy with minimal resources running in the cloud (e.g., no in-Cloud DDVE required).

2017 also saw the release of Enterprise Copy Data Analytics – imagine having a single portal that tracks your Data Domain fleet world wide, and provides predictive analysis to you about system health, capacity trending and insights into how your business is going with data protection. That’s what eCDA is.

NetWorker 9.2 and 9.2.1 came out as well during 2017 – that saw functionality such as integration with Data Domain Retention Lock, database integrated virtual machine image level backups, enhancements to the REST API, and a raft of other updates. Tighter integration with vRealize Automation, support for VMware image level backup in AWS, optimised object storage functionality and improved directives – the list goes on and on.

I’d be remiss if I didn’t mention a little bit of politics before I wrap up. Australia got marriage equality – I, myself, am finally now blessed with the challenge of working out how to plan a wedding (my boyfriend and I are intending to marry on our 22nd anniversary in late 2018 – assuming we can agree on wedding rings, of course), and more broadly, politics again around the world managed to remind us of the truth of that saying by the French philosopher Albert Camus: “A man without ethics is a wild beast loosed upon this world.” (OK, I might be having a pointed glance at Donald Trump over in America when I say that, but it’s still a pertinent thing to keep in mind across the political and geographic spectrums.)

2017 wasn’t just about introducing converged data protection appliances and convergent data protection, but it was also a year where more businesses started to look at hyperconverged administration teams as well. That’s a topic that will only get bigger in 2018.

The DellEMC data protection family got a lot of updates across the board that I haven’t had time to cover this year – Avamar 7.5, Boost for Enterprise Applications 4.5, Enterprise Copy Data Management (eCDM) 2, and DDOS 6.1! Now that I sit back and think about it, my January could be very busy just catching up on things I haven’t had a chance to blog about this year.

I saw some great success stories with NetWorker in 2017, something I hope to cover in more detail into 2018 and beyond. You can see some examples of great success stories here.

I also started my next pet project – reviewing ethical considerations in technology. It’s certainly not going to be just about backup. You’ll see the start of the project over at Fools Rush In.

And that’s where I’m going to leave 2017. It’s been a big year and I hope, for all of you, a successful year. 2018, I believe, will be even bigger again.

NetWorker 2017 Usage Survey

Dec 01 2017

It seems like only a few weeks ago, 2017 was starting. But here we are again, and it’s time for another NetWorker usage survey. If you’re a recent blog subscriber, you may not have seen previous surveys, so here’s how it works:

Every year a survey is run on the NetWorker blog to capture data on how businesses are using NetWorker within their environment. As per previous years, the survey runs from December 1 to January 31. At the end of the survey, I analyse the data, crunch the numbers, sacrifice a tape to the data protection deities and generate a report about how NetWorker is being used in the community.

My goal isn’t just for the report to be a simple regurgitation of the data input by respondents. It’s good to understand the patterns that emerge, too. Is deduplication more heavily used in the Americas, or APJ? Who keeps data for the longest? Is there any correlation between the longevity of NetWorker use and the number of systems being protected? You can see last year’s survey results here.


To that end, it’s time to run the 2017 NetWorker survey. Once again, I’m also going to give away a copy of my latest book, Data Protection: Ensuring Data Availability. All you have to do in order to be in the running is to be sure to include your email address in the survey. Your email address will only be used to contact you if you win.

The survey should hopefully only take you 5-10 minutes.

The survey has now closed. Results will be published mid-late February 2018.

Dec 01 2015

It’s that time of the year again, where I ask my readers and the NetWorker community to spend a few minutes of their time answering some questions on how NetWorker is used within their environment. Your forbearance is greatly appreciated.

Each year I pose this survey and run it from 1 December through to 31 January the following year – a straight 2 months. This year there are 23 questions – a slight expansion on previous years, as I want to start gathering a bit of data on a couple of other topics – but I still believe this is something that will take no more than 5-10 minutes of your time. In return, you get to contribute to the NetWorker Usage Report that I generate in February each year. And trust me, now that I work at EMC, I’m well aware of how useful product managers find these reports, so this is well and truly your opportunity to provide real feedback as to how you use the product.

Edit, December 2: Product management reached out and asked if I could ask about CloudBoost, so I’ve added a 24th question to the survey.

While the survey is anonymous, it does offer a prompt for you to provide your email address, and for this survey there’ll be a prize: a signed copy of my upcoming book on Data Protection. This is a much broader tome than my previous book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. At the moment the book is still under development, but it will be published next year, and if you want to be in the running for the book, please add your email address to the survey results. Your email address will be kept confidential. (I’ll notify the winner individually and then once the book is published, contact the winner for shipping details.)

Edit: The survey is now closed. Thanks for participating! Results coming in February 2016.


Nov 16 2015

Regular visitors will have noticed that the site has been down quite a lot over the last week.

I’m pleased to say it wasn’t a data loss situation, but it was one of those pointed reminders that just because something is in “the cloud” doesn’t mean it’s continuously available.


In the interests of transparency, here’s what happened:

  • The domain, it turned out, was due for renewal in December 2014.
  • I didn’t get the renewal notification. Ordinarily you’d blame the registrar for that, but I’m inclined to believe the issue sits with Apple Mail. (More of that anon.)
  • My registrar gave me a complimentary one year renewal without charging me, so the domain got extended until December 2015.
  • I did get a renewal notification this year and I’d even scheduled payment, but in the meantime, because the domain was approaching 12 months past its original renewal date, whois queries started showing it as having a pendingDelete status.
  • My hosting service monitors whois, and once the pendingDelete status was flagged it stopped hosting the site. Nothing was deleted; it just wasn’t served.
  • I went through the process of redeeming the domain on 10 November, but it’s taken this long to get processing done and everything back online.
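
That whois-watching behaviour is straightforward to sketch. The following is a minimal illustration, not the hosting provider’s actual tooling – the status keywords, file paths and sample whois output are all assumptions for the example, and it checks a saved whois snapshot rather than performing a live query:

```shell
#!/bin/sh
# Flag any domain status other than a healthy ok/active state
# (e.g. pendingDelete, redemptionPeriod) in saved whois output.
check_status() {
  # $1: file containing raw whois output.
  # Prints the offending status lines and returns 1 if any are unhealthy.
  if grep -i 'status:' "$1" | grep -Eiv 'ok|active'; then
    return 1
  fi
  return 0
}

# Demonstrate against a saved snapshot rather than a live whois query:
cat > /tmp/whois_sample.txt <<'EOF'
Domain Name: EXAMPLE.COM
Domain Status: pendingDelete
EOF

check_status /tmp/whois_sample.txt || echo "ALERT: domain needs attention"
```

Run periodically (e.g. from cron against fresh `whois` output), something like this is enough to catch a domain sliding towards deletion well before the registry acts on it.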

So here’s what this reinforced for me:

  1. It’s a valuable reminder of uptime vs availability, something I’ve always preached: It’s easy in IT to get obsessed about uptime, but the real challenge is achieving availability. The website being hosted was still up the entire time if I went to the private URL for it, but that didn’t mean anything when it came to availability.
  2. You might be able to put your services in public cloud-like scenarios, but if you can’t point your consumers to your service, you don’t have a service.
  3. In an age where we all demand cloud-like agility, if it’s something out of the ordinary, domain registrars seemingly move like they’re wading through treacle and communicating via Morse code. (It took almost 4 business days, three phone calls and numerous emails to effectively process one domain redemption.)
  4. Don’t rely on Apple’s iCloud/MobileMe/.Mac mail for anything that you need to receive.

I want to dwell on the final point for a bit longer: I use Apple products quite a bit because they suit my work-flows. I’m not into (to use the Australian vernacular) pissing competitions about Apple vs Microsoft or Apple vs Android, or anything vs Apple. I use the products and the tools that work best for my work-flow, and that usually ends up being Apple products. I have an iPad (Pro, now), an Apple Watch, an iMac, a MacBook Pro and even my work laptop is (for the moment) a MacBook Air.

But I’m done – I’m really done with Apple Mail. I’ve used it for years, and I’ve noticed odd scenarios over the years where email I’ve been waiting for hasn’t arrived. You see, Apple do public spam filtering (that’s where you see email hitting your Junk folder), and they do silent spam filtering. That’s where (for whatever reason) some Apple filter will decide that the email you’ve been sent is very likely to be spam, and it gets deleted. It doesn’t get thrown into your Junk folder for you to notice later; it gets erased. Based on the fact that I keep all of my auto-filed email for a decade, and the fact that I can’t find my renewal notification from last year, that leaves me pointing the finger for the start of this mess at Apple. Especially when, while trying to sort it out, I had half a dozen emails sent from my registrar’s console to my account, only to have them never arrive. It appears Apple thinks my registrar is (mostly) spam.

My registrar may be slow to process domain redemptions, but they’re not (mostly) spam.

A year or so ago I started the process of migrating my email to my own controlled domain. I didn’t want to rely on Google because their notion of privacy and my notion of privacy are radically different, and I was trying to reduce my reliance on Apple because of their silent erasure habit, but the events of the last week have certainly guaranteed I’ll be completing that process.

And, since ultimately it’s still my fault for having not noticed the issue in the first place (regardless of what notifications I got), I’ve got a dozen or more calendar reminders in place before the next time the domain needs to be renewed.

Aug 26 2012

It’s been a while since I’ve posted anything, and that’s not really what I intended. I had hoped to do a rolling series of articles about NetWorker 8, but somewhere along the line my workload took a huge spike, and my personal life got busier, too.

So those articles I intended to pen weeks ago? Well, weeks have gone by and I’ve barely managed to put together a paragraph on either synthetic fulls or multi-tenancy in NetWorker 8.

I can’t promise those articles are going to appear this week – in fact, I can practically guarantee they won’t. I may actually do one or two other articles first in the coming 2 weeks, including a very brief survey I’d like to run, before I get back to dissecting NetWorker 8.

Again, apologies – hopefully I’ll get back to my regularly scheduled programming soon.

Mar 15 2012

I’ve been pretty quiet of late on the site, and it’s not through a lack of interest. Unfortunately there are several major challenges that I’m currently dealing with that are diverting most of my attention from all kinds of writing – not just this blog, but my personal blog too.

Most of the articles I’m thinking of for the blog at the moment require more extensive testing and research – or are multi-part ones that require a lot more spare time to be allocated to them. However, I’m not getting a lot of that spare time at the moment – the place we’re renting is up for sale and as a result of that it seems every second day I’m dealing with another open-for-inspection, etc. It’s also fairly draining, so between that and work projects, I’m just not getting a lot of energy to devote to the site.

Things should start to ease in a couple of weeks; we should know by that stage whether an investor or an occupier has bought the property, and even if that means a move, it’ll at least be a period of action rather than a draining holding pattern.

So, bear with me: there is new content being worked on; it’s just that my window of opportunity to work on it is a little small at the moment.


Dec 21 2011

It’s that time of the year where I sit back for a moment and look at what articles have attracted the most readers over the year, and it’s a fairly eclectic bunch. Interestingly, for the first time since forever, the article about fixing NSR Peer Information issues didn’t come first – we have some new winners.

10 – New Micromanual – LinuxVTL and NetWorker

The second micromanual was a step-by-step guide for configuring the open source LinuxVTL system with NetWorker. I had hoped when I started writing micromanuals that I’d get them more frequently delivered, but various factors get in the way of this. Maybe in 2012 I’ll be able to get a couple more out and available.

9 – Killing scheduled cloning operations

When NetWorker’s scheduled clone option was introduced, there were a few bugs relating to stopping a scheduled clone operation from the GUI. Sometimes you could, and sometimes you couldn’t. However, you could always kill a scheduled clone job from the command line, which is what this post explained.

8 – NetWorker Firewall Configuration on Windows

Very early in the year I was doing a lot of work with NetWorker on Windows 2008 R2, and I was noticing a few gaps in the installation process when it came to the process of automated configuration of the Windows Firewall to work with NetWorker daemons. This post explained the lessons I learnt.

7 – Carry a jukebox with you (if you’re using Linux)

This article was my first post about configuring the open source LinuxVTL system with NetWorker. Since then LinuxVTL has evolved quite a lot, and I’ll likely even need to update that micromanual early in the new year as a consequence.

6 – Why I’d choose NetWorker over NetBackup Every Time

Despite the fact that the article was titled “Why I’d choose…”, I had a rather indignant response to this post insisting I was being a jerk by writing it. I stand by every word in that post. I would not, personally, elect to choose NetBackup over NetWorker on the basis that NetBackup only has true image recovery as an option, and that NetBackup doesn’t support dependency chains for backup images. I see both of these factors as critical to a true enterprise backup product, and NetBackup only half supports one of them. That doesn’t make me a jerk, it makes me someone who gives a damn about your data.

5 – Using NetWorker Client with Opensolaris

A guest article written by Ronny Egner, this post covered off getting the NetWorker client working with the OpenSolaris version of Solaris.

4 – Basics – Fixing “NSR peer information” errors

A persistent challenge in NetWorker is when the NSR peer information gets out of whack; usually this can happen when a significant change happens on a client, and the server must have this information reset. I’d still love to see this article become irrelevant by seeing an option appear in NMC to handle it, but until then, this will remain a fairly popular article.

3 – This is wrong

Earlier this year, an Australian hosting service lost thousands of hosted domains and websites due to a “hack attack”. Supposedly the clever hackers destroyed not only the production data, but also all the backups.

What really went wrong was that the company in question had designed a very poor and inadequate backup solution. Rumours abounded at the time that backups were simply replicated snapshots. Snapshots may be able to act as backups, but not indefinitely, and not if they’re the only thing configured. (Backups and snapshots are effectively ‘sister’ activities in ILP.)

2 – micromanual: NetWorker Power User Guide to nsradmin

The original micromanual – “NetWorker power user guide to nsradmin” was and remains extremely popular. There’s been thousands of downloads of it since its release, including quite a number from EMC themselves, so it’s clearly a handy resource. If you’ve not downloaded it yourself but you want to boost your NetWorker productivity, it’s a must read.

1 – NetWorker 7.6 SP1

When NetWorker 7.6 SP1 came out, it was a huge release. In my opinion, it should have been numbered NetWorker 7.7 at least; it wasn’t a minor set of changes or a round of bug fixes, it included significant functionality updates (including one of my favourites – support for Boost). As the number one read article of the year, it’s been a big resource for people looking at the functionality of newer releases of NetWorker.

And that, they say, is that

This year has personally been a huge year for me. My partner and I moved state/city in June, going from a regional area just outside of Sydney to the inner west of Melbourne. We also celebrated our 15th anniversary together, surrounded by many of our new friends (who are like family to us) and a few of our old friends. We were even invited to get on the radio to talk about that – not only because of the longevity of the relationship, but also because we’d run the anniversary party up against the monthly Melbourne Den night. (There’s a podcast coming…) It was also the year when I sorted a lot of stuff out, and to boil all this down: it was the year that I spent a lot of time focusing on my personal life and not so much on the blog.

There may still be one or two posts left for 2011, but I’m also starting to get my head around changes and new material for 2012, and I believe 2012 will be a big year for NetWorker users.

Time to move

Sep 08 2011

After an 8+ hour outage on my domain this week, with no explanation from my hosting provider, I’ve finally had enough!

So, over the coming weeks, as I get things set up in the background, there may be brief outages on this blog as I transfer across to a new, better and more stable hosting provider.

Apologies, in advance, for any disruption.



Nov 17 2010

In March this year, I ran a NetWorker Usage Survey to gauge the lay of the land in terms of how and on what NetWorker is deployed within the user community. As a result of that, hundreds of people downloaded the March 2010 NetWorker Usage Survey Report.

It’s time to revisit that survey; I’ve adjusted the questions a little based on some of the responses from the previous survey results, and I’m keen for as many answers as possible to make this a worthwhile report. (Don’t forget, the report will be free to download.)

I’ll be aiming to keep the survey running until the end of November, and publish the results early in December. So please, fill out the survey below and have your say!


The survey has now closed. The report will be posted soon.
