Apr 30, 2015

I’m pleased to say that on Monday I’ll be starting a new role. While I’ve worked closely with EMC for many a year as a partner, and more recently in a subcontracting position, come Monday that’ll all be changing…

…I’m joining EMC. I’ll be working in the Data Protection Solutions group as a sales engineer.

I’ve got to say (and not just because people from EMC will no doubt see this!) that I’m really looking forward to this role. It will allow me, more than ever before, to look holistically at the entire data protection spectrum. While I’ve always had an eye on the bigger picture of data protection, enterprise backup has always been the driving activity I’ve focused on. More than that, EMC is one of only a very small handful of vendors I’ve ever wanted to work for (and one of the others was Legato, so you might say I’m achieving two life goals with just one job) – so I’m going to be revved up from the start.

I’ll be continuing this blog, and with broader exposure to the entire data protection suite I’ll be working with at EMC, you can expect to see more coverage of those integration points, too.

It’ll be a blast!

Preston de Guise

Jan 16, 2014

I’m rather pleased to say I’ve been included for the second year running in the EMC Elect programme. Last year was the first time the programme was run, and while at times my schedule prevented full participation, I’ve got to say it was an excellent community to be part of.

It’s fair to say I stay fairly focused on backup and data protection in general – it’s a niche area within a niche area, which sometimes creates interesting headaches, but one thing I can be sure of is that EMC remains committed to getting as much information out into their various product communities as possible. It’s almost invariably the case that if you look, you’ll find it.

Elect gives me the opportunity to see more than just backup and recovery. Last year, for instance, I was lucky enough to get to see the VNX MCx Speed2Lead launch live, in Milan. The trip itself was as fast as the products at the launch … from Australia to Italy and back again in under a week meant a lot of time on planes. A lot. But it was worth it to see live how invested EMC are in their products. Yes, there was criticism of the event, but I stand by my response to that criticism: the storage industry as a whole is too often seen as the “boring” part of the entire IT industry, and it’s refreshing to see a company encouraging their employees, their users and their partners to be proud of what they’re doing.

I’m looking forward to seeing what EMC Elect 2014 brings, and hope to engage a lot more than I found time for last year – the rewards of being connected to a community of experts are obvious!

***

To see a comprehensive list of the EMC Elect 2014 members (well, certainly those on Twitter), check out this EMC Corporate list.

Addendum – the full official list is over at the EMC Community Network.

Happy anniversary, EMC and Legato

Jul 1, 2013


In July 2003, 10 years ago, EMC announced they were buying Legato for around US$1.3 billion. The acquisition was completed in October 2003, but almost immediately after it was announced, long-term partners of both Legato and EMC could see Legato starting to align itself as an EMC company.

It took years to stop referring to NetWorker as “Legato NetWorker” … in fact, there are many people around who still call it that. I’m too fussy about accuracy – and I’ve been working for systems integrators all that time, where accuracy is critical in sales – to have kept using the Legato moniker. To be honest, it’s a bit of a shame it’s officially gone, but that’s just an old timer reminiscing.

After ten years, how have EMC handled the stewardship of NetWorker? Oh, to be sure, there were other Legato products – the Xtender products in particular – but they’d not been developed internally at Legato, just acquired. They weren’t the heart of the company – that was NetWorker.

EMC obviously liked what NetWorker could do (why wouldn’t they? Even ten years ago it was one of the best, and most underrated enterprise backup products around) … so much so that they quickly killed off their own backup product, EDM, to replace it with NetWorker.

It’s fair to say though that for a few years at least, the NetWorker/EMC mashup seemed to create a few headaches … at least as far as people external to the  company could see. When EMC acquired Legato, Legato were preparing to release NetWorker 7.2. So over the course of ten years, we’ve seen NetWorker go from 7.2 to 7.3, 7.3 to 7.4, 7.4 to 7.5, 7.5 to 7.6, and finally 7.6 to 8.0. That’s a lot of 7s.

The funny thing is though – much as over the years those incremental jumps annoyed me at times, NetWorker 8 represents such a radical overhaul and suite of new features that in hindsight I don’t honestly think any of those previous versions really deserved a new full version number. 7.6 came closest.

That wasn’t to say they were standing still.

OK, for a while they might have been treading water, but…

…NetWorker was a product that was purchased almost ahead of its time. I’ve always said that the best backup products are frameworks, where you can almost infinitely stack permutations on around them, building and enhancing and changing to suit the needs of each environment they’re deployed in. There’s no doubt NetWorker is a framework.

Yet as a software-only stack, NetWorker had been reaching the point where it wasn’t immediately certain where it should go next. That did linger a little while at the start of the EMC acquisition, but eventually the backup message filtered through. EMC now has an entire BRS division … it was created with the acquisition of Data Domain; the need for it was created with the acquisition of Avamar, but the initial thrust for it was created with the acquisition of Legato.

Data Domain is undoubtedly an excellent tool in a backup suite, but in and of itself, it’s like using tar and mtx to achieve enterprise backups. Avamar tells a compelling story, but source-based deduplication can’t tell the entire story in a world of burgeoning multimedia resources and massive databases.

EMC needed to acquire Legato because NetWorker is the tool that glues the other bits together. NetWorker gives all the automation required to allow a Data Domain to seriously transform the backup processes for organisations. Equally, couple NetWorker with Avamar and you can solve a bunch of other challenges within an environment. By itself, NetWorker offers the management framework, the operational framework, and the solid dependability of a long-term enterprise backup product. Throw in some deduplication technology on top of that and you’ve got yourself a damn good backup environment for your organisation.

While the germination of the NetWorker success story within EMC has taken a while, we’re now well and truly seeing the fruits of that endeavour.

Happy anniversary, EMC and Legato.

iTWire – Aussie storage growth above average: Gartner

Mar 17, 2011

There’s a report over at iTWire that has two highly pertinent details. (iTWire – Aussie storage growth above average: Gartner.)

The article is about how Australian spending on storage is growing faster than the rest of the world (IMHO that’s just further proof of how helpful the government stimulus package was), and has two particular points of interest.

First:

The big winner was EMC, which saw its revenue from the region grow from $US533.9 million to $US716.0 million. Most other vendors also saw improved revenues…

That doesn’t surprise me. As an employee of an EMC partner, I know EMC have been pushing very strongly in the Australian market over the last 12 months. I fully believe that other vendors have been pushing hard and (for the most part) achieving good results, but EMC has had a really solid story during this spending cycle, and it’s been paying off – time and time again.

What really didn’t surprise me though was the “but” following that above quote:

…but the biggest loser was Oracle. In 2009, Sun had $US134.4 million revenue. Now part of Oracle, it only recorded $US82.1 million revenue in 2010

Since the Oracle acquisition of Sun, every single one of my customers who had previously been a large Sun customer has either been resolutely turning away from the vendor, or eyeing them with firm displeasure. Why? Oracle’s higher prices for maintenance and product have had a significant impact on the budgetary options available to one of Sun’s biggest previous customer bases – the educational market. (This, for what it’s worth, is why I penned the article last year, “RIP Solaris“.)

While I’m not normally one to put much stock in analyst reports, this one seems to gel with what I’ve been seeing for the past 12 months.

Dec 23, 2010

The holiday season is upon many of us – whether you celebrate xmas or christmas, or just the new year according to the Julian calendar, we’re approaching that point where things start to ease off for a lot of people and we spend more time with our families and friends.

Before I wrap up for the year, I wanted to spend a few minutes reintroducing some of the most popular topics of the year on the blog – the top ten articles based on directly linked accesses. Going in reverse order, they are:

  • Number 10 – “Why I’d choose NetWorker over NetBackup every time“. I was basically called an idiot by someone in the storage community for writing this, but the fact remains for me that any backup product that fails to support backup dependencies is not one that I would personally choose. Given that a top search that leads people to the blog is of the kind, “netbackup vs networker” or “networker vs netbackup”, clearly people are out there comparing the two products, and I stand by my support of the primacy of backup dependency tracking.
  • Number 9 – “A tale of 4 vendors“. A couple of months ago I attended SNIA’s first Australian storage blogger event, touring EMC, IBM, HDS and NetApp. Initially I’d planned to blog a fairly literal dump of the information I jotted down during the event, but I realised instead I was more drawn to the total solution stories being told by the 4 vendors.
  • Number 8 – “NetWorker 7.5.2 – What’s it got?“. NetWorker 7.5 represented a big upgrade mark for a lot of sites, particularly those that wanted to jump the v7.3 and v7.4 release trees. I still get a lot of searches coming to the blog based on NetWorker 7.5 features and upgrades.
  • Number 7 – “Using NetWorker Client with Opensolaris“. This was written by guest blogger Ronny Egner, and has seen more interest over the last few months as Oracle’s acquisition continues to grind down paid Sun customers. If you’re interested in writing guest blog pieces for the NetWorker Blog in 2011, let me know!
  • Number 6 – “Basics – Fixing ‘NSR peer information’ errors”. I’ve said it before, and I’ll say it again: there is no valid reason why the resolution for this hasn’t been built into NMC! (There’s a quick sketch of the manual fix just after this list.)
  • Number 5 – “NetWorker and linuxvtl, Redux“. The open source LinuxVTL project continues to grow and develop. While it’s not suited for production environments, LinuxVTL is certainly a handy VTL to plug into a NetWorker/Linux system for testing purposes. I know – I use it almost every single day.
  • Number 4 and Number 3 – “NetWorker 7.6 SP1“. Interest in NetWorker 7.6 SP1 has been huge, and I had two blog postings about it – a preview posting based on publicly shared information from EMC, and the actual post-release article that covered some key features more in-depth.
  • Number 2 – “Carry a Jukebox with you (if you’re using Linux)“. The first article I wrote about the LinuxVTL project.
  • Number 1 – “micromanual: NetWorker Power User Guide to nsradmin“. The Power User guide to nsradmin has been downloaded well over a thousand times. I’ve been a fan of nsradmin ever since I started using NetWorker and had to administer a few NetWorker servers over extremely slow links (think dial-up speeds). It’s been very gratifying to be able to introduce so many people to such a useful and powerful tool.
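
Speaking of “NSR peer information” errors and nsradmin in the same list: the manual fix for those errors is a neat demonstration of why nsradmin is worth knowing. A minimal sketch only – the server name below is hypothetical, so substitute the host named in the error, and double-check which resource you’re deleting before you delete it:

# On the client reporting the error, connect to the nsrexecd database:
nsradmin -p nsrexec

# Show the stored peer certificate resources:
nsradmin> print type: NSR peer information

# Delete the stale entry for the backup server (name is an example):
nsradmin> delete type: NSR peer information; name: orilla

The peer information is recreated automatically the next time the two hosts authenticate.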

Personally this year has been a pretty big one for me. Probably the biggest single event was that my partner and I made the decision to move from central coast NSW to Melbourne, Victoria during the year. We haven’t moved yet; it’s due for June 2011, but it’s going to necessitate a lot of action and work on our part to get there. It’ll be well worth the effort though, and I’ve already reached that odd point where I no longer think of the place I’m living as “home”. The reasons that led us to that decision are covered on my personal blog here. Continuing the personal front, I was extremely pleased to be able to say goodbye to the mobile “netwont” that is Vodafone in Australia. I’ve been using my personal blog to talk about a lot of varied topics running from internet censorship to invasive information requests to more mundane things, such as what makes a good consultant.

Technically I think the coming few years are going to be fascinating. Deduplication has only just started to make a splash; I think it’ll be a while before it becomes as pervasive as, say, plain old disk backup, but it will have a continued and growing effect in the enterprise backup market. I predict that another bevy of dopey analysts will insist that tape is dead, just like they have every year for the last 2 decades, and at the end of the year I predict the majority of companies they interface with will still be using tape in some form or another. However, the use of tape will continue to evolve in the marketplace; as nearline disk storage becomes more common and cheaper for backup solutions, we’ll see tape continue to be pushed out to longer term retention systems and safety nets – i.e., tape is certainly sliding away from being the primary source for recoveries in an enterprise backup environment.

One last thing – I want to thank the readers of this blog. To those people who subscribe to the mailing list, and those who subscribe to the RSS feed, to those who have the site bookmarked and to those who just randomly stumble across the site – I hope in each case you’re finding something useful, and I’m grateful for your readership.

Happy holidays to those of you celebrating or relaxing over the coming weeks, and peaceful times to those working through.

Nov 10, 2010

This is my blog entry about the SNIA blogfest on 4 November that I attended, but it’s not what I originally intended to be my blog entry. I was intending just to do a slightly more structured dump of all the details I took down during the meetings with the four vendors, but I don’t think this would be fair to my readers.

At the same time, I’m not a storage expert like the other bloggers who were at the event. In particular, Rodney Haywood and Justin Warren have been filling their blogs with detailed coverage of what the vendors discussed. Ben Di Qual tweeted a huge amount during the session as well, and towards the end of the day he tweeted that maybe it’s time for him to launch a blog too. I’m presuming Graeme Elliott will post something as well (maybe here?)

Perhaps the delays I’ve experienced in finding time to blog about the event have been useful, because it’s made me realise I want to summarise my experience in a different way.

For that, I need an illustration. Having met with the four vendors, and heard their “how we do things” story, I can now visualise each vendor as a tower. And to me, here’s what they look like:

[Illustrations: IBM Tower, EMC Tower, HDS Tower, NetApp Tower]

I don’t want to talk about the speeds and feeds of any of the products of any of the vendors. To me, that’s the boring side of storage – though I still recognise its importance. I don’t really care how many IOPS a storage system can do, for instance, so long as it can do enough for the purposes of the business that’s deploying it. Though I will say that a common message from vendors was that while companies buy storage initially for capacity, they’ll end up replacing it for performance. (This certainly mirrors what I’ve seen over the years.)

When I got my book published, the first lesson was that it wasn’t, so to speak, a technical book, but an IT management book. Having learnt that lesson, it subsequently reminded me that my passion in IT is about management, processes and holistic strategies. This is partly why backup is my forté, I think; backup is something that touches on everything within an environment, and needs a total business strategy as opposed to an IT policy.

So the message I got out of the vendors was business strategy, not storage options. This is an equally important message though – in fact, it’s arguably a far more in-depth message than a single storage message.

I’m hoping the diagrams help to explain the business strategy message I got from the four vendors.

EMC and IBM to me presented the most comprehensive business strategies of the four vendors. That assessment is a mix of what they presented on the day and my prior understanding of their overall product ranges. If I were to rate the actual on-the-day presentations, IBM were certainly the best at communicating their “total business strategy”. IBM also admitted they didn’t quite know what information to present, so they went for a big picture overview of as much as possible. EMC – whether because, as a more active social media company, they were treating the event as a literal “storage” event, or simply through a lack of foresight or preparation time – concentrated just on storage. But I still know much of the EMC story. To be fair to IBM, too, they’ve been doing this a lot, lot longer than any of the other storage companies – so they’re pretty good communicators.

Neither IBM nor EMC provide a total strategy; they both have gaps, of course – I wouldn’t claim that any vendor out there today provides a total strategy. IBM and EMC though are closer to that full picture than either of the other two vendors we met with on that day.

But what about HDS and NetApp?

I don’t believe HDS have a total business strategy story to tell. They want to; they try to, but the message comes out all jumbled and patchwork. They contribute disks and some storage systems themselves, try to contribute a unifying management layer, and then add an assortment of diverse and not necessarily compatible products into the mix. To be fair: they’re trying to provide a total business strategy, but if I were to put an “IT Manager” hat on, I’m not convinced I’d be happy with their story – as an IT manager I might very well choose to buy one or two products from them, but their total strategy is so piecemeal that there’s no technology reason to try to source everything from them.

For what it’s worth, their “universal management” interface message is as patchy as their product set. It started with (paraphrasing only!) “we provide a unifying management system” and boiled down to “our goal is to provide a unifying management system, when we can”.

To me, that’s not a unified management system.

I should note at this point that I’m working on the blog entry now on battery power, as all of Bunbury appears to have lost power tonight. (Let me speak another time of the appalling lack of redundancy built into Australian power distribution systems.)

On to NetApp. NetApp try to present a total business strategy story. They speak of unified storage where you don’t need to buy product A for function X, product B for function Y and product C for function Z – instead, you just buy über product A that will handle X, Y and Z.

But what about functions P, Q and R? Coming at the process from an “I’m not into storage” perspective, the first thing that jars me with NetApp is the old “backup is dead [edit: John Martin has rightly corrected me – he said ‘backup is evil’]” story. NetApp focus incredibly hard on trying to point out that you never need to back up again so long as you have the right snapshot configuration.

Like IBM’s “incrementals forever are really cool” story, NetApp’s “with snapshot you never need to backup again” strikes me as being almost at 90 degree angles to reality.

(Aside: Of course I’m interested in hearing counter arguments from IBM, and I’ve actually requested trial access to their TSM FastBack product, since I am interested in what it may be able to do – but being told that it doesn’t matter how many incremental backups you’ve done because (paraphrasing!) “a relational database ensures that the recovery is really fast by tracking the backups” isn’t something that convinces me.)

NetApp’s “you never need to backup again” story is at times, from my conservative approach to data protection, a bit like The Wizard of Oz: “Pay no attention to that man behind the curtain!” Yet having a company repeatedly insist that X is true because X is true doesn’t make me for one moment believe that X is true: and so, I remain unconvinced that “snapshots forever = backups never” is a valid business strategy except in the most extreme of cases.

Or to put it another way: HDS trotted out an example about achieving only a 4% saving by doing dedupe against 350 virtual machines. I’d call this trying to use an extreme example as a generic one: I’m sure they did encounter a situation where they only got a 4% storage saving by applying dedupe against 350 virtual machines; but I’m equally sure that one example can’t be used as proof that you can’t dedupe against virtual machines.

Likewise, I call shenanigans on NetApp’s bid to declare traditional backup dead [edit: evil]. Sure, the biggest of the big companies with the biggest of budgets might be able to do a mix of snapshots and replication and … etc., but very, very few companies can completely eliminate traditional backup. They may of course reduce their need for it in many situations. As soon as you’ve got snapshot capable storage, particularly in the NAS market, you can let users recover data from snaps and then focus backups on longer term protection etc. But that’s not eliminating traditional backup altogether.

[Edit: Following on from correction; I’d like to see NetApp’s “how backup fits into our picture” strategy in better detail, and based on John’s comments, I’m sure he’ll assist!]

So in that sense, we have four vendors who each try, in their own way, to provide a total business strategy. IBM and EMC are the ones that get closest to that strategy, and both NetApp and HDS in their own way were unable to convince me they’re able to do that.

That doesn’t mean to say they should be ignored, of course – but they’re clearly the underdogs in terms of complete offerings. They both clearly have a more complete story to tell than, say, one of the niche storage vendors (e.g., Xiotech or Compellent), but their stories are nowhere near as complete as those of the vendors who aim for the total vertical.

It was a fascinating day, and I’d like to thank all those involved: Paul Talbut and Simon Sharwood from SNIA; Clive Gold and Mark Oakey from EMC; John Martin from NetApp; Adrian De Luca from HDS; and Craig McKenna, Anna Wells and Joe Cho from IBM. (There were others from IBM, but I’m sorry to say I’ve forgotten their names). Of course, big thanks must also go to my fellow storage bloggers/tweeters – Rodney Haywood, Justin Warren, Ben Di Qual and Graeme Elliott. Without any of those people, the day could not have been as useful or interesting as it was.

And there you have it.

Now for the disclosures:

  1. EMC bought us coffee.
  2. EMC gave us folios with a pad and paper. I took one. The folio will end up in my cupboard with a bunch of other vendor provided folios from over the years.
  3. IBM gave us “leftover” neoprene laptop bags from a conference, that had a small pen and pad in it. My boyfriend claimed the neoprene bag.
  4. IBM bought us coffee.
  5. IBM provided lunch.
  6. HDS provided afternoon tea.
  7. HDS provided drip-filter coffee. I did not however in any way let the drip filter coffee colour my experience on-site. (Remember, I’m a coffee snob.)
  8. NetApp provided beer and wine. I had another engagement to get to that night, and did not partake in any.

EMC, Sun and Oracle

Apr 2, 2010

There’s been much speculation as to whether Sun under Oracle would retain the EMC NetWorker OEM arrangement.

Finally there are some details on Sun’s website under the banner “Sun and EMC”. In it, they state:

Sun will continue to OEM the EMC NetWorker software for backup and recovery which enables Sun to continue offering the EMC software as Sun StorageTek Enterprise Backup Software.

So, business as usual?

Unfortunately it looks like “yes”. Don’t get me wrong, I cut my teeth on Solstice Backup, as it was called then – in fact I used Solstice Backup for 4 years before I even installed NetWorker as a non-OEM product.

Here’s the rub: Sun have been woeful at (a) supporting “NetWorker” in the rebadged form, and (b) providing patches in a timely manner. Again and again, I get complaints from Sun OEM customers that Sun takes ages to update their releases in sync with EMC’s releases. I also hear frequent tales of OEM NetWorker support cases with Sun that take forever. Both of these factors well and truly gel with my experience as a Sun customer in the late 90s, and I’m still hearing the same stories in the here and now.

Disclaimer: I sell and support EMC NetWorker native. That should have been obvious but I don’t want to be accused of hiding this.

I didn’t think Sun’s statement went far enough. I don’t want to hear that they’re going to continue to OEM NetWorker; I want to hear that they’re going to OEM NetWorker and pick up their game. Release cycles should be tied much more closely to EMC’s, support needs to be considerably improved, and patches need to reach OEM NetWorker customers much sooner after EMC actually release them.

If they’re not, then when you factor in the changes that Oracle are making to Solaris OS licensing, I expect the reasons for people to remain with the OEM version of NetWorker will shrink considerably.

Mar 5, 2010

Over at a website called ignore the code, there’s a fascinating and insightful piece at the moment about removing features.

This is often a controversial topic in software design and development, and Lukas Mathis handles the topic in his typically excellent style. In particular, the summation of the problem through illustrations of two “Swiss Army Knives” demonstrates the issue quite well.

So what does this have to do with NetWorker, you might ask? Well, quite a bit. In light of the recent release of NetWorker 7.5 SP2 I thought it relevant to spend a little time ruminating about the software development process, relating it to NetWorker, and asking EMC product management some questions about their processes.

Within any software development model, there are four requirements:

  1. Adding new features.
  2. Refining existing features.
  3. Removing obsolete features.
  4. Fixing bugs.

It’s a challenging problem – any one or two of these requirements can be readily accommodated without much fuss. The challenge that faces all vendors though is balancing all four software development processes. Personally, I don’t envy the juggling process that faces product managers and product support managers on a daily basis. Why? All four requirements combined create clashing priorities and schedules that make for a very challenging environment. (It’s not unique to NetWorker of course – it applies pretty equally to just about every software product.)

In most situations, it’s easiest to add new features. This can be a double-edged sword. On the positive side, it can be a key factor in enticing potential customers to become actual customers, and it can equally be a key factor in enticing existing customers to remain customers rather than moving to the competition. On the negative side, it can lead to software bloating – a primary criticism of companies like Microsoft and Adobe. (Thankfully, I don’t think you can accuse NetWorker of being too ‘bloated’; in the 14 or so years I’ve been using it, the install footprint has of course gone up, but there’s not really been any “why the hell did they do that?” new features, and overall the footprint is well within the bounds for backup and recovery software.)

Like any good backup product, NetWorker’s development history is full of new features being added to it, such as the following:

  1. Storage nodes added in v5.x.
  2. Dynamic drive sharing added in v6.
  3. Advanced File Type Devices (ADV_FILE) added in v7.
  4. Jobs database introduced in v7.3.
  5. Virtualisation visualisation in v7.5.
  6. and so on.

Without new features being regularly added, companies leave themselves open to having the competition overtake them, and so periodically when we see a vendor respond to market forces (or try to push the market in a new direction), we should, even if we aren’t particularly fond of the new feature, accept that adding new features is inevitable in software development.

Equally, NetWorker history is rife with examples of existing features being refined, such as the following:

  1. Support for dedicated storage nodes.
  2. Enhancing the index system in v6 to overcome previous design limitations.
  3. Enhancing the resource configuration database in v7 to overcome previous design limitations.
  4. Frequent enhancement of all the database and application backup modules.
  5. Pool based retention.
  6. and so on.

You could say that feature refinement is all about evolutionary growth of the product. It’s never specifically about introducing entire new features – these are existing features that have grown between releases – usually in response to changing requirements in customer environments. (For instance, the previous resource configuration database worked well so long as you had smallish environments. Over time as environments became more complex, with more clients, and increased configuration requirements, it could no longer cut the mustard, triggering the redesign.)

The more challenging aspect for enterprise backup software is the notion of removing features – if doing so affects legacy recoverability options, it could cause issues for long-term users of the products, and so we usually see usability features removed rather than core support features. A few of the features that have been removed over time are:

  1. Support for the old GUIs (networkr.exe from Windows, nwadmin from Unix).
  2. Support for browsing indices via NFS mounts. (This was even before my time with NetWorker. It looks like it would have been fun to play with, but it wasn’t exactly cross-platform compatible!)
  3. Support for cross platform recoveries.
  4. Support for defunct tape formats (e.g., VHS).

I’d argue that it’s rarely the case that decisions to remove functionality are taken lightly. Usually it will be for one of three reasons:

  • The feature was ‘fragile’ and fixing it would take too much effort.
  • The feature is no longer required after a change in direction for the product.
  • The feature is no longer being used by a sufficient number of users and its continued presence would hamper new directions/features for the product.

None of these, I’d argue, are easy decisions.

Finally we have the bugs – or “unanticipated features”, as we sometimes like to call them. Any vendor that tells you their software is 100% bug free is either lying, or their ‘product’ is no more complex than /bin/true. Bugs are practically unavoidable, so the focus must be on solid testing, identification and containment. I’ll be the first to admit that there have been spotty patches in the past where testing in NetWorker has seemed to be lacking, but having been on the last couple of betas, I’m seeing a roaring return to rigorous testing in 7.5 and 7.6. Did these pick up all bugs? No – again, see my point about no software ever being 100% bug free.

I’ll hand on my heart say that I can’t cite a single company that has had a spotless record when it comes to bug control – this isn’t easy. Enterprise class backup software introduces new levels of complexity into the equation, and it’s worthwhile considering why. You can take exactly the same piece of enterprise backup software and install it into 50 different companies and I’ll bet that you’ll get a significant number of “unique” situations in addition to the core/standard user experience. Backup software touches on practically every part of an IT environment, and so is affected by a myriad of environment and configuration issues that normal software rarely has to contend with. Or to put it better: while another piece of software may have to contend with one or two isolated areas of environment/configuration uniqueness, backup software will usually have to contend with all of them, and remain as stable as possible throughout.

This isn’t easy. I may periodically get exasperated over bugs, etc., but I recognise the inevitability that I’ll be continuing to deal with bugs in any software I’m using for the rest of my life – so it’s hardly a NetWorker-specific issue. (I’m going on the basis here that quantum computing won’t suddenly deliver universal Turing machines capable of simulating every possible situation and input for software and hardware.)

While I was writing this article, I thought it would be worthwhile to get some feedback from EMC NetWorker product management on this, and I’m pleased to include my questions to them, as well as their answers, below. These answers come from product management and engineering, and I’m presenting them unedited in their complete form.

Question 1

I’ve been told that EMC has taken considerable steps to speed up the RFE process. Can you briefly summarise the improvements that have been made and the buy-in from product management and engineering on this?

Answer:

With the large size of the NetWorker installed base, we receive many RFEs per month. These requests range in nature from architectural changes to relatively small operational enhancements. We have made great strides in organizing the RFE pool in such a manner so that at the front end of the release planning process we can look back over hundreds of discrete requests and digest those requests into an achievable number of specific and prioritized product requirements.

RFEs come in to the product team through three sources. We take RFEs on PowerLink (EMC’s information portal), through the Support organization, and in face to face meetings with customers and partners. NetWorker Product Management has a central database so that we can consolidate the RFE pool and apply a standard process for scrubbing and categorizing the requests. This is a time consuming process, but it provides us with the capabilities to track the areas of the product that are receiving the most requests. That allows us to establish goals for a particular release and include RFEs accordingly. An example might be improved back up to disk workflows. The ability to quickly drill down to the requests most relevant to our high-level priorities allows us to efficiently write requirements that directly incorporate end-user feedback.

More customer requests for enhancement will be implemented in 2010 than ever before.  We will address some of the big changes that customers have been calling for, and will also look to implement some bonus enhancements; small changes that won’t make the marketing slides but will make NetWorker operations easier on backup administrators who interact with the product on a daily basis.

Question 2

One challenge with any software vendor is integrating patches (or hot fixes) into stable development trees. How would EMC rate itself with this in relation to NetWorker?

Answer:

We maintain a high level of discipline in maintaining our active code branches.  Hot fixes typically flow into our bug-fix service packs, (such as 7.5 SP1) which then flow back into the main code branch. Any code change made to an active branch must also be applied to the development branch, which builds on a regular basis. Build failures in development are taken very seriously by Engineering, and we engage resources to actively troubleshoot and resolve these issues.

Question 3

Currently we’re seeing cumulative patch cluster releases for most of the supported versions of NetWorker. E.g., NetWorker 7.5 SP1 is now up to cumulative patch cluster 8. These patch clusters currently remain available only via EMC support or partner support programs, and aren’t readily downloadable via standard PowerLink sources. With the projects currently being worked on to improve PowerLink, will we see this change, or is the rationale to not readily provide these cumulative patches a support one?

Answer:

When we post to PowerLink, we want to be sure that anyone who downloads code from EMC knows exactly what they’re getting. If we posted all of the clusters within today’s PowerLink framework, the result would be a confusing PowerLink experience for customers.  We consider the patch cluster process to be an improvement on earlier practices and look forward to continued improvements in this area.

Question 4

What feature are you most pleased to have seen integrated into either NetWorker 7.5 or 7.6?

Answer:

We are very pleased with the NetWorker Management Console work that has been done over the course of 7.5 and 7.6. Visualization of virtual environments (introduced in 7.5) has been very well received by customers, and we believe that the improvements in 7.6 around customization and performance will also be greatly appreciated as customers move to 7.6+ releases.

Question 5

One RFE process advocated is to have product management vet RFEs and submit them to a public forum to be voted on by community users. Advocates of this model say that it allows better community involvement and has products evolve to meet existing user requirements. Those who disagree with this model usually suggest that existing user feature suggestions don’t always accommodate design changes that would help boost market share. Is this a model which EMC has considered, or is it seeking to informally do this via the various EMC Community Forums that have been established?

Answer:

A closed loop is ideally what our enterprise customers who submit RFEs look for i.e. to enter an RFE, track it, see if it is relevant and will be seriously considered.  Capturing and allowing other users to vote is an option we are actively exploring. We would have to put some infrastructure in place to do so, but it is under investigation. The first audience for such an option would be our recently launched EMC community for NetWorker. The NetWorker user community is quite sophisticated, and we value their input tremendously. While it is true that some users take a narrow view of how NetWorker should evolve, others take a broader and more market-centric view. Our RFEs run the full spectrum.

Nov 30, 2009

With their recent acquisition of Data Domain, some people at EMC have become table thumping experts overnight on why it’s absolutely imperative that you back up to Data Domain boxes as disk backup over NAS, rather than via a fibre-channel connected VTL.

Their argument seems to come from the numbers – the wrong numbers.

The numbers constantly quoted are number of sales of disk backup Data Domain vs VTL Data Domain. That is, some EMC and Data Domain reps will confidently assert that by the numbers, a significantly higher percentage of Data Domain for Disk Backup has been sold than Data Domain with VTL. That’s like saying that Windows is superior to Mac OS X because it sells more. Or to perhaps pick a little less controversial topic, it’s like saying that DDS is better than LTO because there’s been more DDS drives and tapes sold than there’s ever been LTO drives and tapes.

I.e., an argument by those numbers doesn’t wash. It rarely has, it rarely will, and nor should it. (Otherwise we’d all be afraid of sailing too far from shore because that’s how it had always been done before…)

Let’s look at the reality of how disk backup currently stacks up in NetWorker. And let’s preface this by saying that if backup products actually started using disk backup properly tomorrow, I would be the first to shout “Don’t let the door hit your butt on the way out” to every VTL on the planet. As a concept, I wish VTLs didn’t have to exist, but in the practical real world, I recognise their need and their current ascendancy over ADV_FILE. I have, almost literally at times, been dragged kicking and screaming to that conclusion.

Disk Backup, using ADV_FILE type devices in NetWorker:

  • Can’t move a saveset from a full disk backup unit to a non-full one; you have to clear the space first.
  • Can’t simultaneously clone from, stage from, backup to and recover from a disk backup unit. No, you can’t do that with tape either, but when disk backup units are typically in the order of several terabytes, and virtual tapes are in the order of maybe 50-200 GB, that’s a heck of a lot less contention time for any one backup.
  • Use tape/tape drive selection algorithms for deciding which disk backup unit gets used in which order, resulting in worst case capacity usage scenarios in almost all instances.
  • Can’t accept a saveset bigger than the disk backup unit. (It’s like, “Hello, AMANDA, I borrowed some ideas from you!”)
  • Can’t be part-replicated between sites. If you’ve got two VTLs and you really need to do back-end replication, you can replicate individual pieces of media between sites – again, significantly smaller than entire disk backup units. When you define disk backup units in NetWorker, that’s the “smallest” media you get.
  • Are traditionally space wasteful. NetWorker’s limited staging routines encourage clumping disk backup space by destination pool – e.g., “here are my daily disk backup units, I use them 30 days out of 31, and those over there that occupy (practically) the same amount of space are my monthly disk backup units, I use them 1 day out of 31. The rest of the time they sit idle.”
  • Have poor staging options (I’ll do another post this week on one way to improve on this; there’s a rough sketch of the manual approach below).
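
To illustrate that last point, here’s roughly what manually staging a saveset off a disk backup unit looks like today. This is a minimal sketch only – the volume name, pool name and saveset/clone IDs are all hypothetical:

# Find the saveset and clone IDs sitting on a given disk backup volume:
mminfo -q "volume=dbu.001" -r "ssid,cloneid,name,savetime"

# Stage a specific saveset to the nominated pool; -m migrates, i.e. the
# disk copy is removed once the new copy is successfully written:
nsrstage -b Staging -m -S 1234567890/1298765432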

If you get a table thumping sales person trying to tell you that you should buy Data Domain for Disk Backup for NetWorker, I’d suggest thumping the table back – you want the VTL option instead, and you want EMC to fix ADV_FILE.

Honestly EMC, I’ll lead the charge once ADV_FILE is fixed. I’ll champion it until I’m blue in the face, then suck from an oxygen tank and keep going – like I used to, before the inadequacies got too much. Until then though, I’ll keep skewering that argument of superiority by sales numbers.

When is a cumulative patch cluster not a cumulative patch cluster?

Jun 19, 2009

In the past EMC have not so much “issued” cumulative patch clusters as let them trickle out on an as-needs basis.

With the 7.5.1 cumulative patch cluster, this appears to be following the same general scenario – there’s certainly nothing in PowerLink’s download section (as of this morning) that indicates anything different.

However, this morning I finally got around to installing the cumulative patch cluster for my primary lab machine, and noticed something very odd. You see, when I’d been given the details for downloading the cumulative patch cluster (as part of a support case), I’d set the download running and kept working on other things, so this is the first time I’ve actually gone to look at the files.

When I decompressed the Linux 64-bit Intel package though, I thought maybe I’d uncompressed the wrong thing – it was a bunch of RPMs. If you’ve got any familiarity with NetWorker cumulative patch clusters, you know they’re usually done as a bunch of standalone binaries. Indeed, the couple of pages of notes I got over the patch cluster indicated just this.

However, the story is very different. The cumulative patch clusters I downloaded as part of my support case for 7.5.1 are actually completely new replacement distributions for 7.5.1.

Here are the file sizes – something I should have looked at earlier, but didn’t think to:

[root@nox 7.5.1.2-Cumulative]# du -hs *
235M    nw75sp1_aix.tar.gz
148M    nw75sp1_hpux11_64.tar.gz
97M     nw75sp1_hpux11_ia64.tar.gz
63M     nw75sp1_linux_ia64.tar.gz
15M     nw75sp1_linux_ppc64.tar.gz
180M    nw75sp1_linux_x86_64.tar.gz
186M    nw75sp1_linux_x86.tar.gz
228M    nw75sp1_solaris_64.tar.gz
62M     nw75sp1_solaris_amd64.tar.gz
24M     nw75sp1_solaris_x86.tar.gz
79M     nw75sp1_tru64.tar.gz
27M     nw75sp1_win_ia64.zip
160M    nw75sp1_win_x64.zip
155M    nw75sp1_win_x86.zip

As you can see, those sizes alone are indicative of distributions. [edit – 2009-06-26 had said “…of patches” by mistake.]

Looking at, say, the version information for the nsrd binary in the original 7.5.1 compared with the cumulative patch cluster, we get, for the original:

@(#) Release:      7.5.1.Build.269
@(#) Build date:   Fri Mar 20 23:05:02 PDT 2009
@(#) Build info:   DBG=0,OPT=-O2 -fno-strict-aliasing
@(#) Product:      NetWorker
@(#) Build number: 269
@(#) Build arch.:  linux86w

Then for the one installed this morning in the cumulative patch cluster:

@(#) Build date:   Sat May 30 23:05:04 PDT 2009
@(#) Build info:   DBG=0,OPT=-O2 -fno-strict-aliasing
@(#) Product:      NetWorker
@(#) Release:      7.5.1.2
@(#) Build number: 323
@(#) Build arch.:  linux86w

They are two very different – and very obviously different – builds. (So it’s not the case that I’ve say, been accidentally given the distributions as cumulative patch downloads.)
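
(If you want to check this on your own install: those @(#) markers are embedded “what” strings, and can be pulled out of any NetWorker binary. A minimal sketch, assuming a default Linux install path – adjust to suit your platform:)

# Extract the embedded version/build strings from nsrd:
strings /usr/sbin/nsrd | grep '@(#)'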

To me, sorry EMC, this is not a good way of updating. Patches are either done as patches, in which case they’re issued by support and they’re standalone binaries/zips of binaries, or they’re done as new installs, in which case they are published and updated on PowerLink as well.

This pseudo “six of one, half a dozen of the other” approach is just going to end in tears. For goodness’ sake, if you go to the trouble of generating the patches as entirely new installs, do the following:

  • Update PowerLink’s download section (currently showing “March 30”, not “May 30”).
  • Notify users of the update.

Note – my complaint here is not that the patches have been issued as new releases of the software. My complaint is that it’s been done in such a way that it’s just going to create confusion by not making the new release readily available under PowerLink.
