Dec 22, 2015
 

As we approach the end of 2015 I wanted to spend a bit of time reflecting on some of the data protection enhancements we’ve seen over the year. There’s certainly been a lot!

Protection

NetWorker 9

NetWorker 9 of course was a big part of the changes in the data protection landscape in 2015, but it's by no means the only advancement we saw. I covered some of the advances in NetWorker 9 in my initial post about it (NetWorker 9: The Future of Backup), but to summarise just a few of the key new features, we saw:

  • A policy based engine that unites backup, cloning, snapshot management and virtual machine protection into a single, easy to understand configuration. Data protection activities in NetWorker can be fully aligned to service catalogue requirements, and the easier configuration engine actually extends the power of NetWorker by offering more complex configuration options.
  • Block based backups for Linux filesystems – speeding up backups for highly dense filesystems considerably (see the sketch after this list).
  • Block based backups for Exchange, SQL Server, Hyper-V, and so on – NMM for NetWorker 9 is a block based backup engine. There’s a whole swathe of enhancements in NMM version 9, but the 3-4x backup performance improvement has to be a big win for organisations struggling against existing backup windows.
  • Enhanced snapshot management – I was speaking to a customer only a few days ago about NSM (NetWorker Snapshot Management), and his enthusiasm for NSM was palpable. Wrapping NAS snapshots into an effective and coordinated data protection policy with the backup software orchestrating the whole process from snapshot creation, rollover to backup media and expiration just makes sense as the conventional data storage protection and backup/recovery activities continue to converge.
  • ProtectPoint Integration – I’ll get to ProtectPoint a little further below, but being able to manage ProtectPoint processes in the same way NSM manages file-based snapshots will be a big win as well for those customers who need ProtectPoint.
  • And more! – VBA enhancements (notably the native HTML5 interface and a CLI for Linux), NetWorker Virtual Edition (NVE), dynamic parallel savestreams, NMDA enhancements, restricted datazones and scalability all got a boost in NetWorker 9.
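
To see why block based backups make such a difference on dense filesystems, here's a minimal sketch of the two approaches (my own Python illustration, nothing to do with the actual NetWorker implementation): the traditional walk pays a per-file penalty millions of times over, while a block based pass reads the underlying image in large sequential chunks.

```python
import os

CHUNK = 4 * 1024 * 1024  # 4MiB reads keep the device streaming sequentially

def file_walk_backup(root, write):
    """Traditional backup: per-file stat/open overhead dominates on dense trees."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            with open(os.path.join(dirpath, name), "rb") as f:
                for chunk in iter(lambda: f.read(CHUNK), b""):
                    write(chunk)

def block_based_backup(device, write):
    """Block based backup: one sequential pass over the image, no per-file cost."""
    with open(device, "rb") as dev:
        for chunk in iter(lambda: dev.read(CHUNK), b""):
            write(chunk)
```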

It’s difficult to summarise everything that came in NetWorker 9 in so few words, so if you’ve not read it yet, be sure to check out my essay-length ‘summary’ of it referenced above.

ProtectPoint

In the world of mission critical databases, where minimising impact on the application host is a must yet backup performance is equally a must, ProtectPoint is an absolute game changer. To quote Alyanna Ilyadis, when it comes to those really important databases within a business,

“Ideally, you’d want the performance of a snapshot, with the functionality of a backup.”

Think about the real bottleneck in a mission critical database backup: even in the best case, the data gets transferred via fibre-channel from the storage layer to the application/database layer before being passed across to the data protection storage. Even if you direct-attach data protection storage to the application server, or mount a snapshot of the database at another location, you still have the fundamental requirement to:

  • Read from production storage into a server
  • Write from that server out to protection storage

ProtectPoint cuts the middle-man out of the equation. By integrating storage level snapshots with application layer control, the process effectively becomes (sketched in code after the list):

  • Place database into hot backup mode
  • Trigger snapshot
  • Pull database out of hot backup mode
  • Storage system sends backup data directly to Data Domain – no server involved
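
In pseudo-Python, the flow looks roughly like this – every name below is hypothetical, sketching the sequence rather than any real ProtectPoint API:

```python
def protectpoint_style_backup(db, array, lun, data_domain):
    """Hypothetical sketch: the application host coordinates, but moves no data."""
    db.begin_hot_backup()                  # seconds, not hours
    try:
        snap = array.create_snapshot(lun)  # near-instant, taken on the array
    finally:
        db.end_hot_backup()                # release the database immediately
    # The heavy lifting happens array-to-Data-Domain, bypassing the app host:
    array.replicate_snapshot(snap, target=data_domain)
    return snap
```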

That in itself is a good starting point for performance improvement – your database is only in hot backup mode for a few seconds at most. But then the real power of ProtectPoint kicks in. You see, when you first configure ProtectPoint, a block based copy from primary storage to Data Domain storage starts in the background straight away. With Change Block Tracking incorporated into ProtectPoint, the data transfer from primary to protection storage kicks into high gear – only the changes between the last copy and the current state at the time of the snapshot need to be transferred. And the Data Domain handles creation of a virtual synthetic full from each backup – full backups daily at the cost of an incremental. We're seeing backup performance improvements on the order of 20x or more with ProtectPoint.
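
Conceptually, the changed-block transfer and the virtual synthetic full both reduce to fingerprint comparisons, along these lines (a generic sketch of the technique, not Data Domain internals):

```python
import hashlib

BLOCK = 1 << 20  # 1MiB blocks, purely illustrative

def fingerprint_map(image: bytes):
    """Map block index -> content hash for a backup image."""
    return {i: hashlib.sha256(image[i:i + BLOCK]).hexdigest()
            for i in range(0, len(image), BLOCK)}

def changed_blocks(prev, curr):
    """Change Block Tracking in spirit: only blocks whose fingerprint
    moved since the last copy need to cross the wire."""
    return [i for i, fp in curr.items() if prev.get(i) != fp]

def virtual_synthetic_full(prev, curr):
    """The new 'full' simply references blocks already on protection
    storage plus the freshly transferred deltas - a full backup at the
    cost of an incremental."""
    return {i: ("new" if prev.get(i) != fp else "existing", fp)
            for i, fp in curr.items()}
```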

There’s some great videos explaining what ProtectPoint does and the sorts of problems it solves, and even it integrating into NetWorker 9.

Database and Application Agents

I’ve been in the data protection business for nigh on 20 years, and if there’s one thing that’s remained remarkably consistent throughout that time it’s that many DBAs are unwilling to give up control over the data protection configuration and scheduling for their babies.

It’s actually understandable for many organisations. In some places its entrenched habit, and in those situations you can integrate data protection for databases directly into the backup and recovery software. For other organisations though there’s complex scheduling requirements based on batch jobs, data warehousing activities and so on which can’t possibly be controlled by a regular backup scheduler. Those organisations need to initiate the backup job for a database not at a particular time, but when it’s the right time, and based on the amount of data or the amount of processing, that could be a highly variable time.

The traditional problem with database and application backups being handled outside of the backup product is that the backup data ends up being written to primary storage, which is expensive. It's normally more than one copy, too. I'd hazard a guess that 3-5 copies is the norm for most database backups when they're being written to primary storage.

The Database and Application agents for Data Domain allow a business to sidestep all these problems by centralising the backups for mission critical systems onto highly protected, cost effective, deduplicated storage. The plugins work directly with each supported application (Oracle, DB2, Microsoft SQL Server, etc.) and give the DBA full control over managing the scheduling of the backups while ensuring those backups are stored under management of the data protection team. What’s more, primary storage is freed up.

Formerly known as “Data Domain Boost for Enterprise Applications” and “Data Domain Boost for Microsoft Applications”, the Database and Application Agents respectively reached version 2 this year, enabling new options and flexibility for businesses. Don’t just take my word for it though: check out some of the videos about it here and here.

CloudBoost 2.0

CloudBoost version 1 was released last year and I’ve had many conversations with customers interested in leveraging it over time to reduce their reliance on tape for long term retention. You can read my initial overview of CloudBoost here.

2015 saw the release of CloudBoost 2.0. This significantly extends the storage capabilities for CloudBoost, introduces the option for a local cache, and adds the option for a physical appliance for businesses that would prefer to keep their data protection infrastructure physical. (You can see the tech specs for CloudBoost appliances here.)

With version 2, CloudBoost can now scale to 6PB of cloud managed long term retention, and every bit of that data pushed out to the cloud is deduplicated, compressed and encrypted for maximum protection.
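
The order of operations matters here: you deduplicate and compress before encrypting, because well-encrypted data neither deduplicates nor compresses. A toy sketch of such a pipeline (illustrative only – not CloudBoost internals – using the third-party cryptography package):

```python
import hashlib
import zlib
from cryptography.fernet import Fernet  # pip install cryptography

cipher = Fernet(Fernet.generate_key())  # real deployments manage keys carefully
seen = set()                            # fingerprints of chunks already shipped

def ship_chunk(chunk: bytes, upload):
    """Deduplicate, compress, then encrypt a chunk bound for cloud storage."""
    fp = hashlib.sha256(chunk).hexdigest()
    if fp in seen:                      # dedupe: identical chunk already stored
        return fp
    seen.add(fp)
    upload(fp, cipher.encrypt(zlib.compress(chunk)))
    return fp
```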

Spanning

Cloud is a big topic, and a big topic within that big topic is SaaS – Software as a Service. Businesses of all types are placing core services in the Cloud to be managed by providers such as Microsoft, Google and Salesforce. Office 365 Mail is proving very popular for businesses who need enterprise class email but don’t want to run the services themselves, and Salesforce is probably the most likely mission critical SaaS application you’ll find in use in a business.

So it’s absolutely terrifying to think that SaaS providers don’t really backup your data. They protect their infrastructure from physical faults, and their faults, but their SLAs around data deletion are pretty straight forward: if you deleted it, they can’t tell whether it was intentional or an accident. (And if it was an intentional delete they certainly can’t tell if it was authorised or not.)

Data corruption and data deletion in SaaS applications are far too common, and for many businesses, sadly, it's only after it happens for the first time that people become aware of what those SLAs do and don't cover.

Enter Spanning. Spanning integrates with the native hooks provided in Salesforce, Google Apps and Office 365 Mail/Calendar to protect the data your business relies on so heavily for day to day operations. The interface is dead simple, the pricing is straight forward, but the peace of mind is priceless. 2015 saw the introduction of Spanning for Office 365, which has already proven hugely popular, and you can see a demo of just how simple it is to use Spanning here.

Avamar 7.2

Avamar got an upgrade this year, too, jumping to version 7.2. Virtualisation got a big boost in Avamar 7.2, with new features including:

  • Support for vSphere 6
  • Scalable up to 5,000 virtual machines and 15+ vCenters
  • Dynamic policies for automatic discovery and protection of virtual machines within subfolders
  • Automatic proxy deployment: This sees Avamar analyse the vCenter environment and recommend where to place virtual machine backup proxies for optimum efficiency. Particularly given the updated scalability in Avamar for VMware environments, taking the hassle out of proxy placement is going to save administrators a lot of time and guess-work. You can see a demo of it here.
  • Orphan snapshot discovery and remediation
  • HTML5 FLR interface

That wasn’t all though – Avamar 7.2 also introduced:

  • Enhancements to the REST API to cover tenant level reporting
  • Scheduler enhancements – you can now define the start dates for your annual, monthly and weekly backups
  • You can browse replicated data from the source Avamar server in the replica pair
  • Support for DDOS 5.6 and higher
  • Updated platform support including SLES 12, Mac OS X 10.10, Ubuntu 12.04 and 14.04, CentOS 6.5 and 7, Windows 10, VNX2e, Isilon OneFS 7.2, plus a 10GbE NDMP accelerator

Data Domain 9500

Already the market leader in data protection storage, EMC continued to stride forward with the Data Domain 9500, a veritable beast. Some of the quick specs of the Data Domain 9500 include:

  • Up to 58.7 TB per hour (when backing up using Boost)
  • 864TB usable capacity for the active tier, up to 1.7PB usable when an extended retention tier is added. That's the physical storage; once deduplication is factored in, the logical protection capacity extends well into the multi-PB range. The spec sheet gives some details based on a mixed environment where the protected data might be anywhere from 8.6PB to 86.4PB (see the quick arithmetic after this list).
  • Support for traditional ES30 shelves and the new DS60 shelves.
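
The arithmetic behind that capacity bullet is straightforward – logical capacity is just physical capacity multiplied by whatever deduplication ratio your workload achieves:

```python
usable_tb = 864  # DD9500 active tier, physical
for ratio in (10, 50, 100):   # deduplication ratio depends entirely on workload
    print(f"{ratio}x dedupe -> {usable_tb * ratio / 1000:.1f} PB logical")
# 10x -> 8.6 PB, 50x -> 43.2 PB, 100x -> 86.4 PB
```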

Actually it wasn’t just the Data Domain 9500 that was released this year from a DD perspective. We also saw the release of the Data Domain 2200 – the replacement for the SMB/ROBO DD160 appliance. The DD2200 supports more streams and more capacity than the previous entry-level DD160, being able to scale from a 4TB entry point to 24TB raw when expanded to 12 x 2TB drives. In short: it doesn’t matter whether you’re a small business or a huge enterprise: there’s a Data Domain model to suit your requirements.

Data Domain Dense Shelves

The traditional ES30 Data Domain shelves hold 15 drives. 2015 also saw the introduction of the DS60 – dense shelves capable of holding sixty disks. With support for 4TB drives, that means a single 5RU Data Domain DS60 shelf can hold as much as 240TB in drives.

The benefits of high density shelves include:

  • Better utilisation of rack space (60 drives in one 5RU shelf vs 60 drives in 4 x 3RU shelves – 12 RU total)
  • More efficient for cooling and power
  • Scale as required – each DS60 takes 4 x 15 drive packs, allowing you to start with just one or two packs and build your way up as your storage requirements expand

DDOS 5.7

Data Domain OS 5.7 was also released this year, and includes features such as:

  • Support for DS60 shelves
  • Support for 4TB drives
  • Support for ES30 shelves with 4TB drives (DD4500+)
  • Storage migration support – migrate those older ES20 style shelves to newer storage while the Data Domain stays online and in use
  • DDBoost over fibre-channel for Solaris
  • NPIV for FC, allowing up to 8 virtual FC ports per physical FC port
  • Active/Active or Active/Passive port failover modes for fibre-channel
  • Dynamic interface groups are now supported for managed file replication and NAT
  • More Secure Multi-Tenancy (SMT) support (sketched in code after this list), including:
    • Tenant-units can be grouped together for a tenant
    • Replication integration:
      • Strict enforcing of replication to ensure source and destination tenant are the same
      • Capacity quota options for destination tenant in a replica context
      • Stream usage controls for replication on a per-tenant basis
    • Configuration wizard support for SMT
    • Hard limits for stream counts per MTree
    • Physical Capacity Measurement (PCM) providing space utilisation reports for:
      • Files
      • Directories
      • MTrees
      • Tenants
      • Tenant-units
  • Increased concurrent MTree counts:
    • 256 MTrees for the Data Domain 9500
    • 128 MTrees for each of the DD990, DD4200, DD4500 and DD7200
  • Stream count increases – DD9500 can now scale to 1,885 simultaneous incoming streams
  • Enhanced CIFS support
  • Open file replication – great for backups of large databases, etc. This allows the backup to start replicating before it’s even finished.
  • ProtectPoint for XtremIO
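
To give a feel for the multi-tenancy stream controls in that list, here's a toy per-tenant hard limit (my own sketch, not DDOS code):

```python
class TenantStreams:
    """Toy hard limit on concurrent streams per tenant, in the spirit of SMT."""
    def __init__(self, limits):
        self.limits = limits                    # e.g. {"tenant-a": 32}
        self.active = {t: 0 for t in limits}

    def open_stream(self, tenant):
        if self.active[tenant] >= self.limits[tenant]:
            raise RuntimeError(f"{tenant}: stream quota exhausted")
        self.active[tenant] += 1

    def close_stream(self, tenant):
        self.active[tenant] = max(0, self.active[tenant] - 1)
```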

Data Protection Suite (DPS) for VMware

DPS for VMware is a new socket-based licensing model for mid-market businesses that are highly virtualised and want an effective enterprise-grade data protection solution. Providing Avamar, Data Protection Advisor and RecoverPoint for Virtual Machines, DPS for VMware is priced based on the number of CPU sockets (not cores) in the environment.

DPS for VMware is ideally suited for organisations that are either 100% virtualised or have just a few remaining physical machines. You get the full range of Avamar backup and recovery options, Data Protection Advisor to monitor and report on data protection status, capacity and trends within the environment, and RecoverPoint for highly efficient journaled replication of critical virtual machines.

…And one minor thing

There was at least one other bit of data protection news this year, and that was me finally joining EMC. I know in the grand scheme of things it’s a pretty minor point, but after years of wanting to work for EMC it felt like I was coming home. I had worked in the system integrator space for almost 15 years and have a great appreciation for the contribution integrators bring to the market. That being said, getting to work from within a company that is so focused on bringing excellent data protection products to the market is an amazing feeling. It’s easy from the outside to think everything is done for profit or shareholder value, but EMC and its employees have a real passion for their products and the change they bring to IT, business and the community as a whole. So you might say that personally, me joining EMC was the biggest data protection news for the year.

In Summary

I’m willing to bet I forgot something in the list above. It’s been a big year for Data Protection at EMC. Every time I’ve turned around there’s been new releases or updates, new features or functions, and new options to ensure that no matter where the data is or how critical the data is to the organisation, EMC has an effective data protection strategy for it. I’m almost feeling a little bit exhausted having come up with the list above!

So I’ll end on a slightly different note (literally). If after a long year working with or thinking about Data Protection you want to chill for five minutes, listen to Kate Miller-Heidke’s cover of “Love is a Stranger”. She’s one of the best artists to emerge from Australia in the last decade. It’s hard to believe she did this cover over two years ago now, but it’s still great listening.

I’ll see you all in 2016! Oh, and don’t forget the survey.

What’s new in 8.2?

Jun 30, 2014
 

NetWorker 8.2 entered Directed Availability (DA) status a couple of weeks ago. Between finishing up one job and looking for a new one, I’d been a bit too busy to blog about 8.2 until now, so here goes…


First and foremost, NetWorker 8.2 brings some additional functionality to VBA. VBA was introduced as the new virtual machine backup process in NetWorker 8.1. Closely integrating Avamar backup technologies, VBA leverages a special, embedded virtual Avamar node to achieve high performance backup and recovery. Not only can policies defined in NMC for VBA be assigned by a VMware administrator in the vSphere Web Client, so too can image level backup and recovery operations be executed there. Of course, regularly scheduled backups are still controlled by NetWorker.

That was the lay of the land in 8.1 – 8.2 reintroduces some of the much-loved VADP functionality, allowing for a graphical visualisation map of the virtual environment from within NMC.

Continuing that Avamar/VMware integration, NetWorker 8.2 also gets something that Avamar 7 administrators have had for a while – instant-on recoveries when backups are performed to Data Domain. There’s also an emergency restore option to pull a VM back to an ESX host even if vCenter is unavailable, and greater granularity of virtual machine backups – individual VMDK files can be backed up and restored if necessary. For those environments where VMware administrators aren’t meant to be starting backups outside of the policy schedules, there’s also the option now to turn off VBA Adhoc Backups in NMC.

Moving on from VMware, there’s some fantastic snapshot functionality in NetWorker 8.2. This is something I’ve not yet had a chance to play around with, but by all accounts, it’s off to a promising start and will continue to get deeper integration with NetWorker over time. Currently, NetWorker supports integrating with snapshot technologies from Isilon, VNX, VNX2, VNX2e and NetApp, though the level of integration depends on what is available from each array. This new functionality is called NSM for NAS (NetWorker Snapshot Management).

The NSM integration allows NAS hosts to be integrated as clients within NetWorker for policy management, whilst still working from the traditional “black box” scenario of NAS systems not getting custom agents installed. There’s a long list of functionality, including:

  • Snapshot discovery:
    • Finding snapshots taken on the NAS outside of NetWorker’s control (either before integration, or by other processes)
    • Facilitate roll-over and recovery from those snapshots (deleting isn’t available)
    • Available as a scheduled task or via manual execution
  • Snapshot operations:
    • Create snapshots
    • Replicate snapshots
    • Move snapshots out to other storage (Boost, tape etc) using NDMP protocols
    • Lifecycle management of snapshots and replicas via retention policies
    • Recover from snapshots
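
Stripped right down, that policy loop amounts to discover, expire and roll over – sketched below with hypothetical array calls. (Note that discovered third-party snapshots can be rolled over and recovered from, but not deleted.)

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)   # illustrative retention policy

def nsm_cycle(array, rollover, now=None):
    """One pass of discovery, retention and roll-over (hypothetical NAS API)."""
    now = now or datetime.utcnow()
    for snap in array.list_snapshots():        # includes externally created snaps
        if snap.created_by_nsm and now - snap.created > RETENTION:
            array.delete_snapshot(snap)        # lifecycle applies to managed snaps
        elif not snap.rolled_over:
            rollover(snap)                     # NDMP roll-out to Boost, tape etc.
```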

Data Domain Boost integration gets a … well, boost, with support for Data Domain's secure multi-tenancy. This supports scaling for large systems designed for service providers, with up to 512 Boost devices supported per secure storage unit on the Data Domain. Whereas previously there was a requirement for a single Data Domain Boost user account across all Data Domain devices, this now allows for better tightening of access.

One of my gripes with BBB (Block Based Backup) in NetWorker 8.1 has been addressed in 8.2 – if you're stuck using ADV_FILE devices rather than Data Domain, you can now perform BBB even if the storage node being written to is not Windows. Another time-saving option that was introduced in 8.1, Parallel Save Stream (PSS), has been extended to support Windows systems, and has also been updated to support Synthetic and Virtual Synthetic Fulls. In 8.1 it had only supported Unix/Linux, and only in traditional backup models.

Continuing the trend towards storage nodes being treated as a fluid rather than locked resource mapping, there's now an autoselect storage node option which, if enabled, allows NetWorker to select the storage node itself during backup and recovery operations. When enabled it overrides any storage node preferences assigned to individual clients, and NetWorker looks for local storage nodes wherever possible.
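
In spirit, the selection logic is as simple as this sketch of the documented behaviour (not NetWorker source):

```python
def pick_storage_node(client, nodes):
    """Prefer a storage node local to the client; otherwise any available one."""
    candidates = [n for n in nodes if n.available]
    if not candidates:
        raise RuntimeError("no storage node available")
    local = [n for n in candidates if n.host == client.host]
    return local[0] if local else candidates[0]
```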

There’s a few things that have left NetWorker in 8.2, which are understandable: Support for Windows XP, Windows 2003 and the Change Journal Manager. If you still to protect Windows XP or Windows Server 2003, be sure to keep your installers for 8.1. and lower client software around.

There’s some documentation updates in NetWorker 8.2 as well:

  • Server Disaster Recovery and Availability Best Practices – This describes the disaster recovery process for the NetWorker server, including best practices for ensuring you’re prepared for a disaster recovery situation.
  • Snapshot Management for NAS Devices Integration – This documents the aforementioned NSM for NAS new feature of NetWorker.
  • Upgrading to NetWorker 8.2 from a Previous Release – This covers off in fairly comprehensive detail how you can upgrade your NetWorker environment to 8.2.

In years gone by I’ve found that documentation updates have been a lagging component of NetWorker, but that’s long since disappeared. With each new version of NetWorker now we’re seeing either entirely new documents, or substantially enhanced documentation (or both). This speaks volumes of the commitment EMC has to NetWorker.

Jan 16, 2014
 

Elect2014 logoI’m rather pleased to say I’ve been included for the second year running in the EMC Elect programme. Last year was the first time the programme was run and while at times my schedule prevented full participation, I’ve got to say it was an excellent community to be part of.

It’s fair to say I stay fairly focused on backup and data protection in general – it’s a niche area within a niche area, which sometimes creates interesting headaches, but one thing I can be guaranteed on is that EMC remains committed to getting as much information out into their various product communities as possible. It’s almost invariably the case that if you look, you’ll find it.

Elect gives me the opportunity to see more than just backup and recovery. Last year, for instance, I was lucky enough to get to see the VNX MCx Speed2Lead launch live, in Milan. The trip itself was as fast as the products at the launch … from Australia to Italy and back again in under a week meant for a lot of time on planes. A lot. But it was worth it to see live how invested EMC are in their products. Yes, there was criticism of the event, but I stand by my response to that criticism: the storage industry as a whole is too often seen as the "boring" part of the entire IT industry, and it's refreshing to see a company encouraging their employees, their users and their partners to be proud of what they're doing.

I’m looking forward to seeing what EMC Elect 2014 brings, and hope to engage a lot more than I found time for last year – the rewards of being connected to a community of experts are obvious!

***

To see a comprehensive list of the EMC Elect 2014 members (well, certainly those on Twitter), check out this EMC Corporate list.

Addendum – the full official list is over at the EMC Community Network.

Speed2Lead: Launch, not hypegasm

Sep 07, 2013
 

I recently went to the EMC Speed2Lead/SpeedToLead product launch in Milan.

After the launch, I saw that The Register had published a piece by Martin Glassborow, aka @Storagebod, called Snide hashtags, F1 cars and death by PowerPoint: I'm sick of EMC hypegasms.

I’m a fan of Martin; I’ve followed him on twitter for some time and love his blog on account of his frank honesty from the customer-perspective.

But that’s not to say we both always agree on everything, and this is  one of those times when I disagree with him – mostly.

The standard disclaimers apply – EMC paid for my travel and accommodation. If you think that makes me an EMC stooge, then consider as well that the travel was around 33,000km in economy class and I contracted a serious infection during my trip as a result of all that economy class seating. (Indeed, as I write this, the clock is ticking down on a decision as to whether I need to be hospitalised. I’d like to think those two things balance each other out.)

So what were Martin’s beefs? I’ll quote from his article:

It’s [sic] ongoing teaser campaign started off badly when the storage giant’s marketing types put up a “sneak preview” video and decided to have a dig at their lead competitor – then the flacks started tweeted [sic] using the hashtag #NotAppy.

OK, there’s parts here I’m inclined to agree with. I think the vendor cat-fights in any industry get silly. As far as such heated discussions go though, this one did at least seem fairly restrained. But I’d have  preferred not to have seen a “NotAppy” hashtag throughout the campaign. (Regardless though, the storage industry have kept things fairly tame for the last couple of years – unlike say, the SmartPhone industry, which often resembles something closer to an all-out brawl.)

Next quote from Martin:

EMC is just very bad at this sort of thing; speeds, feeds, marketing mumbo-jumbo linked in with videos and stunts that trash competitors, bore the audience and add little value, in my honest opinion. But all with the highest production values.

I’ve been in EMC presentations that were full of speeds and feeds. I don’t honestly think this was one of these sorts of presentations. In fact, after the presentation, myself and the other Elect attendees got to speak to Dennis Vilfort who spoke passionately about how uninteresting pure speeds and feeds are to customers – and why EMC focused on a simple metric: how many virtual machines you can run on the new revamped VNX line. People who don’t actually sell storage often don’t care about IOPS, and so while EMC did mention IOPS it played second fiddle to their “every customer will understand this” message: VMs.

It’s true, there are other workloads people can put on a storage system, but by and large (using that classic 80/20) rule, EMC are finding that people aren’t buying silo/isolated storage any more – i.e., the majority of companies aren’t buying an array for Oracle, another array for File serving and yet another for virtual machine hosting.

What’s more, because so many organisations are pushing towards the goal of 100% virtualisation*, workloads on arrays are shifting considerably. In the “olden days”, with relatively few applications or systems leveraging an array, workloads were easier and more predictable – consider though an array being leveraged to provide say, storage for 1,000 virtual machines. As per the way virtualisation works, there’ll be a huge mix of applications and services provided by those virtual machines, and in the same way CPU and memory usage will be combined across all of them, so too will storage performance.

Sum of all workloads

(Excuse the fuzzy photo: the memory card on my camera failed part way through the launch and I had to resort to mobile phone photos.)

So, talking “system X with Y resources can run Z VMs” isn’t speeds-and-feeds, it’s bricks-and-mortar – it’s the building blocks of storage utilisation in a hell of a lot of organisations. Not in every organisation, of course, but I’d be willing to bet in 80% of them at least.

To me, there was a subtle (actually, at times not-so-subtle) message to the EMC launch: public cloud doesn’t have to be seen as the only end-game in the IT industry. Amazon et al certainly want you to believe that, but given what’s come about regarding the NSA and GCHQ, etc., over the last few months, big-server public cloud currently has the stench of 3-week unwashed underwear amongst anyone who gives a damn about data security.

Public cloud, after all, is built on trust, and trust has been fairly heavily eroded amongst those who care strongly about security.

That being said, the concept behind cloud is starting to gain momentum, and over the past eighteen months to two years, I’ve come a long way on my opinion of cloud – particularly when it comes to management and provisioning. This is, after all, the start of the commoditisation of the industry – self service point and click allocation of storage, compute and other services. So, it’s clear EMC are aiming all this talk of VMs at businesses who are switching to a private cloud model internally, and those businesses that are hosting private cloud environments.

Cloud is good, they’re saying, as long as you can trust it. But I digress…

OK, there were definitely plenty of PowerPoint slides in the launch, and by comparison there are absolutely no PowerPoint slides in an Apple launch … unless of course you count all those Keynote slides. Being brutally honest, you do need to accept that a product launch is going to have slides. And slides. And a few more slides. But as long as you don't fall into any of the Life After Death by PowerPoint traps, you're heading in the right direction. Also, having watched some of the EMC speakers prepare for the launch in Milan the day before, one thing was abundantly clear: they were focused not only on the flow of the presentation, but on simplifying content.

Was every slide perfect? No. Was every slide full of speeds and feeds? No.

And here’s the point where I fully disagree with Martin’s thoughts:

So what did the event feel like? It felt like an internal knees-up, a shindig for EMC bods to high-five themselves while their customers wait patiently. This felt more like an EMC party of aeons ago along with a smattering of cheerleaders from social networks.

You know what it felt like to me? An honest-to-goodness real product launch.

When I saw Martin’s article being tweeted around by The Register this was my first response:

Q: What’s the problem with storage vendors doing big launches? Why are some tech areas *verboten* for a bit of show & spectacle?

OK, I’m not the oldest person in IT, and I don’t have the greatest longevity in it either, but I have been working in the back-end of the IT industry my entire career.

Systems administration. Backup and recovery. Storage.

These aren’t glamorous positions. Even consulting isn’t glamorous per se if the actual topic is considered unglamorous. There’s a world “system administration appreciation day” which I’m fairly certain was started by system administrators and is unknown of by most of the world’s workers. There’s a backup day which again was self declared and mostly unknown about. I’m not aware of any storage day.

Even people in the industry talk about it being a fairly thankless one. In many companies no-one really thinks much of the system, storage or backup administrators until something goes wrong. And even then, Helpdesk is still the face of IT.

So because our part of the IT industry is often considered to be “un”glamorous, do we have to stick to that notion? Does it have to be simple lit-room launches with a whiteboard and a product assembly demonstration?

Because in a world with increasing commoditisation of IT, where startups promising the world (and mostly just hoping someone will buy them for a few billion bucks) hire people like it’s a talent show, and IT/being a geek is considered to be more desirable, the best way to hire good fresh talent is to make your segment of the industry look as boring as possible.

And because the best way to thank your employees for doing a fantastic job and putting in a lot of hard work is to say “We’ll make up a tech specs sheet for customers in a while. Have a great weekend!”

The code for VNX was substantially rewritten and the resulting machine is a significant boost on the previous generation. Don’t the people involved in that deserve to feel like their employer is proud of them? Don’t they deserve to see a bit of spectacle so they can say “I was part of that!”?

Yes, I was at the launch and yes, EMC paid for my trip. But I sat there during the presentation looking around at the spectacle and effort EMC had put into it thinking “after all these years in the industry, it’s nice to see genuine, palpable enthusiasm about storage”.

Storage is boring. Backup is boring.

Only if we keep letting it be like that. Kudos to EMC for putting on a show that demonstrates the level of pride they take in their products, and the level of appreciation they have for their staff. Kudos to EMC for rejecting the notion that storage has to be boring.


* Regarding 100% virtualisation: As an end-goal, I still don’t think it’s an inevitable one. Some companies will be able to achieve it, and others will have reasons not to. The technical reasons behind it are evaporating for at least a fair percentage of businesses, however.

The quickening

Sep 04, 2013
 


I recently attended EMC Forum, where some fairly impressive figures were rolled out relating to what EMC has spent on R&D and acquisitions over the last 10 years*. Backed up by their corporate profile, those figures are:

Our differentiated value stems from our sustained and substantial investment in research and development, a cumulative investment of $16.5 billion since 2003. To strengthen our core business and extend our market to new areas, EMC has invested $17 billion in acquisitions over the same period and has integrated more than 70 technology companies.

So over ten years, that’s almost 35 billion dollars invested by EMC into products, technology, skills and innovation. No matter how you slice that, it’s an impressive commitment towards capabilities growth and leadership in data.

EMC demonstrates that commitment yet again with the Speed2Lead launch in Milan.

I’ll be in Milan for the launch, a guest of EMC**, and I’m looking forward to getting some additional details of the new VNX range in particular – but the information EMC has whet my appetite with is pretty impressive so far.

XtremSW Cache 2.0

v2 of XtremSW Cache sees it really take off via:

  1. Integration with EMC Arrays;
  2. Working with:
    • IBM AIX;
    • Server flash;
    • Oracle RAC (coming in October);
    • VMware vCenter Integration;
  3. XtremSW Management Center will provide strong control and efficiency options – a single point of management when deploying multiple cache instances.

The EMC array integration I think is going to prove to be particularly popular:

  • VMAX:
    • Strong support out of the box for VMAX integration – XtremSW Cache will be manageable from within Unisphere so that administrators can choose which LUNs should be cached based on trending analysis;
    • Prefetching entire tracks – so if you’ve got a read-intensive application leveraging VMAX storage you’ll get a considerable performance boost – IO rates can increase by 25%;
    • Cache coordination (optimised read miss) moves the read cache tier to the host. That allows the array of course to use resources elsewhere, which can lead to increasing IOPS by as much as 2.5 times.
  • VNX:
    • For the VNX series, too, XtremSW Cache 2.0 starts with Unisphere Remote, which will recommend LUNs to cache based on trending information. (EMC say over time the integration between VNX and XtremSW Cache will be on par with the VMAX support);
    • Additionally, you’ll be able to monitor performance and health, as well as the configuration for XtremeSW Cache all from one location.

VNX

VNX has jumped to being highly multi-core aware in its processing and capabilities. While VNX systems have used multi-core CPUs for a while, the architecture wasn’t taking full advantage of those powerful cores. In particular, depending on the workload involved you might see one or two cores particularly heavily utilised, and others mostly idle. This new design, MCx, sees EMC VNX arrays fully utilising up to 32 cores, with significant boosts to performance: full symmetric multi-processing. Some of the performance improvements being cited are going to be very popular:

  • 6.3x faster for host-side IOPS;
  • 4.9x better on file serving IOPS;
  • Capable of delivering 1 million IOPS at sub-1ms latency.

No, there’s no typos on that last point.

Capacity, Performance or Both?

  • With support for up to 1,500 storage devices (SSD, HDD or any mix you want), VNX is offering serious capacity and performance capabilities. Depending on your layout of HDDs vs SSDs, your performance and capacity will scale considerably in one of several directions – the more hard drives, the closer you'll be to hitting maximum capacity; the more SSDs, the closer you'll be to hitting maximum IOPS;
  • Of course, if you want to get maximum bang for buck with Flash (and FAST), and your data profile supports it, deduplication may very well be a good thing to turn on;
  • VNX is moving to active-active storage-processor capabilities. That’s starting with traditional, thickly provisioned LUNs, but will over time move to encapsulate the rest of the VNX functionality. This is a big change – and a big win for customers who want higher performance but for pricing considerations need to stay within the VNX range;
  • The new VNX 8000 being released in Milan is an absolute beast: with scalability up to 6PB and support for running a workload of 8,000 virtual machines, it's going to be a major boost to datacentres and cloud environments.

There’s more, of course – much more. I’ll be particularly looking forward to some discussions with EMC folk regarding the performance increases we’re likely to see out of the new VNX MCx architecture when it comes to NDMP.

EMC AppSync

Demonstrating its continuing focus on protection and recovery, EMC's AppSync system offers a new storage focus to protecting critical applications. With a configuration system based around SLAs, you can define AppSync protection strategies based on Gold (concurrent remote and local protection), Silver (remote protection) and Bronze (local protection). Of course, you can change those names to suit your environment, but Gold/Silver/Bronze often works quite well to define protection levels.

The advantage of that of course is that once you’ve got those SLA policies defined, deciding the protection strategy for an application comes down to picking whether you want Gold, Silver or Bronze…
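
In configuration terms, the tiers boil down to a mapping from SLA name to a list of protection actions – something like this illustrative sketch:

```python
SLA_TIERS = {
    "Gold":   ["local_snapshot", "remote_replica"],  # concurrent local + remote
    "Silver": ["remote_replica"],
    "Bronze": ["local_snapshot"],
}

def protect(app, tier):
    """Protecting an application reduces to picking a tier."""
    for action in SLA_TIERS[tier]:
        app.run_protection(action)   # hypothetical per-action hook
```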

The applications covered by AppSync are an important collection:

  • SQL Server 2008;
  • VMware NFS file replication;
  • Exchange 2010 and 2013 (EMC cite being able to protect a 22TB Exchange database in under 5 minutes);
  • VMware Datastore Protection.

[not closing] Thoughts

A few years ago during a blogger forum organised by SNIA-AU, I was part of a group of people who visited EMC, IBM, HDS and NetApp. One of the most telling things said during the day was from Clive Gold at EMC Sydney. It was such a simple statement that I’m trusting myself to quote him verbatim after all this time:

People buy storage for capacity, but they upgrade storage for performance.

Developing big storage is almost a no-brainer. Think about it: your enterprise could attach 12TB of DAS USB-3 storage to most servers for less than $1000 per server. Totally crazy stuff, of course – the management overhead alone would be a nightmare. But that’s the thing: if you’re not worried about performance, storage is easy.

Speed2Lead shows me that Clive Gold wasn’t just speaking from a marketing statement – it’s something EMC fervently believes in: capacity is one thing, but delivering on performance is more important, because it’s performance that customers notice. As a storage admin, you’re not going to get any pats on the back that you’ve got 1PB of storage free and unallocated if the 500TB you do have allocated can’t service the IOPS requirements of the business.

EMC are calling this Speed2Lead … and they’re certainly speaking the truth.

Stay tuned for more. Following the event when I’ve had more time to digest the information further, discuss it with colleagues and customers, I’ll be posting some additional details.

For now, check out EMC’s launch page for Speed2Lead.


* Some companies tend to be dismissive of money spent on acquisitions, but I have a different perspective – companies that aren’t willing to acquire are steadfastly following the “not invented here” approach. In order to survive and grow, a company has to be willing to both invest in research/development and other companies that provide synergistic product sets or skills.

** Disclaimer: EMC flew me from Melbourne to Milan via Qantas and Emirates economy class, and put me up in the Westin Hotel. There were some transfers and a couple of meals included.