Feb 07, 2018
 

The world is changing, and data protection is changing with it. (OK, that sounds like an ad for Veridian Dynamics, but I promise I’m serious.)

One of the ways data protection is changing is the growth of backup environments themselves. It’s quite common these days to see multiple backup servers deployed in an environment – whether that’s due to acquisitions, required functionality, network topology or physical locations, the reason doesn’t really matter. What does matter is that as you increase the number of systems providing protection within an environment, you still want to be able to manage and monitor those systems centrally.

Data Protection Central (DPC) was released earlier this month. It’s designed from the ground up as a modern, HTML5 web-based system that lets you monitor your Avamar, NetWorker and Data Domain environments, providing health and capacity reporting on systems and backups. (It also builds on the Multi Systems Manager for Avamar to allow you to perform administrative functions within Avamar without leaving the DPC console – and, well, more is to come on that front over time.)

I’ve been excited about DPC for some time. You may remember a recent post of mine talking about Data Domain Management Center (DDMC); DPC isn’t (at the moment at least) a replacement for DDMC, but it’s built in the same spirit of letting administrators have easy visibility over their entire backup and recovery environment.

So, what’s involved?

Well, let’s start with the price. DPC is $0 for NetWorker and Avamar customers. That’s a pretty good price, right? (If you’re looking for the product page on the support website by the way, it’s here.)

You can deploy it in one of two ways. If you’ve got a SLES server deployed within your environment that meets the requirements, you can download a .bin installer to drop DPC onto that system. The other way – and quite a simple way, really – is to download a VMware OVA file so you can easily deploy it within your virtual infrastructure. (Remember, one of the ongoing themes of DellEMC Data Protection is to allow easy virtual deployment wherever possible.)

So yesterday I downloaded the OVA file and today I did a deployment. From start to finish, including gathering screenshots of its operation, that deployment, configuration and use took me about an hour or so.

When you deploy the OVA file, you’ll get prompted for configuration details so that there’s no post-deployment configuration you have to muck around with:

Deploying DPC as an OVA – Part 1

At this point in the deployment, I’ve already selected where the virtual machine will deploy, and what the disk format is. (If you are deploying into a production environment with a number of systems to manage, you’ll likely want to follow the recommendations for thick provisioning. I chose thin, since I was deploying it into my lab.)

You fill in standard networking properties – IP address, gateway, DNS, etc. Additionally, per the screenshot below, you can also immediately attach DPC to your AD/LDAP environment for enterprise authentication:

DPC Deployment, LDAP

I get into enough trouble at home for IT complexity as it is, so I don’t run LDAP any more – which meant there was nothing else for me to do on that front.
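(If you’d rather script the deployment than click through the vSphere wizard, ovftool can inject the same OVF properties at deploy time. The sketch below is just that – a sketch: the property keys, names and paths are illustrative assumptions rather than the documented DPC values, so probe the OVA first – running ovftool against just the .ova file prints the properties it actually exposes – and substitute your own vCenter path, datastore and network names.)

# Probe the appliance first to see which OVF properties it really exposes:
#   ovftool dpc.ova
#
# Then deploy non-interactively. NOTE: the --prop keys below are illustrative
# assumptions, not documented DPC property names - use the ones the probe reports.
ovftool \
  --acceptAllEulas \
  --name=dpc01 \
  --datastore=Lab-DS1 \
  --diskMode=thin \
  --net:"VM Network"="Backup-VLAN" \
  --prop:network.ip0=192.168.100.50 \
  --prop:network.netmask0=255.255.255.0 \
  --prop:network.gateway=192.168.100.1 \
  --prop:network.DNS=192.168.100.10 \
  dpc.ova \
  'vi://administrator%40vsphere.local@vcenter.lab.local/Lab/host/Cluster01'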

The deployment is quite quick, and after you’re done, you’re ready to power on the virtual machine.

DPC Deployment, ready to power on

In fact, one of the things you’ll want to be aware of is that the initial power on and configuration is remarkably quick. (After power-on, the system was ready to let me log on within 5 minutes or so.)

It’s an HTML5 interface – that means there’s no Java Web Start or anything like that; you simply point your web browser at the FQDN or IP address of the DPC server, and you can log in and access the system. (The documentation also includes details for changing the SSL certificate.)

DPC Login Screen

DPC follows Dell’s interface guidelines, so it’s quite a crisp and easy-to-navigate interface. The documentation includes details of your initial login ID and password, and of course, following security best practices, you’re prompted to change that default password on first login:

DPC Changing the Default Password

After you’ve logged in, you get to see the initial, default dashboard for DPC:

DPC First Login

Of course, at this point, it looks a wee bit blank. That makes sense – we haven’t added any systems to the environment yet. But that’s easily fixed, by going to System Management in the left-hand column.

DPC System Management

System management is quite straightforward – the icons directly under “Systems” and “Groups” are for add, edit and delete, respectively. (Delete simply removes a system from DPC; it doesn’t un-deploy the system, of course.)

When you click the add button, you’re prompted for the type of system you want to add into DPC. (Make sure you check out the version requirements in the documentation, available on the support page.) Adding systems is a very straightforward operation as well. For instance, for Data Domain:

DPC Adding a Data Domain

Adding an Avamar server is likewise quite simple:

DPC Adding an Avamar Server

And finally, adding a NetWorker server:

DPC Adding a NetWorker Server

Now, you’ll notice here that DPC prompts you about some additional configuration to do on the NetWorker server: configuring the NetWorker rabbitmq system so it can communicate with DPC. For now, that’s a manual process. After following the instructions in the documentation, I also added the following to my /etc/rc.d/rc.local file on my Linux-based NetWorker/NMC server to ensure it happened on every reboot, too:

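# Per the DPC documentation, re-add DPC monitoring to NetWorker's rabbitmq
# configuration on every reboot (nsrmqctl reads its commands from stdin).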
/bin/cat <<EOF | /opt/nsr/nsrmq/bin/nsrmqctl
monitor andoria.turbamentis.int
quit
EOF

It’s not just NetWorker, Avamar and Data Domain you can add – check out the list here:

DPC Systems you can add

Once I added all my systems, I went over to look at the Activities > Audit pane, which showed me:

DPC Activity Audit

Look at those times there – it took me all of 8 minutes to change the password on first login, then add 3 Data Domains, an Avamar server and a NetWorker server to DPC. DPC has clearly been designed for rapid deployment and a short time to readiness. And guess how many times I’d used DPC before? None.

Once systems have been added to DPC and it’s had time to poll the various servers you’re monitoring, you start getting the dashboards populated. For instance, shortly after their addition, my lab DDVE systems were getting capacity reporting:

DPC Capacity Reporting (DD)

You can drill into capacity reporting by clicking on the capacity report dashboard element to get a tabular view covering Data Domain and Avamar systems:

DPC Detailed Capacity Reporting

On that detailed capacity view, you see basic capacity details for Data Domains and, down the right-hand side, details of each MTree on the Data Domain as well. (My Avamar server is reported there, too.)

Under Health, you’ll see a quick view of all the systems you have configured and DPC’s assessment of their current status:

DPC System Health

In this case, I had two systems reported as unhealthy – one of my DDVEs had an email configuration problem I lazily had not gotten around to fixing, and likewise, my NetWorker server had a licensing error I hadn’t bothered to investigate and fix. Shamed by DPC, I jumped onto both and fixed them, pronto! That meant when I went back to the dashboards, I got an all clear for system health:

DPC Detailed Dashboard

I wanted to correct those zeroes, so I fired off a backup in NetWorker, which resulted in DPC updating pretty damn quickly to show something was happening:

DPC Detailed Dashboard, Backup Running

Likewise, when the backup completed and cloning started, the dashboard was updated quite promptly:

DPC Detailed Dashboard, Clone Running

You can also see details of what’s been going on via the Activities > System view:

DPC Activities – Systems

Then, with a couple of backup and clone jobs run, the Detailed Dashboard was updated a little more:

DPC, Detailed Dashboard More Use

Now, I mentioned before that DPC takes on some Multi Systems Manager functionality for Avamar, viz.:

DPC, Avamar Systems Management

So that’s back in the Systems Management view. Clicking the horizontal ‘…’ item next to a system lets you launch the individual system management interface, or in the case of Avamar, also manage policy configuration.

DPC, Avamar Policy View

In that policy view, you can create new policies, initiate jobs, and edit existing configuration details – all without having to go into the traditional Avamar interface:

DPC, Avamar Schedule Configuration

DPC, Avamar Retention Configuration

DPC, Avamar Policy Editing

That’s pretty much all I’ve got to say about DPC at this point in time – other than to highlight the groups function in System Management. By defining groups of resources (grouped however you like), you can filter dashboard views not only for individual systems but for groups too, allowing quick and easy review of very specific hosts:

DPC System Management – Groups

In my configuration I’ve grouped systems by whether they’re associated with an Avamar backup environment or a NetWorker backup environment, but you can configure groups however you need. Maybe you have services broken up by state or country, or maybe you have them distributed by customer or by the service you’re providing. Regardless of how you’d like to group them, you can filter through to them in DPC dashboards easily.

So there you go – that’s DPC v1.0.1. It’s honestly taken me more time to get this blog article written than it took me to deploy and configure DPC.

Note: Things I didn’t show in this article:

  • Search and Recovery – That’s where you’d add a DP Search system (I don’t have DP-Search deployed in my lab)
  • Reports – That’s where you’d add a DPA server, which I don’t have deployed in my lab either.

Search and Recovery lets you springboard into the awesome DP-Search web interface, and Reports will drill into DPA and extract the most popular reports people tend to access in DPA, all within DPC.

I’m excited about DPC and the potential it holds over time. And if you’ve got an environment with multiple backup servers and Data Domains, you’ll get value out of it very quickly.

Dec 30, 2017
 

With just a few more days of 2017 left, I thought it opportune to make the last post of the year a summary of some of what we’ve seen in the field of data protection in 2017.

2017 Summary

It’s been a big year, in a lot of ways, particularly at DellEMC.

Towards the end of 2016, but definitely leading into 2017, NetWorker 9.1 was released. That meant 2017 started with a bang, courtesy of the new NetWorker Virtual Proxy (NVP, or vProxy) backup system. This replaced VBA, allowing substantial performance improvements and some architectural simplification as well. I was able to generate some great stats right out of the gate with NVP under NetWorker 9.1, and that applied not just to Windows virtual machines but to Linux ones, too. NetWorker 9.1 with NVP allows you to recover tens of thousands of files or more from an image level backup in just a few minutes.

In March I released the NetWorker 2016 usage survey report – the survey ran from December 1, 2016 to January 31, 2017. That reminds me – the 2017 Usage Survey is still running, so you’ve still got time to provide data to the report. I’ve been compiling these reports now for 7 years, so there’s a lot of really useful trends building up. (The 2016 report itself was a little delayed in 2017; I normally aim for it to be available in February, and I’ll do my best to ensure the 2017 report is out in February 2018.)

Ransomware and data destruction made some big headlines in 2017 – repeatedly. Gitlab hit 2017 running with a massive data loss in January, which they subsequently blamed on a backup failure, when in actual fact it was a staggering process and people failure. It reminds one of the old manager #101 credo, “If you ASSuME, you make an ASS out of U and ME”. Gitlab’s issue may have, at a very small level, been a ‘backup failure’, but only in the same sense that running out of petrol because everyone in the house thought it was someone else’s turn to fill the tank is a ‘car failure’.

But it wasn’t just Gitlab. Next generation database users – specifically, MongoDB users – learnt the hard way that security isn’t properly, automatically enabled out of the box. Large numbers of MongoDB administrators around the world found their databases encrypted or lost as default security configurations were exploited on databases left accessible in the wild.

In fact, Ransomware became such a common headache in 2017 that it fell prey to IT’s biggest meme – the infographic. Do a quick Google search for “Ransomware Timeline”, for instance, and you’ll find a plethora of Ransomware infographics. (And who said Ransomware couldn’t get any worse?)

Appearing in February 2017 was Data Protection: Ensuring Data Availability. Yes, that’s right, I’m calling the release of my second book on data protection a big event in the realm of data storage protection in 2017. Why? This is a topic which is insanely critical to business success. If you don’t have a good data protection process and strategy within your business, you could literally lose everything that defines the operational existence of your business. There are three defining aspects I see in data protection considerations now:

  • Data is still growing
  • Product capability is still expanding to meet that growth
  • Too many businesses see data protection as a series of silos, unconnected – storage, virtualisation, databases, backup, cloud, etc. (Hint: They’re all connected.)

So on that basis, I do think a new book whose focus is to give a complete picture of the data storage protection landscape is important to anyone working in infrastructure.

And on the topic of stripping the silos away from data protection, 2017 well and truly saw DellEMC cement its lead in what I refer to as convergent data protection. That’s the notion of combining data protection techniques from across the continuum to provide new methods of ensuring SLAs are met, impact is eliminated, and data hops are minimised. ProtectPoint was first introduced to the world in 2015 and has evolved considerably since then. ProtectPoint allows primary storage arrays to integrate with data protection storage (e.g., VMAX3 to Data Domain) so that those really huge databases (think 10TB as a typical starting point) can have instantaneous, incremental-forever backups performed – all application integrated, but with no impact on the database server itself. ProtectPoint, though, was just the starting position. In 2017 we saw the release of Hypervisor Direct, which draws a line in the sand on what convergent data protection should be and do. Hypervisor Direct is there for your big, virtualised systems with big databases, eliminating any risk of VM-stun during a backup (an architectural constraint of VMware itself) by integrating RecoverPoint for Virtual Machines with Data Domain Boost, all while still being fully application integrated. (Mark my words – Hypervisor Direct is a game changer.)

Ironically, in a world where target-based deduplication should be a “last resort”, we saw tech journalists get irrationally excited about a company heavy on marketing but light on functionality promoting its exclusively target-deduplication data protection technology as somehow novel or innovative. Apparently, combining target based deduplication with the need to scale to potentially hundreds of 10Gbit Ethernet ports is both! (In the same way that releasing a 3-wheeled Toyota Corolla for use by the trucking industry would be both ‘novel’ and ‘innovative’.)

Between VMworld and DellEMC World, there were some huge new releases from DellEMC this year. The Integrated Data Protection Appliance (IDPA) was announced at DellEMC World. IDPA is a hyperconverged backup environment – a combined unit delivered to your datacentre with data protection storage, control, reporting, monitoring, search and analytics, that can be stood up and ready to start protecting your workloads in just a few hours. As part of the support programme you don’t have to worry about upgrades – they’re done as an atomic function of the system. And there’s no need to worry about software licensing vs hardware capacity: that’s all handled as a single, atomic function, too. For sure, you can still build your own backup systems, and many people will – but for businesses who want to hit the ground running in a new office or datacentre, or maybe replace some legacy three-tier backup architecture that’s limping along and costing hundreds of thousands a year just in servicing media servers (AKA “data funnel$”), IDPA is an ideal fit.

At DellEMC World, VMware running in AWS was announced – imagine that: moving virtual machines from your on-premises environment out to the world’s biggest public cloud as a simple operation, and managing the two seamlessly. That became a reality later in the year, and NetWorker and Avamar were the first products to support actual hypervisor level backup of VMware virtual machines running in a public cloud.

Thinking about public cloud, Data Domain Virtual Edition (DDVE) became available in both the Azure and AWS marketplaces for easy deployment. Just spin up a machine and get started with your protection. That being said, if you’re wanting to deploy backup in public cloud, make sure you check out my two-part article on why Architecture Matters: Part 1, and Part 2.

And still thinking about cloud – this time specifically about cloud object storage, you’ll want to remember the difference between Cloud Boost and Cloud Tier. Both can deliver exceptional capabilities to your backup environment, but they have different use cases. That’s something I covered off in this article.

There were some great announcements at re:Invent, AWS’s yearly conference, as well. Cloud Snapshot Manager was released, providing enterprise grade control over AWS snapshot policies. (Check out what I had to say about CSM here.) Also released in 2017 was DellEMC’s Data Domain Cloud Disaster Recovery, something I need to blog about ASAP in 2018 – that’s where you can have your on-premises virtual machine backups replicated out into a public cloud and instantiated as a DR copy with minimal resources running in the cloud (e.g., no in-cloud DDVE required).

2017 also saw the release of Enterprise Copy Data Analytics – imagine having a single portal that tracks your Data Domain fleet worldwide and provides predictive analysis about system health, capacity trending and insights into how your business is going with data protection. That’s what eCDA is.

NetWorker 9.2 and 9.2.1 came out as well during 2017 – that saw functionality such as integration with Data Domain Retention Lock, database integrated virtual machine image level backups, enhancements to the REST API, and a raft of other updates. Tighter integration with vRealize Automation, support for VMware image level backup in AWS, optimised object storage functionality and improved directives – the list goes on and on.

I’d be remiss if I didn’t mention a little bit of politics before I wrap up. Australia got marriage equality – I, myself, am finally now blessed with the challenge of working out how to plan a wedding (my boyfriend and I are intending to marry on our 22nd anniversary in late 2018 – assuming we can agree on wedding rings, of course), and more broadly, politics again around the world managed to remind us of the truth to that saying by the French Philosopher, Albert Camus: “A man without ethics is a wild beast loosed upon this world.” (OK, I might be having a pointed glance at Donald Trump over in America when I say that, but it’s still a pertinent thing to keep in mind across the political and geographic spectrums.)

2017 wasn’t just about introducing converged data protection appliances and convergent data protection, but it was also a year where more businesses started to look at hyperconverged administration teams as well. That’s a topic that will only get bigger in 2018.

The DellEMC data protection family got a lot of updates across the board that I haven’t had time to cover this year – Avamar 7.5, Boost for Enterprise Applications 4.5, Enterprise Copy Data Management (eCDM) 2, and DDOS 6.1! Now that I sit back and think about it, my January could be very busy just catching up on things I haven’t had a chance to blog about this year.

I saw some great success stories with NetWorker in 2017, something I hope to cover in more detail into 2018 and beyond. You can see some examples of great success stories here.

I also started my next pet project – reviewing ethical considerations in technology. It’s certainly not going to be just about backup. You’ll see the start of the project over at Fools Rush In.

And that’s where I’m going to leave 2017. It’s been a big year and I hope, for all of you, a successful year. 2018, I believe, will be even bigger again.

Dec 22, 2015
 

As we approach the end of 2015 I wanted to spend a bit of time reflecting on some of the data protection enhancements we’ve seen over the year. There’s certainly been a lot!

Protection

NetWorker 9

NetWorker 9 of course was a big part of the changes in the data protection landscape in 2015, but that’s by no means the only advancement we saw. I covered some of the advances in NetWorker 9 in my initial post about it (NetWorker 9: The Future of Backup), but to summarise just a few of the key new features, we saw:

  • A policy based engine that unites backup, cloning, snapshot management and protection of virtualisation into a single, easy to understand configuration. Data protection activities in NetWorker can be fully aligned to service catalogue requirements, and the easier configuration engine actually extends the power of NetWorker by offering more complex configuration options.
  • Block based backups for Linux filesystems – speeding up backups for highly dense filesystems considerably.
  • Block based backups for Exchange, SQL Server, Hyper-V, and so on – NMM for NetWorker 9 is a block based backup engine. There’s a whole swathe of enhancements in NMM version 9, but the 3-4x backup performance improvement has to be a big win for organisations struggling against existing backup windows.
  • Enhanced snapshot management – I was speaking to a customer only a few days ago about NSM (NetWorker Snapshot Management), and his reaction to NSM was palpable. Wrapping NAS snapshots into an effective and coordinated data protection policy with the backup software orchestrating the whole process from snapshot creation, rollover to backup media and expiration just makes sense as the conventional data storage protection and backup/recovery activities continue to converge.
  • ProtectPoint Integration – I’ll get to ProtectPoint a little further below, but being able to manage ProtectPoint processes in the same way NSM manages file-based snapshots will be a big win as well for those customers who need ProtectPoint.
  • And more! – VBA enhancements (notably the native HTML5 interface and a CLI for Linux), NetWorker Virtual Edition (NVE), dynamic parallel savestreams, NMDA enhancements, restricted datazones and scaleability all got a boost in NetWorker 9.

It’s difficult to summarise everything that came in NetWorker 9 in so few words, so if you’ve not read it yet, be sure to check out my essay-length ‘summary’ of it referenced above.

ProtectPoint

In the world of mission critical databases where impact minimisation on the application host is a must yet backup performance is equally a must, ProtectPoint is an absolute game changer. To quote Alyanna Ilyadis, when it comes to those really important databases within a business,

“Ideally, you’d want the performance of a snapshot, with the functionality of a backup.”

Think about the real bottleneck in a mission critical database backup: the data gets transferred (even best case) via fibre-channel from the storage layer to the application/database layer before being passed across to the data protection storage. Even if you direct-attach data protection storage to the application server, or even if you mount a snapshot of the database at another location, you still have the fundamental requirement to:

  • Read from production storage into a server
  • Write from that server out to protection storage

ProtectPoint cuts the middle-man out of the equation. By integrating storage level snapshots with application layer control, the process effectively becomes:

  • Place database into hot backup mode
  • Trigger snapshot
  • Pull database out of hot backup mode
  • Storage system sends backup data directly to Data Domain – no server involved

That in itself is a good starting point for performance improvement – your database is only in hot backup mode for a few seconds at most. But then the real power of ProtectPoint kicks in. You see, when you first configure ProtectPoint, a block based copy from primary storage to Data Domain storage starts in the background straight away. With Change Block Tracking incorporated into ProtectPoint, the data transfer from primary to protection storage kicks into high gear – only the changes between the last copy and the current state at the time of the snapshot need to be transferred. And the Data Domain handles creation of a virtual synthetic full from each backup – full backups daily at the cost of an incremental. We’re literally seeing backup performance improvements in the order of 20x or more with ProtectPoint.
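To put some purely illustrative numbers on that: take a 10TB database with a 2% daily change rate. Once the initial block copy has completed, only around 200GB has to move from the array to the Data Domain each day, yet every one of those backups presents as a full thanks to the virtual synthetic processing. Compare that with dragging the entire 10TB up through the database server and out to protection storage every night, and it’s easy to see where those order-of-magnitude improvements come from.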

There are some great videos explaining what ProtectPoint does and the sorts of problems it solves, and even its integration into NetWorker 9.

Database and Application Agents

I’ve been in the data protection business for nigh on 20 years, and if there’s one thing that’s remained remarkably consistent throughout that time it’s that many DBAs are unwilling to give up control over the data protection configuration and scheduling for their babies.

It’s actually understandable for many organisations. In some places it’s entrenched habit, and in those situations you can integrate data protection for databases directly into the backup and recovery software. For other organisations, though, there are complex scheduling requirements based on batch jobs, data warehousing activities and so on which can’t possibly be controlled by a regular backup scheduler. Those organisations need to initiate the backup job for a database not at a particular time, but when it’s the right time – and based on the amount of data or the amount of processing, that could be a highly variable time.

The traditional problem with backups for databases and applications being handled outside of the backup product is the likelihood of the backup data being written to primary storage, which is expensive. It’s normally more than one copy, too. I’d hazard a guess that 3-5 copies is the norm for most database backups when they’re being written to primary storage.

The Database and Application agents for Data Domain allow a business to sidestep all these problems by centralising the backups for mission critical systems onto highly protected, cost effective, deduplicated storage. The plugins work directly with each supported application (Oracle, DB2, Microsoft SQL Server, etc.) and give the DBA full control over managing the scheduling of the backups while ensuring those backups are stored under management of the data protection team. What’s more, primary storage is freed up.
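To make that concrete, here’s a rough sketch of what DBA-controlled scheduling can look like with the Oracle agent. Everything environment-specific in it – the SBT library path, the storage unit name and the Data Domain hostname – is an illustrative assumption rather than a documented value, so take the real channel parameters from the agent’s own documentation; the point is simply that the DBA kicks this off whenever the batch run finishes, and the backup still lands on deduplicated protection storage under the data protection team’s management:

#!/bin/sh
# Hypothetical post-batch backup wrapper, run from the DBA's own scheduler
# once the nightly warehouse load has finished.
# NOTE: the SBT library path, STORAGE_UNIT and BACKUP_HOST values below are
# illustrative assumptions - substitute the values from your agent install.
rman target / <<'EOF'
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE SBT_TAPE
    PARMS 'SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so, ENV=(STORAGE_UNIT=oracle_su, BACKUP_HOST=dd01.lab.local)';
  BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL c1;
}
EOF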

Formerly known as “Data Domain Boost for Enterprise Applications” and “Data Domain Boost for Microsoft Applications”, the Database and Application Agents respectively reached version 2 this year, enabling new options and flexibility for businesses. Don’t just take my word for it though: check out some of the videos about it here and here.

CloudBoost 2.0

CloudBoost version 1 was released last year and I’ve had many conversations with customers interested in leveraging it over time to reduce their reliance on tape for long term retention. You can read my initial overview of CloudBoost here.

2015 saw the release of CloudBoost 2.0. This significantly extends the storage capabilities for CloudBoost, introduces the option for a local cache, and adds the option for a physical appliance for businesses that would prefer to keep their data protection infrastructure physical. (You can see the tech specs for CloudBoost appliances here.)

With version 2, CloudBoost can now scale to 6PB of cloud managed long term retention, and every bit of that data pushed out to a cloud is deduplicated, compressed and encrypted for maximum protection.

Spanning

Cloud is a big topic, and a big topic within that big topic is SaaS – Software as a Service. Businesses of all types are placing core services in the Cloud to be managed by providers such as Microsoft, Google and Salesforce. Office 365 Mail is proving very popular for businesses who need enterprise class email but don’t want to run the services themselves, and Salesforce is probably the most likely mission critical SaaS application you’ll find in use in a business.

So it’s absolutely terrifying to think that SaaS providers don’t really back up your data. They protect their infrastructure against physical faults, and against their own faults, but their SLAs around data deletion are pretty straightforward: if you deleted it, they can’t tell whether it was intentional or an accident. (And if it was an intentional delete they certainly can’t tell whether it was authorised or not.)

Data corruption and data deletion in SaaS applications is far too common an occurrence, and for many businesses sadly it’s only after that happens for the first time that people become aware of what those SLAs do and don’t cover them for.

Enter Spanning. Spanning integrates with the native hooks provided in Salesforce, Google Apps and Office 365 Mail/Calendar to protect the data your business relies on so heavily for day to day operations. The interface is dead simple, the pricing is straight forward, but the peace of mind is priceless. 2015 saw the introduction of Spanning for Office 365, which has already proven hugely popular, and you can see a demo of just how simple it is to use Spanning here.

Avamar 7.2

Avamar got an upgrade this year, too, jumping to version 7.2. Virtualisation got a big boost in Avamar 7.2, with new features including:

  • Support for vSphere 6
  • Scaleable up to 5,000 virtual machines and 15+ vCenters
  • Dynamic policies for automatic discovery and protection of virtual machines within subfolders
  • Automatic proxy deployment: This sees Avamar analyse the vCenter environment and recommend where to place virtual machine backup proxies for optimum efficiency. Particularly given the updated scaleability in Avamar for VMware environments taking the hassle out of proxy placement is going to save administrators a lot of time and guess-work. You can see a demo of it here.
  • Orphan snapshot discovery and remediation
  • HTML5 FLR interface

That wasn’t all though – Avamar 7.2 also introduced:

  • Enhancements to the REST API to cover tenant level reporting
  • Scheduler enhancements – you can now define the start dates for your annual, monthly and weekly backups
  • You can browse replicated data from the source Avamar server in the replica pair
  • Support for DDOS 5.6 and higher
  • Updated platform support including SLES 12, Mac OS X 10.10, Ubuntu 12.04 and 14.04, CentOS 6.5 and 7, Windows 10, VNX2e, Isilon OneFS 7.2, plus a 10GbE NDMP accelerator

Data Domain 9500

Already the market leader in data protection storage, EMC continued to stride forward with the Data Domain 9500, a veritable beast. Some of the quick specs of the Data Domain 9500 include:

  • Up to 58.7 TB per hour (when backing up using Boost)
  • 864TB usable capacity for the active tier, up to 1.7PB usable when an extended retention tier is added. That’s the actual amount of physical storage; with deduplication, the logical protection capacity can extend well into the multiple-PB range. The spec sheet gives some details based on a mixed environment where the protected data might be anywhere from 8.6PB to 86.4PB
  • Support for traditional ES30 shelves and the new DS60 shelves.

Actually it wasn’t just the Data Domain 9500 that was released this year from a DD perspective. We also saw the release of the Data Domain 2200 – the replacement for the SMB/ROBO DD160 appliance. The DD2200 supports more streams and more capacity than the previous entry-level DD160, being able to scale from a 4TB entry point to 24TB raw when expanded to 12 x 2TB drives. In short: it doesn’t matter whether you’re a small business or a huge enterprise: there’s a Data Domain model to suit your requirements.

Data Domain Dense Shelves

The traditional ES30 Data Domain shelves have 15 drives. 2015 also saw the introduction of the DS60 – dense shelves capable of holding sixty disks. With support for 4TB drives, that means a single 5RU Data Domain DS60 shelf can hold as much as 240TB in drives.

The benefits of high density shelves include:

  • Better utilisation of rack space (60 drives in one 5RU shelf vs 60 drives in 4 x 3RU shelves – 12 RU total)
  • More efficient for cooling and power
  • Scale as required – each DS60 takes 4 x 15 drive packs, allowing you to start with just one or two packs and build your way up as your storage requirements expand

DDOS 5.7

Data Domain OS 5.7 was also released this year, and includes features such as:

  • Support for DS60 shelves
  • Support for 4TB drives
  • Support for ES30 shelves with 4TB drives (DD4500+)
  • Storage migration support – migrate those older ES20 style shelves to newer storage while the Data Domain stays online and in use
  • DDBoost over fibre-channel for Solaris
  • NPIV for FC, allowing up to 8 virtual FC ports per physical FC port
  • Active/Active or Active/Passive port failover modes for fibre-channel
  • Dynamic interface groups are now supported for managed file replication and NAT
  • More Secure Multi-Tenancy (SMT) support, including:
    • Tenant-units can be grouped together for a tenant
    • Replication integration:
      • Strict enforcing of replication to ensure source and destination tenant are the same
      • Capacity quota options for destination tenant in a replica context
      • Stream usage controls for replication on a per-tenant basis
    • Configuration wizard support for SMT
    • Hard limits for stream counts per Mtree
    • Physical Capacity Measurement (PCM) providing space utilisation reports for:
      • Files
      • Directories
      • Mtrees
      • Tenants
      • Tenant-units
  • Increased concurrent Mtree counts:
    • 256 Mtrees for Data Domain 9500
    • 128 Mtrees for each of the DD990, DD4200, DD4500 and DD7200
  • Stream count increases – DD9500 can now scale to 1,885 simultaneous incoming streams
  • Enhanced CIFS support
  • Open file replication – great for backups of large databases, etc. This allows the backup to start replicating before it’s even finished.
  • ProtectPoint for XtremIO

Data Protection Suite (DPS) for VMware

DPS for VMware is a new socket-based licensing model for mid-market businesses that are highly virtualized and want an effective enterprise-grade data protection solution. Providing Avamar, Data Protection Advisor and RecoverPoint for Virtual Machines, DPS for VMware is priced based on the number of CPU sockets (not cores) in the environment.

DPS for VMware is ideally suited for organisations that are either 100% virtualised or just have a few remaining machines that are physical. You get the full range of Avamar backup and recovery options, Data Protection Advisor to monitor and report on data protection status, capacity and trends within the environment, and RecoverPoint for a highly efficient journaled replication of critical virtual machines.

…And one minor thing

There was at least one other bit of data protection news this year, and that was me finally joining EMC. I know in the grand scheme of things it’s a pretty minor point, but after years of wanting to work for EMC it felt like I was coming home. I had worked in the system integrator space for almost 15 years and have a great appreciation for the contribution integrators bring to the market. That being said, getting to work from within a company that is so focused on bringing excellent data protection products to the market is an amazing feeling. It’s easy from the outside to think everything is done for profit or shareholder value, but EMC and its employees have a real passion for their products and the change they bring to IT, business and the community as a whole. So you might say that personally, me joining EMC was the biggest data protection news for the year.

In Summary

I’m willing to bet I forgot something in the list above. It’s been a big year for Data Protection at EMC. Every time I’ve turned around there’s been new releases or updates, new features or functions, and new options to ensure that no matter where the data is or how critical the data is to the organisation, EMC has an effective data protection strategy for it. I’m almost feeling a little bit exhausted having come up with the list above!

So I’ll end on a slightly different note (literally). If after a long year working with or thinking about Data Protection you want to chill for five minutes, listen to Kate Miller-Heidke’s cover of “Love is a Stranger”. She’s one of the best artists to emerge from Australia in the last decade. It’s hard to believe she did this cover over two years ago now, but it’s still great listening.

I’ll see you all in 2016! Oh, and don’t forget the survey.

What’s new in 8.2?

Jun 30, 2014
 

NetWorker 8.2 entered Directed Availability (DA) status a couple of weeks ago. Between finishing up one job and looking for a new one, I’d been a bit too busy to blog about 8.2 until now, so here goes…


First and foremost, NetWorker 8.2 brings some additional functionality to VBA. VBA was introduced as the new backup process in NetWorker 8.1. Closely integrating Avamar backup technologies, VBA leverages a special, embedded virtual Avamar node to achieve high performance backup and recovery. Not only can policies defined in NMC for VBA be assigned by a VMware administrator in the vSphere Web Client – image level backup and recovery operations can be executed there, too. Of course, regularly scheduled backups are still controlled by NetWorker.

That was the lay of the land in 8.1 – 8.2 reintroduces some of the much-loved VADP functionality, allowing for a graphical visualisation map of the virtual environment from within NMC.

Continuing that Avamar/VMware integration, NetWorker 8.2 also gets something that Avamar 7 administrators have had for a while – instant-on recoveries when backups are performed to Data Domain. There’s also an emergency restore option to pull a VM back to an ESX host even if vCenter is unavailable, and greater granularity of virtual machine backups – individual VMDK files can be backed up and restored if necessary. For those environments where VMware administrators aren’t meant to be starting backups outside of the policy schedules, there’s also the option now to turn off VBA Adhoc Backups in NMC.

Moving on from VMware, there’s some fantastic snapshot functionality in NetWorker 8.2. This is something I’ve not yet had a chance to play around with, but by all accounts, it’s off to a promising start and will continue to get deeper integration with NetWorker over time. Currently, NetWorker supports integrating with snapshot technologies from Isilon, VNX, VNX2, VNX2e and NetApp, though the level of integration depends on what is available from each array. This new functionality is called NSM for NAS (NetWorker Snapshot Management).

The NSM integration allows NAS hosts to be integrated as clients within NetWorker for policy management, whilst still working from the traditional “black box” scenario of NAS systems not getting custom agents installed. There’s a long list of functionality, including:

  • Snapshot discovery:
    • Finding snapshots taken on the NAS outside of NetWorker’s control (either before integration, or by other processes)
    • Facilitate roll-over and recovery from those snapshots (deleting isn’t available)
    • Available as a scheduled task or via manual execution
  • Snapshot operations:
    • Create snapshots
    • Replicate snapshots
    • Move snapshots out to other storage (Boost, tape etc) using NDMP protocols
    • Lifecycle management of snapshots and replicas via retention policies
    • Recover from snapshots

Data Domain Boost integration gets a … well, boost, with support for Data Domain’s secure multi-tenancy. This supports scaling for large systems designed for service providers, with up to 512 Boost devices supported per secure storage unit on the Data Domain. While previously there was a requirement for a single Data Domain Boost user account across all Data Domain devices, this now allows for better tightening of access.

One of my gripes with BBB (Block Based Backup) in NetWorker 8.1 has been addressed in 8.2 – if you’re stuck using ADV_FILE devices rather than Data Domain, you can now perform BBB even if the storage node being written to is not Windows. Another time-saving option that was introduced in 8.1, Parallel Save Stream (PSS), has been extended to support Windows systems, and has also been updated to support Synthetic and Virtual Synthetic Fulls. In 8.1 it had only supported Unix/Linux, and only in traditional backup models.

Continuing the trend towards storage nodes being seen as a fluid rather than locked resource mapping, there’s now an autoselect storage node option, which allows NetWorker to select the storage node itself during backup and recovery operations. If this is enabled, it will override any storage node preferences assigned to individual clients, and NetWorker looks for local storage nodes wherever possible.

There are a few things that have left NetWorker in 8.2, all of them understandable: support for Windows XP, Windows 2003 and the Change Journal Manager. If you still need to protect Windows XP or Windows Server 2003, be sure to keep your installers for 8.1 and lower client software around.

There’s some documentation updates in NetWorker 8.2 as well:

  • Server Disaster Recovery and Availability Best Practices – This describes the disaster recovery process for the NetWorker server, including best practices for ensuring you’re prepared for a disaster recovery situation.
  • Snapshot Management for NAS Devices Integration – This documents the aforementioned NSM for NAS new feature of NetWorker.
  • Upgrading to NetWorker 8.2 from a Previous Release – This covers off in fairly comprehensive detail how you can upgrade your NetWorker environment to 8.2.

In years gone by I’ve found that documentation updates have been a lagging component of NetWorker, but that’s long since stopped being the case. With each new version of NetWorker we’re seeing either entirely new documents or substantially enhanced documentation (or both). This speaks volumes about the commitment EMC has to NetWorker.

Jan 16, 2014
 

I’m rather pleased to say I’ve been included for the second year running in the EMC Elect programme. Last year was the first time the programme was run, and while at times my schedule prevented full participation, I’ve got to say it was an excellent community to be part of.

It’s fair to say I stay fairly focused on backup and data protection in general – it’s a niche area within a niche area, which sometimes creates interesting headaches, but one thing I can count on is that EMC remains committed to getting as much information out into their various product communities as possible. It’s almost invariably the case that if you look, you’ll find it.

Elect gives me the opportunity to see more than just backup and recovery. Last year, for instance, I was lucky enough to see the VNX MCx Speed2Lead launch live, in Milan. The trip itself was as fast as the products at the launch … from Australia to Italy and back again in under a week made for a lot of time on planes. A lot. But it was worth it to see first-hand how invested EMC are in their products. Yes, there was criticism of the event, but I stand by my response to that criticism: the storage industry as a whole is too often seen as the “boring” part of the entire IT industry, and it’s refreshing to see a company encouraging their employees, their users and their partners to be proud of what they’re doing.

I’m looking forward to seeing what EMC Elect 2014 brings, and hope to engage a lot more than I found time for last year – the rewards of being connected to a community of experts are obvious!

***

To see a comprehensive list of the EMC Elect 2014 members (well, certainly those on Twitter), check out this EMC Corporate list.

Addendum – the full official list is over at the EMC Community Network.

Speed2Lead: Launch, not hypegasm

Sep 07, 2013
 

I recently went to the EMC Speed2Lead/SpeedToLead product launch in Milan.

After the launch, I saw that The Register had published a piece by Martin Glassborow, aka @Storagebod, called Snide hashtags, F1 cars, death by PowerPoint: I’m sick of EMC hypegasms.

I’m a fan of Martin; I’ve followed him on Twitter for some time and love his blog on account of his frank honesty from the customer perspective.

But that’s not to say we both always agree on everything, and this is one of those times when I disagree with him – mostly.

The standard disclaimers apply – EMC paid for my travel and accommodation. If you think that makes me an EMC stooge, then consider as well that the travel was around 33,000km in economy class and I contracted a serious infection during my trip as a result of all that economy class seating. (Indeed, as I write this, the clock is ticking down on a decision as to whether I need to be hospitalised. I’d like to think those two things balance each other out.)

So what were Martin’s beefs? I’ll quote from his article:

It’s [sic] ongoing teaser campaign started off badly when the storage giant’s marketing types put up a “sneak preview” video and decided to have a dig at their lead competitor – then the flacks started tweeted [sic] using the hashtag #NotAppy.

OK, there are parts here I’m inclined to agree with. I think the vendor cat-fights in any industry get silly. As far as such heated discussions go though, this one did at least seem fairly restrained. But I’d have preferred not to have seen a “NotAppy” hashtag throughout the campaign. (Regardless, the storage industry has kept things fairly tame for the last couple of years – unlike, say, the smartphone industry, which often resembles something closer to an all-out brawl.)

Next quote from Martin:

EMC is just very bad at this sort of thing; speeds, feeds, marketing mumbo-jumbo linked in with videos and stunts that trash competitors, bore the audience and add little value, in my honest opinion. But all with the highest production values.

I’ve been in EMC presentations that were full of speeds and feeds. I don’t honestly think this was one of those presentations. In fact, after the presentation, the other Elect attendees and I got to speak with Dennis Vilfort, who spoke passionately about how uninteresting pure speeds and feeds are to customers – and why EMC focused on a simple metric: how many virtual machines you can run on the new, revamped VNX line. People who don’t actually sell storage often don’t care about IOPS, and so while EMC did mention IOPS, it played second fiddle to their “every customer will understand this” message: VMs.

It’s true, there are other workloads people can put on a storage system, but by and large (using that classic 80/20 rule), EMC are finding that people aren’t buying silo/isolated storage any more – i.e., the majority of companies aren’t buying an array for Oracle, another array for file serving and yet another for virtual machine hosting.

What’s more, because so many organisations are pushing towards the goal of 100% virtualisation*, workloads on arrays are shifting considerably. In the “olden days”, with relatively few applications or systems leveraging an array, workloads were easier and more predictable – consider though an array being leveraged to provide say, storage for 1,000 virtual machines. As per the way virtualisation works, there’ll be a huge mix of applications and services provided by those virtual machines, and in the same way CPU and memory usage will be combined across all of them, so too will storage performance.

Sum of all workloads

(Excuse the fuzzy photo: the memory card on my camera failed part way through the launch and I had to resort to mobile phone photos.)

So, talking “system X with Y resources can run Z VMs” isn’t speeds-and-feeds, it’s bricks-and-mortar – it’s the building blocks of storage utilisation in a hell of a lot of organisations. Not in every organisation, of course, but I’d be willing to bet in 80% of them at least.

To me, there was a subtle (actually, at times not-so-subtle) message to the EMC launch: public cloud doesn’t have to be seen as the only end-game in the IT industry. Amazon et al certainly want you to believe that, but given what’s come about regarding the NSA and GCHQ, etc., over the last few months, big-server public cloud currently has the stench of 3-week unwashed underwear amongst anyone who gives a damn about data security.

Public cloud, after all, is built on trust, and trust has been fairly heavily eroded amongst those who care strongly about security.

That being said, the concept behind cloud is starting to gain momentum, and over the past eighteen months to two years, I’ve come a long way on my opinion of cloud – particularly when it comes to management and provisioning. This is, after all, the start of the commoditisation of the industry – self service point and click allocation of storage, compute and other services. So, it’s clear EMC are aiming all this talk of VMs at businesses who are switching to a private cloud model internally, and those businesses that are hosting private cloud environments.

Cloud is good, they’re saying, as long as you can trust it. But I digress…

OK, there were definitely plenty of PowerPoint slides in the launch, and by comparison there are absolutely no PowerPoint slides in an Apple launch … unless of course you count all those Keynote slides. Being brutally honest, you do need to accept that a product launch is going to have slides. And slides. And a few more slides. But as long as you don’t fall into any of the Life After Death by PowerPoint traps, you’re heading in the right direction. Also, having watched some of the EMC speakers prepare for the launch in Milan the day before, one thing was abundantly clear: they were focused not only on the flow of the presentation, but on simplifying content.

Was every slide perfect? No. Was every slide full of speeds and feeds? No.

And here’s the point where I fully disagree with Martin’s thoughts:

So what did the event feel like? It felt like an internal knees-up, a shindig for EMC bods to high-five themselves while their customers wait patiently. This felt more like an EMC party of aeons ago along with a smattering of cheerleaders from social networks.

You know what it felt like to me? An honest-to-goodness real product launch.

When I saw Martin’s article being tweeted around by The Register this was my first response:

Q: What’s the problem with storage vendors doing big launches? Why are some tech areas *verboten* for a bit of show & spectacle?

OK, I’m not the oldest person in IT, and I don’t have the greatest longevity in it either, but I have been working in the back-end of the IT industry my entire career.

Systems administration. Backup and recovery. Storage.

These aren’t glamorous positions. Even consulting isn’t glamorous per se if the actual topic is considered unglamorous. There’s a worldwide “System Administrator Appreciation Day” which I’m fairly certain was started by system administrators and is unknown to most of the world’s workers. There’s a backup day which, again, was self-declared and mostly unknown. I’m not aware of any storage day.

Even people in the industry talk about it being a fairly thankless one. In many companies no-one really thinks much of the system, storage or backup administrators until something goes wrong. And even then, Helpdesk is still the face of IT.

So because our part of the IT industry is often considered to be “un”glamorous, do we have to stick to that notion? Does it have to be simple lit-room launches with a whiteboard and a product assembly demonstration?

Because in a world with increasing commoditisation of IT, where startups promising the world (and mostly just hoping someone will buy them for a few billion bucks) hire people like it’s a talent show, and IT/being a geek is considered to be more desirable, the best way to hire good fresh talent is to make your segment of the industry look as boring as possible.

And because the best way to thank your employees for doing a fantastic job and putting in a lot of hard work is to say “We’ll make up a tech specs sheet for customers in a while. Have a great weekend!”

The code for VNX was substantially rewritten and the resulting machine is a significant boost on the previous generation. Don’t the people involved in that deserve to feel like their employer is proud of them? Don’t they deserve to see a bit of spectacle so they can say “I was part of that!”?

Yes, I was at the launch and yes, EMC paid for my trip. But I sat there during the presentation looking around at the spectacle and effort EMC had put into it thinking “after all these years in the industry, it’s nice to see genuine, palpable enthusiasm about storage”.

Storage is boring. Backup is boring.

Only if we keep letting it be like that. Kudos to EMC for putting on a show that demonstrates the level of pride they take in their products, and the level of appreciation they have for their staff. Kudos to EMC for rejecting the notion that storage has to be boring.


* Regarding 100% virtualisation: As an end-goal, I still don’t think it’s an inevitable one. Some companies will be able to achieve it, and others will have reasons not to. The technical reasons behind it are evaporating for at least a fair percentage of businesses, however.

The quickening

Sep 04, 2013
 


I recently attended EMC Forum, where some fairly impressive figures were rolled out relating to what EMC has spent on R&D and acquisitions over the last 10 years*. Backed up by their corporate profile, those figures are:

Our differentiated value stems from our sustained and substantial investment in research and development, a cumulative investment of $16.5 billion since 2003. To strengthen our core business and extend our market to new areas, EMC has invested $17 billion in acquisitions over the same period and has integrated more than 70 technology companies.

So over ten years, that’s almost 35 billion dollars invested by EMC into products, technology, skills and innovation. No matter how you slice that, it’s an impressive commitment towards capabilities growth and leadership in data.

EMC demonstrates that commitment yet again with the Speed2Lead launch in Milan.

I’ll be in Milan for the launch as a guest of EMC**, and I’m looking forward to getting some additional details of the new VNX range in particular – but the information EMC has whet my appetite with is pretty impressive so far.

XtremSW Cache 2.0

v2 of XtremSW Cache sees it really take off via:

  1. Integration with EMC Arrays;
  2. Working with:
    • IBM AIX;
    • Server flash;
    • Oracle RAC (coming in October);
    • VMware vCenter Integration;
  3. XtremSW Management Center will provide strong control and efficiency options – a single point of management when deploying multiple cache instances.

The EMC array integration I think is going to prove to be particularly popular:

  • VMAX:
    • Strong support out of the box for VMAX integration – XtremSW Cache will be manageable from within Unisphere so that administrators can choose which LUNs should be cached based on trending analysis;
    • Prefetching entire tracks – so if you’ve got a read-intensive application leveraging VMAX storage you’ll get a considerable performance boost – IO rates can increase by 25%;
    • Cache coordination (optimised read miss) moves the read cache tier to the host. That allows the array of course to use resources elsewhere, which can lead to increasing IOPS by as much as 2.5 times.
  • VNX:
    • For the VNX series, too, XtremSW Cache 2.0 starts with Unisphere Remote, which will recommend LUNs to cache based on trending information. (EMC say over time the integration between VNX and XtremSW Cache will be on par with the VMAX support);
    • Additionally, you’ll be able to monitor performance and health, as well as the configuration for XtremSW Cache, all from one location.

VNX

VNX has jumped to being highly multi-core aware in its processing and capabilities. While VNX systems have used multi-core CPUs for a while, the architecture wasn’t taking full advantage of those powerful cores. In particular, depending on the workload involved you might see one or two cores particularly heavily utilised, and others mostly idle. This new design, MCx, sees EMC VNX arrays fully utilising up to 32 cores, with significant boosts to performance: full symmetric multi-processing. Some of the performance improvements being cited are going to be very popular:

  • 6.3x faster for host-side IOPS;
  • 4.9x better on file serving IOPS;
  • Capable of delivering 1 million IOPS at sub-1ms latency.

No, there are no typos in that last point.

Capacity, Performance or Both?

  • With support for up to 1,500 storage devices (SSD, HDD or any mix you want), VNX is offering serious capacity and performance capabilities. Depending on your layout of HDDs vs SSDs, your performance and capacity will scale considerably in one of several directions – the more hard drives, the closer you’ll get to maximum capacity; the more SSDs, the closer you’ll get to maximum IOPS;
  • Of course, if you want to get maximum bang for buck with Flash (and FAST), and your data profile supports it, deduplication may very well be a good thing to turn on;
  • VNX is moving to active-active storage-processor capabilities. That’s starting with traditional, thickly provisioned LUNs, but will over time move to encompass the rest of the VNX functionality. This is a big change – and a big win for customers who want higher performance but for pricing considerations need to stay within the VNX range;
  • The new VNX 8000 being released in Milan is an absolute beast; with scaleability to 6PB and support for running a workload of 8,000 virtual machines, it’s going to be a major boost to datacentres and cloud environments.

There’s more, of course – much more. I’ll be particularly looking forward to some discussions with EMC folk regarding the performance increases we’re likely to see out of the new VNX MCx architecture when it comes to NDMP.

EMC AppSync

Demonstrating its continuing focus on protection and recovery, EMC’s AppSync system offers a new storage-centric approach to protecting critical applications. With a configuration system based around SLAs, you can define AppSync protection strategies based on Gold (concurrent remote and local protection), Silver (remote protection) and Bronze (local protection). Of course, you can change those names to suit your environment, but Gold/Silver/Bronze often works quite well to define protection levels.

The advantage of that of course is that once you’ve got those SLA policies defined, deciding the protection strategy for an application comes down to picking whether you want Gold, Silver or Bronze…

The applications covered by AppSync are an important collection:

  • SQL Server 2008;
  • VMware NFS file replication;
  • Exchange 2010 and 2013 (EMC cite being able to protect a 22TB Exchange database in under 5 minutes);
  • VMware Datastore Protection.

[not closing] Thoughts

A few years ago during a blogger forum organised by SNIA-AU, I was part of a group of people who visited EMC, IBM, HDS and NetApp. One of the most telling things said during the day was from Clive Gold at EMC Sydney. It was such a simple statement that I’m trusting myself to quote him verbatim after all this time:

People buy storage for capacity, but they upgrade storage for performance.

Developing big storage is almost a no-brainer. Think about it: your enterprise could attach 12TB of DAS USB-3 storage to most servers for less than $1000 per server. Totally crazy stuff, of course – the management overhead alone would be a nightmare. But that’s the thing: if you’re not worried about performance, storage is easy.

Speed2Lead shows me that Clive Gold wasn’t just speaking from a marketing statement – it’s something EMC fervently believes in: capacity is one thing, but delivering on performance is more important, because it’s performance that customers notice. As a storage admin, you’re not going to get any pats on the back that you’ve got 1PB of storage free and unallocated if the 500TB you do have allocated can’t service the IOPS requirements of the business.

EMC are calling this Speed2Lead … and they’re certainly speaking the truth.

Stay tuned for more. Following the event when I’ve had more time to digest the information further, discuss it with colleagues and customers, I’ll be posting some additional details.

For now, check out EMC’s launch page for Speed2Lead.


* Some companies tend to be dismissive of money spent on acquisitions, but I have a different perspective – companies that aren’t willing to acquire are steadfastly following the “not invented here” approach. In order to survive and grow, a company has to be willing to both invest in research/development and other companies that provide synergistic product sets or skills.

** Disclaimer: EMC flew me from Melbourne to Milan via Qantas and Emirates economy class, and put me up in the Westin Hotel. There were some transfers and a couple of meals included.
