DDOS 7.3 loves the Cloud

Data Domain isn’t just hardware – the hardware-based systems pack a punch, of course, but core to the platform’s functionality is the Data Domain Operating System (DDOS) software.

This is the third release in the DDOS 7 tree, so we’ve already seen some great features delivered along the way – this release adds support for Vormetric DSM for encryption-at-rest key management, changes to the segment analysis processes, and some additional security updates.

But the real boost (no pun intended) comes with Data Domain Virtual Edition for the public cloud – and in particular, AWS and GCP. (Azure got some love as well, with support for its premium SSD option.)

In addition to GCP getting support for DDVE deployment from the marketplace, the DDVE solution for AWS and GCP has now been enhanced to support 256TB deployments.

Up until now, DDVE in the public cloud has been limited to 96TB. That’s not in any way shabby, of course – a 96TB DDVE with even just 20:1 deduplication is going to give you almost 2PB of logical backup storage.

But 256TB gives you a spectacular level of data protection support in the public cloud. Again, at a simple 20:1 deduplication ratio, a 256TB DDVE will yield 5PB of logical backup storage.

(One of my customers is currently getting a little over 60:1 in the public cloud. So that would equate to over 15PB of logical storage.)
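If you want to play with those numbers yourself, here’s a trivial back-of-the-envelope sketch – the ratios are just the figures quoted above, not guarantees for any particular workload:

```python
def logical_capacity_tb(physical_tb: float, dedupe_ratio: float) -> float:
    """Logical (front-end) capacity implied by a physical pool and a deduplication ratio."""
    return physical_tb * dedupe_ratio

# The figures quoted above: 96TB and 256TB pools at 20:1, and 256TB at ~60:1.
for physical, ratio in [(96, 20), (256, 20), (256, 60)]:
    logical = logical_capacity_tb(physical, ratio)
    print(f"{physical}TB at {ratio}:1 -> {logical:,.0f}TB logical (~{logical / 1024:.1f}PB)")
```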

The importance of a larger deduplication pool can’t be overstated. You’ll generally achieve higher storage efficiency with a single large deduplication pool; after all, the larger the pool, the more existing data there is to compare incoming content against.
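You can see the effect in miniature with a purely illustrative sketch – this has nothing to do with how DDOS chunks or fingerprints data internally – that counts unique chunks when three backup streams share one pool versus getting a pool each:

```python
import hashlib
import random

def unique_chunks(streams):
    """Count distinct chunk fingerprints stored by one deduplication pool."""
    return len({hashlib.sha256(chunk).hexdigest() for stream in streams for chunk in stream})

# Hypothetical workload: three backup streams that share a large amount of common data.
random.seed(42)
common = [random.randbytes(4096) for _ in range(800)]
streams = [common + [random.randbytes(4096) for _ in range(200)] for _ in range(3)]

one_big_pool = unique_chunks(streams)                     # a single pool sees every stream
three_pools = sum(unique_chunks([s]) for s in streams)    # each pool only sees its own stream
print(f"single pool stores {one_big_pool} chunks; split pools store {three_pools} in total")
```

The shared chunks are stored once in the single pool, but once per pool when the streams are split – which is exactly the effect a bigger DDVE gives you at scale.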

There is another benefit, too – a reduction in virtual machine resources. A single 256TB DDVE will use less memory and CPU – and therefore incur lower in-cloud compute charges – than running 3 x ~86TB DDVEs. So you’re not only going to achieve higher deduplication ratios and lower object storage costs; you’re also going to get a lower TCO for compute and memory resources. Finally, that flows through to any data transfer costs your configuration incurs (e.g., if you’re backing up or replicating over a network link that charges for data). Again, that’s because a bigger deduplication pool gives you a better chance of reducing the amount of data that needs to be sent for backup or replication purposes.
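To put a rough number on the compute side of that, here’s the kind of back-of-the-envelope comparison you’d run with your own provider’s pricing – the hourly rates below are hypothetical placeholders, not DDVE sizing guidance or actual AWS/GCP pricing:

```python
HOURS_PER_MONTH = 730

# Hypothetical hourly compute rates - substitute the instance types and pricing
# your DDVE sizing and cloud provider actually call for.
rate_one_256tb_instance = 3.20   # assumed: one larger instance behind a 256TB DDVE
rate_one_86tb_instance = 1.40    # assumed: one smaller instance behind a ~86TB DDVE

single_monthly = rate_one_256tb_instance * HOURS_PER_MONTH
split_monthly = 3 * rate_one_86tb_instance * HOURS_PER_MONTH

print(f"1 x 256TB DDVE compute: ${single_monthly:,.0f}/month")
print(f"3 x ~86TB DDVE compute: ${split_monthly:,.0f}/month")
```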

This is a real win-win for businesses seeking an efficient data protection platform within the public cloud.
