Deduplicated VM Cloning and Backup

I mentioned earlier that I’d been thinking a lot about the consequences of using a deduplicated store for holding live VMs. This thinking came from hearing the SimpliVity pitch quite a few times and also hearing the questions that came from the audience. To be clear, this isn’t something that has to be unique to SimpliVity, and SimpliVity did not ask me to write this post.


A deduplicated store contains a collection of unique data blocks and sets of metadata. The metadata allows those unique blocks to be assembled into things. The things we’re working with here are the files that make up a VM, the largest of which are the disk files. At the simplest level, the metadata is simply a list of which unique blocks are needed, in what order, to make up the disk. When a VM needs to read or write data, the dedupe store uses the metadata to work out which unique block needs to be accessed. Typically the metadata is a fraction of a percent of the size of the data it represents. A 100GB disk file, half full of data, may be represented by 30GB of unique blocks and 200MB of metadata. The combination of metadata and unique blocks is what the store needs to represent the VM disk at a point in time. The actual size of the metadata is very dependent on the dedupe techniques in use, so I’ve made up a number that is fairly high. There is a lot of secret sauce involved in reducing the size of the metadata. I don’t know SimpliVity’s secret sauce, or any other vendor’s, so I’ll use my high number. You should understand that different vendors may have much smaller metadata sizes and more efficient ways of managing metadata.
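
To make that concrete, here is a minimal sketch of the idea in Python. It is purely illustrative, not any vendor’s implementation; the fixed 4KB block size, SHA-256 hashing, and in-memory dictionaries are all my own simplifying assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size; real products use their own schemes


class DedupeStore:
    """Toy deduplicated store: a pool of unique blocks plus per-disk metadata."""

    def __init__(self):
        self.blocks = {}    # block hash -> block contents, stored only once
        self.metadata = {}  # disk name -> ordered list of block hashes

    def write_disk(self, name, data):
        """Split the disk image into blocks and store only the unique ones."""
        hashes = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # only new unique blocks consume space
            hashes.append(digest)
        self.metadata[name] = hashes  # the metadata is just this ordered list

    def read_disk(self, name):
        """Reassemble the disk from its metadata and the unique block pool."""
        return b"".join(self.blocks[h] for h in self.metadata[name])
```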

For the storage array, cloning a VM on a dedupe store is simply a matter of making a copy of the metadata. Since the metadata points to unique data blocks that the store already has, there is no need to copy any data blocks. This means that to clone our 100GB VM the array only needs to copy the 200MB of metadata. That is a near-instant action, particularly if the metadata is on solid-state storage. A single SSD could sustain several of these VM clones per second. Let that sink in. One SSD provides enough performance to support cloning a 100GB VM a hundred times per minute. Without deduplication, we’d need an array capable of 100 x 50GB = 5TB per minute of throughput. The non-deduped array would also need 5TB of capacity to hold the 100 VMs, while the deduped array just needs space for 100 sets of metadata, 100 x 200MB = 20GB. Both arrays would also require space for data growth over time. Guess which array would need to grow faster?
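
Continuing the toy sketch above, a clone is nothing more than a copy of that metadata list; the figures below just restate the made-up numbers from this paragraph.

```python
def clone_disk(store, source, target):
    """A clone is just a copy of the metadata; no data blocks are read or written."""
    store.metadata[target] = list(store.metadata[source])


# Back-of-envelope comparison using the numbers from the paragraph above
data_per_vm_gb = 50        # 100GB disk, half full of data
metadata_per_vm_mb = 200   # the deliberately high, made-up metadata figure
clones = 100

print("Without dedupe: copy %d GB (about 5TB)" % (clones * data_per_vm_gb))
print("With dedupe:    copy %d MB (about %.0f GB of metadata)"
      % (clones * metadata_per_vm_mb, clones * metadata_per_vm_mb / 1024))
```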

I doubt that you want to clone VMs 100 times per minute. On the other hand, if you have a thousand VMs, how about being able to have a full copy of any VM any time you want? On a dedupe store there is no difference between a clone and a backup within the store. Both are simply a copy of the metadata. That storage clone of the VM is really a backup of the whole VM. As long as we have a copy of the metadata at a point in time (and the unique blocks) we have the VM contents at that point in time. On a deduplicated store, you can have a full copy of the VM for the IO cost of copying just the metadata. Because each backup is just a copy of the metadata, with barely any overhead, this is an enabler for backups throughout the day on every VM, and maybe multiple times per hour for critical VMs. This could enable protection of files that users create during the day. It could also enable rollback to the moment before a catastrophe occurred. Some small businesses were struck by CryptoLocker and forced to pay a ransom to recover their files. Having storage-level backups would allow these customers to roll back to minutes before the infection struck.
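
In the same toy sketch, a backup is just a timestamped copy of the metadata, and a rollback simply points the live disk back at an earlier copy. The naming scheme and functions here are my own invention, not any product’s feature.

```python
import time


def backup(store, disk):
    """A backup is a timestamped copy of the metadata for a point in time."""
    snapshot_name = "%s@%d" % (disk, int(time.time()))
    store.metadata[snapshot_name] = list(store.metadata[disk])
    return snapshot_name


def rollback(store, disk, snapshot_name):
    """Point the live disk back at an earlier metadata copy,
    e.g. minutes before an infection struck."""
    store.metadata[disk] = list(store.metadata[snapshot_name])
```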

I really like the consequences of using a deduplicated store for live VMs. The ability to back up and recover VMs rapidly with minimal impact is awesome. I like the capacity and IO reduction that comes with deduplicated stores. There are potential issues with deduplicated stores; I’ll be writing about some of those too. I do think that in the future we will have a lot more deduplicated stores in use, particularly while solid-state storage has a high cost for capacity.

© 2015, Alastair. All rights reserved.


2 Responses to Deduplicated VM Cloning and Backup

  1. Frederi Mandin says:

    I definitely agree.
    We ran a PoC using SimpliVity (and also one with Nutanix) and I was baffled by the deduplicated storage. I am not sure whether there is another solution where dedupe occurs as soon as the data is written (even before it is written), but this solution is very impressive. It allows for faster, faster, faster backups, recovery, remote synchronisation, and VM creation. It even has a large benefit when you store many VMs of the same type. Just imagine how many common blocks of data you have when you deploy dozens of Windows 2012 VMs.
    I am not a SimpliVity employee, nor even a partner, just a regular IT guy, and I was really convinced of the benefits of dedupe at source using their solution.

  2. Vladan SEGET says:

    The key is efficiency. The less data that needs to be copied and shifted, the better the solution is. Not to forget the processing overhead, which could lead to higher CPU usage.

    Storage systems (or compute + storage) with integrated backup solutions are the next big thing we will witness. Actually, it’s all of those all-in-one (hyper-converged) solutions which include the hardware, software, backup, monitoring…

    Imagine that you receive an email from your cluster telling you that in three months’ time you’ll need to add another node. And when you add another node to the cluster, is it recognized and configured automatically? You guessed right. I guess that finally we can get really, really lazy as admins… :-).
