SimpliVity: A New Paradigm for VDI Implementations

I need to start this post off with a disclaimer: The OmniCube was developed to run all workloads in the data center. VDI is just another workload that will run on the OmniCube.

Brief History

In the past, VMware and Citrix have released a number of features to help improve the performance and ROI of VDI deployments.  One of those features is linked clones, which were designed primarily to save disk capacity on one of the most expensive resources in your deployment.  For VMware View we would tell customers they could save roughly 40% of their overall disk capacity requirements by implementing linked clones.  The problem with linked clones was the additional design work that had to be maintained (number of clones per source) and the poor performance that resulted from all the clones reading from the same spindles.  That forced us to implement different tiers of disk, maybe some cache devices, and it ultimately cost us more money.

Another issue with virtualizing desktops and keeping persistent data was the difficulty of designing a disaster recovery solution and then actually getting it to work during a recovery. I implemented several View designs using DFS as my persistent data storage and then replicating that data to the alternate data center.  It worked most of the time, but if the customer didn't maintain their AD and DNS infrastructures, calamity was bound to ensue.  We have to be honest (because there has been some debate): if we could all run persistent desktops in any VDI implementation, we would choose to do so.  We can use our traditional desktop management tools, which makes management easier, and we all like to have our own data where we know someone else can't lose it. Naturally, running persistent desktops also uses more disk space because we have to store all of that data in multiple places.

Deduplication for Performance

Yep, that's how we get it done at SimpliVity.  We started from the beginning with data mobility and performance as the core design elements of our solution, and inline deduplication is how we solved both of those problems.  From that core came the OmniCube Accelerator and the OmniStack software.  We dedupe, compress, and optimize data inline (in memory) and then sequentialize the I/O before it hits any storage media.  The whole purpose is to never write duplicate data to disk, thus optimizing all of the I/O throughout the system.  We all know that while hard drives have gotten larger over the years, they haven't gotten any faster. Traditional architectures are capacity rich but I/O poor on those spinning disks.  This data efficiency spans all tiers (DRAM, SSD, HDDs, and cloud storage) and is global, so once data has been deduplicated within the OmniCube Federation it is NEVER duplicated again.  We are not only saving you storage capacity, but more importantly we are saving you I/O, which is a much more expensive resource.
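
To make the idea concrete, here is a minimal, purely illustrative sketch of inline deduplication in general, not OmniCube's or OmniStack's actual implementation; the class name, block size, and index structure are assumptions made up for the example:

```python
import hashlib

class DedupStore:
    """Toy inline-dedup store: blocks are fingerprinted in memory and only
    unique blocks are ever written to the backing store."""

    BLOCK_SIZE = 8 * 1024  # hypothetical fixed block size, for illustration only

    def __init__(self):
        self.index = {}          # fingerprint -> position of the stored block
        self.store = []          # stands in for the actual storage media
        self.writes_avoided = 0  # duplicate writes that never hit "disk"

    def write(self, data: bytes) -> list:
        """Ingest data and return a list of block references (the 'file')."""
        refs = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            block = data[i:i + self.BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()  # fingerprint computed inline, in memory
            if fp not in self.index:                # new data: write it exactly once
                self.index[fp] = len(self.store)
                self.store.append(block)
            else:                                   # duplicate data: no write I/O at all
                self.writes_avoided += 1
            refs.append(self.index[fp])
        return refs
```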

Post-process dedupe is only going to save you capacity.  It doesn't do anything for performance and doesn't reduce I/O: you still have to write all of the data before it is deduplicated.  And I won't even get into the fact that it doesn't help you with your disaster recovery SLAs.

We don't need no linked clones!

We all know that VDI environments contain a LOT of duplicate data.  We have copies and copies of our OS images.  In the OmniCube, since the data is deduplicated at inception, why wouldn't you just use full clones?  All of the blocks from the original desktop template have already been deduplicated, so just create as many copies as you need and don't worry about using up all of your disk space.  For boot, imagine that a desktop needs to read 100 MB of data to boot.  If I have 100 desktops, that's 10 GB of data I need to read to boot those 100 desktops.  On traditional architectures, all 10 GB of those reads come from either spinning HDDs or a cache tier. If the data is already deduplicated, you only have to read that first 100 MB, so the boot time for all 100 of those desktops is tremendously fast, and even more so since all OmniCube reads come from SSD.  It's simple math: reading 100 MB of data is always faster than reading 10 GB of data.
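
As a rough illustration of the full-clone argument, here is a usage example of the toy DedupStore sketch above (again, not actual OmniCube behavior); the image size is scaled down so the example runs quickly, but the ratio is the point:

```python
import os

store = DedupStore()                               # toy inline-dedup store from the earlier sketch

golden = os.urandom(1 * 1024 * 1024)               # stand-in for the template's boot working set

template_refs = store.write(golden)                # template: unique blocks are written once
clone_refs = [store.write(golden) for _ in range(99)]  # 99 "full" clones of the same image

unique_mb = len(store.store) * DedupStore.BLOCK_SIZE / 2**20
print(f"logical copies: 100, unique data actually stored: ~{unique_mb:.0f} MB")
print(f"duplicate block writes avoided: {store.writes_avoided}")
```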

So for persistent desktops, this inline deduplication becomes a dream scenario.  All of the data is deduplicated inline, so I have plenty of both performance and capacity. Why wouldn't you just give everyone their own desktop and let them keep it?

What about Disaster Recovery?

As I said earlier, DR in traditional VDI environments can be difficult, to put it mildly.  With the OmniCube we do full backups at the VM level without using any more physical storage. Yep, read that again.  We aren't doing the traditional array-based snapshots that affect performance and go away if I roll back to an earlier snap.  We are doing real full backups without producing more I/O for those backups, and guaranteeing the backup will be there when you need to do a restore, even if you delete the VM.  Since we are doing full backups without producing more I/O or using more capacity, why wouldn't you back up a desktop VM just like you do your servers?  The next time a user calls and says their desktop has blown up, just restore it from an earlier point in time.
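
One way to picture how a full backup can cost no extra capacity or write I/O, continuing the toy DedupStore sketch from above (a conceptual model, not OmniCube internals): if a VM's data is just a list of references into the deduplicated block index, a backup is an independent copy of that metadata, so it survives even if the VM itself is deleted. The VM name and sizes below are made up for the example.

```python
import copy
import os

store = DedupStore()                                   # toy store from the earlier sketch
desktop_refs = store.write(os.urandom(256 * 1024))     # a made-up "desktop VM" payload

blocks_before = len(store.store)
backup = {"vm": "desktop-42", "refs": copy.deepcopy(desktop_refs)}  # "full backup": metadata only
assert len(store.store) == blocks_before               # zero new blocks were written

del desktop_refs                                       # deleting the VM doesn't touch the backup
restored = [store.store[r] for r in backup["refs"]]    # restore: just follow the saved references
```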

The other piece of DR is offsite replication.  Remember I said that once data has been deduplicated and resides in the OmniCube Federation, it is never duplicated again?  That goes for replication across the WAN as well.  We only send the blocks (4 KB - 8 KB each) that don't already exist at the alternate site.  If your remote office or corporate office becomes a smoking hole, just restore the desktops on OmniCubes in a surviving location and get your employees (IT customers) back up and running.
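
For intuition, here is a minimal sketch of the general dedup-aware replication idea, not OmniCube's actual wire protocol; the function name and the dictionary standing in for the remote site's index are assumptions for the example:

```python
import hashlib

def replicate(local_blocks, remote_index):
    """Toy dedup-aware replication: only blocks the remote site does not
    already hold are shipped across the WAN. `remote_index` stands in for
    the destination's global fingerprint index (fingerprint -> block)."""
    sent_bytes = 0
    for block in local_blocks:
        fp = hashlib.sha256(block).hexdigest()  # offer the fingerprint first
        if fp not in remote_index:              # remote site is missing this block
            remote_index[fp] = block            # ...so send it over the WAN
            sent_bytes += len(block)
        # otherwise the block already exists remotely and nothing is sent
    return sent_bytes  # WAN traffic is only the blocks that were actually missing
```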

Summary

So as you can see, SimpliVity really changes the way you've had to think about and design VDI over the past few years.  We've removed (or fixed) a lot of the issues that have plagued IT departments when designing and running VDI environments.  We make it simpler, enable you to manage your desktops the same way you always have, and improve the survivability of your VDI infrastructure, enabling your IT customers to continue making your company money.  And that's the reason companies have IT departments in the first place, isn't it?

You can check out a great article and video here: Link