Symmetrix V-Max

Since I'm recovering on the couch from a small operation, and therefore have time to spare, I thought I'd join the throng in commenting on the Symmetrix V-Max....
For perspective – at the time of writing, I don't work for EMC and I don't work for any of their competitors. My own area of expertise is Hitachi storage, but I classify myself as a nonpartisan storage nut – I like good storage, whatever the colour or flavour. Basically, it's not my job to promote it, and it's not my job to throw mud at it... so I think my opinion is fair and unbiased.
So for my penny’s worth – although that might be grossly over-valuing my opinion :-D
To be honest, my initial reaction was "yawn". But after taking a closer look, I'm quite impressed. So if you're feeling underwhelmed about the whole thing, it might be worth taking a second and closer look.
HARDWARE
From a hardware point of view the Symmetrix V-Max is revolutionary (for EMC). The new hardware approach and design is radically different from anything else EMC has done at the high end.
Some highlights –
· Intel Xeon quad-core processors
· Globally accessible local cache :-S
· Radically new "unified director" design, with all of the following on a single card:
  o front end ports
  o back end ports
  o cache
  o processors
· Virtual Matrix Interfaces
· RapidIO fabric interconnect to connect multiple "engines"
Symmetrix XIV V-Max
At first glance, looking through the marketing bumf, I thought I saw shades of XIV (V-Max unified directors vs XIV data modules.....). However, on closer inspection, any such comparison would be an outright insult to the Symmetrix V-Max and all those associated with it. What with XIV's lack of global memory, severely limited drive support, and limited replication abilities, XIV is nothing short of pathetic in comparison. If you previously thought XIV was high end, think again.
No more monolith
There can be no doubt that this Symmetrix is modular, oh and scalable. Each pair of unified directors is called an “engine” and controls up to 360 drives (any mix of EFD, FC, SATA). Multiple engines (currently up to eight) can then be connected via the Virtual Matrix interfaces and work in unison as a single large system. Custom ASICs on the virtual matrix interfaces enable all cache memory, even that on remote engines, to be treated as a single large global cache.  Clever.
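To put some rough numbers on the scale (my own back-of-the-envelope arithmetic, and assuming every engine can actually carry its full complement of drives): 8 engines × 360 drives per engine works out to 2,880 drive slots in a single system – although the officially supported maximum on the spec sheet may well be lower than that raw figure.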
Oh, and the eight engines don't have to be racked side by side in a row – you could theoretically have engine number 1 in Hall A, engine number 2 in Hall B..... Now this may be significant, as I often find that customers would like to add capacity to an existing array, only to find there is no floor space adjacent to the existing kit in which to bolt on another frame. The Virtual Matrix Architecture may get around this without breaking a sweat.
This kind of architecture makes it simpler to scale, and future generations of the V-Max promise to scale to far more than 8 engines. Beasts but not monoliths!
Interestingly, this builds upon EMC's approach of building large arrays. While Hitachi has not increased the number of internal disks in its high-end arrays for several years, EMC has. And while there was previously debate over whether the Symmetrix DMX could actually scale to use the maximum number of drives listed on the spec sheet, this new modular architecture should help V-Max scale very smoothly.
In the past there were also situations where the purchase of additional arrays was required to increase capacity. Now, for V-Max customers at least, these types of capacity increases/upgrades will be fewer and farther between, with many customers opting to simply add more engines to an existing V-Max. This must surely be cheaper, easier to manage, easier to license.....
Hardware absentees
Noticeable absentees are FCoE and 10GigE Data Centre Ethernet. But demand for FCoE doesn't really exist at the moment, and it will no doubt appear in future product refreshes. As for 10Gig DCE, this may replace the RapidIO virtual matrix fabric interconnect in the future – that should keep Cisco happy :-D
 
SOFTWARE
First up, Symmetrix V-Max uses the tried and tested Enginuity code base. This is good and bad, but probably mostly good.
The good – you get field-proven software (think SRDF etc.) that has been battle-hardened over many years.
The bad – there will no doubt be some annoying “backward compatible features” ported over with Enginuity that will take a while to be ironed out. This is inevitable.
But with such a radical overhaul from a hardware point of view, would anybody really want a radical new software stack on top of that? Not me.
For me it's a great combination – a wise old head on a young, broad set of shoulders.
 
FAST
On the topic of software let me mention a couple of things I like about FAST (Fully Automated Storage Tiering).
According to the hype, FAST brings two game changers to the party – well, one to this week's party and the other to next week's –
1. The ability to migrate LUNs between tiers without disrupting remote copy jobs.
2. The ability to migrate at a sub-LUN level.
On point 1: this is huge. While Hitachi Tiered Storage Manager is a good product, the fact that you cannot migrate a LUN from one set of spindles to another without first suspending the TrueCopy remote copy jobs means it's not yet a great product.
Strangely, production environments do not like interruptions to remote replication during LUN migrations. As a result I rarely see HTSM being used outside of the realm of migrating LUNs in from older/competitors arrays into a USP/USP V. Once the data is in the USP V, migrations up and down tiers are rarely done because of this limitation. 
Assuming there is no hidden small print, FAST will immediately be far more useful, and will hopefully spur Hitachi on to provide the same functionality in HTSM.
On point 2: first up, this feature is not available at GA of the Symmetrix V-Max and is only in the pipeline (how many times do we hear that?). However, once available it will change the way we manage storage tiering and performance optimisation forever.
The promise is that for VP (Virtual Provisioning) LUNs we will be able to migrate LUN extents between different tiers rather than entire LUNs. (Each VP LUN is made up of multiple 768K extents.) Each extent can be migrated up and down tiers, allowing scenarios where extents from a single LUN reside on multiple tiers.
Why is this good? In the past, even if only a few tracks of a LUN were hot and demanded better performance, we have had to migrate the entire LUN. With the price of EFD/SSD still relatively high, extent-based migrations will enable far more efficient use of it.
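To make that concrete, here's a rough sketch using the 768K extent size mentioned above – the LUN size and "hot" percentage are purely my own illustrative assumptions, nothing from EMC – showing how much EFD you'd burn promoting a whole LUN versus promoting only its hot extents:

```python
# Rough illustration of why sub-LUN (extent-level) tiering saves expensive EFD/SSD capacity.
# The 768 KB extent size is taken from the Virtual Provisioning description above; the
# LUN size and "hot" fraction below are purely illustrative assumptions, not EMC figures.

EXTENT_KB = 768  # VP extent size in KB

def efd_needed(lun_size_gb, hot_fraction):
    """Return (GB of EFD for whole-LUN promotion, GB of EFD for hot-extents-only promotion)."""
    total_extents = (lun_size_gb * 1024 * 1024) // EXTENT_KB  # how many 768 KB extents make up the LUN
    hot_extents = int(total_extents * hot_fraction)           # extents busy enough to deserve EFD
    hot_gb = hot_extents * EXTENT_KB / (1024 * 1024)
    return lun_size_gb, hot_gb

whole_lun_gb, hot_only_gb = efd_needed(lun_size_gb=500, hot_fraction=0.03)
print(f"Whole-LUN promotion needs  : {whole_lun_gb} GB of EFD")
print(f"Extent-level promotion needs: {hot_only_gb:.1f} GB of EFD")
# With these made-up numbers, roughly 15 GB of EFD does the job instead of 500 GB.
```

Real access patterns are obviously messier than a single "hot fraction", but the principle stands – you pay for flash by the extent rather than by the LUN.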
Oh, and at GA there is no "Automated" to it. It would have to be pretty impressive, though, before I'd trust it with my data......
  
CONCLUSION
I'm impressed. I think EMC has laid itself a solid foundation on which to build for the future.
Of course the devil is always in the detail, and it remains to be seen how these new architectures and features translate into real-world benefits. However, for me, EMC has done a great job. Too much change and customers would be loath to risk their business on it. Too little change and there would be no business case to make the switch.
I think high end storage may just have changed, and for the good. It really is an interesting time to be involved in high end storage.
Oh, and I wonder where this leaves CLARiiON, with V-Max being able to start out so small and scale so large!?
And finally, the zillion references to "Virtualisation" in the press releases and blogs etc... While the V-Max doesn't do virtualisation Hitachi-style, I dare say there are several features that merit the term virtualisation (remember, nobody has a patent on what you can and can't call virtualisation). Two that spring to mind: the ability to bring all cache memories together as a single "global" cache, and the (albeit still only on the runway) ability to migrate and re-tier at the sub-LUN level for VP LUNs.....
Nigel
PS. All of a sudden EMC World got more interesting