Choices, SATA and a touch of DMX-4

Choice is good, right?  It's a core principle of a free society and has played an important role in fostering innovation, prosperity and a ton of other good stuff.  And of course we never make the wrong choices, do we ;-)

Today I'm thinking about two things that are at least loosely related -

  • Lack of support for external storage on the DMX
  • Lack of support for ATA disks within a USP

Talk to a Hitachi guy and he will tell you how dangling the carrot of external storage (behind a USP or NSC) in front of potential customers has helped steal many a customer away from the competition.

Talk to an EMC’er and they will tell you how the DMX is wiping the floor with the USP because of its ability to “tier in-the-box” (thanks to the ability to mix and match a wide range of drives within the DMX, oh and lots of them).

In reality each solution has its place, with neither an out-and-out winner over the other.  Like Copy On Write snapshots versus full-block copies - there are cases where one fits better than the other - but neither wins every time.

So my point is this – if EMC are haemorrhaging customers to Hitachi because externalising storage makes sense to some customers, and vice versa, if the USP is losing ground to the DMX because some folks like putting ATA inside their tier 1 array, why not bite the bullet, build it into your kit, and keep hold of your customers?

There is certainly a demand for both, and demand has sparked the odd U-turn or two in the past – EMC adding RAID 6 functionality being just one example.

When compared to something like Thin Provisioning, which both vendors are working on, implementing the above features would be a comparative walk in the park.

So if it's not that hard to implement, and by doing so you potentially hang on to your customers, why not pinch your nose and take the plunge?

Let's take a closer look at each of the technologies, some of their pros and cons, and find out what might be bothering each vendor (I promise not to quote any vendors or their blogs) -

Externally attached storage

Pros

Allows you to attach lower-performing ATA disks while mitigating many of the impacts they would otherwise have on your tier 1 array – like it or not, they bring baggage with them, including –

  • longer rebuilds (see the rough sums after this list)
  • supposedly more frequent rebuilds
  • slower destaging
  • ….. 
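
To put a rough number on that first point, here's a crude back-of-envelope sketch in Python.  The capacities and rebuild rates are assumptions I've picked purely for illustration – not vendor figures – and real rebuilds depend on array load, RAID scheme and a dozen other things.

    # Crude estimate: rebuild time is roughly capacity / rebuild rate.
    # Capacities and rates below are illustrative assumptions, not specs.

    def rebuild_hours(capacity_gb, rate_mb_per_s):
        """Hours to stream a whole drive's worth of data at a given rate."""
        return (capacity_gb * 1024) / rate_mb_per_s / 3600

    print(f"146 GB FC   @ 40 MB/s: {rebuild_hours(146, 40):.1f} h")   # ~1.0 h
    print(f"500 GB SATA @ 20 MB/s: {rebuild_hours(500, 20):.1f} h")   # ~7.1 h

Bigger drives rebuilding at a slower rate means the array spends far longer exposed to a second failure – which is most of the baggage in one sentence.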

However, even when externalising storage behind a tier 1 array, you still have to assign some tier 1 resources such as ports, CPU and cache.

Enables existing lower tier arrays to take advantage of more reliable and scalable functionality found only in the tier 1 array.  Although this can be achieved by throwing ATA directly into a tier 1 frame, by implementing external storage you are able to extend this tier 1 functionality out to your existing kit – protecting and even maximising existing investments.

Cons

Performance – There is no getting away from it.  If your data is destined for disk on an externalised array it will take longer to put there and longer to get back.  Its journey will be -

FED-cache-FED-FED-cache-disk instead of FED-cache-disk. 
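
To make those extra hops concrete, here's a toy latency model in Python.  Every per-hop cost below is a guess of mine for illustration only – real numbers depend entirely on the kit and the workload.

    # Toy model of the two data paths. Hop costs are invented for illustration.
    HOP_COST_MS = {"FED": 0.1, "cache": 0.5, "disk": 8.0}

    INTERNAL_PATH = ["FED", "cache", "disk"]
    EXTERNAL_PATH = ["FED", "cache", "FED", "FED", "cache", "disk"]

    def path_latency_ms(path):
        """Sum the assumed cost of every hop the I/O passes through."""
        return sum(HOP_COST_MS[hop] for hop in path)

    print(f"internal: {path_latency_ms(INTERNAL_PATH):.1f} ms")  # 8.6 ms
    print(f"external: {path_latency_ms(EXTERNAL_PATH):.1f} ms")  # 9.3 ms

With guesses like these the disk access still dominates, but under load the queueing at each extra hop will hurt a lot more than this naive sum suggests.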

But if this is a problem for you, don't do it.  A lot of people I've seen are hanging SATA disks off the back of a USP, supporting applications where performance is not a huge requirement.

Complexity – Similar to performance, by hanging one array off the back of another you are introducing more moving parts, increasing complexity, and increasing the chances of a component failure in the path.  The whole FED-cache-FED-FED…… thingy.

Power consumption – In most scenarios it takes more energy to power two arrays than one.

People also ask the question of what happens to your data in the event of a lengthy power outage.  One array cannot guarantee that the other has enough, if any, battery power to keep running, so it can't safely destage cache to the external disks.  The thing is, with the kind of battery backup you get in a DMX and USP, coupled with the solid UPS + diesel generator type protection your data centre no doubt has, I wonder if this is a real risk or just one of those made-up ones that we use to sell stuff or slate our competitors.  Feel free to educate me.

Internal support for ATA on enterprise arrays

You just need to take a look at the Hitachi implementation of SATA in the AMS to see where Hitachi stands on ATA.  The AMS performs a verify-after-write operation on every write destined for SATA – end of story.  No option to turn this feature off.  Why?  I can think of only one reason: Hitachi does not trust ATA… yet!
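
For anyone who hasn't come across the term, verify-after-write just means reading the block straight back and checking it matches what was written.  A minimal sketch of the idea – mine, not Hitachi's firmware – looks something like this:

    import os

    def verified_write(fd, offset, data):
        """Write a block, flush it to the media, then read it back and compare.

        The read-back is the 'verify' - it roughly doubles the disk work
        per write, which is the performance price of not trusting the drive.
        (A real array would bypass read caches so the comparison genuinely
        comes off the disk.)
        """
        os.pwrite(fd, data, offset)
        os.fsync(fd)  # push the write past any caches towards the media
        if os.pread(fd, len(data), offset) != data:
            raise IOError(f"verify-after-write failed at offset {offset}")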

With this in mind, I wouldn’t recommend holding your breath waiting for the USP to support internal ATA - famous last words ;-)

The thing is… I can't help but wonder if Hitachi are still testing the water with ATA.  After all, ATA is relatively new and unknown in the enterprise array space.  They may well have über-techies queueing up and devouring ATA-related logs, like kids with the latest Harry Potter adventure.  That's my hope at least.

Also, I’ve talked about it before but I'm gonna mention it again – there will be a world of difference between the reliability of a SATA disk in my laptop and a SATA disk in a purpose-built storage array.  My laptop gets bashed, dropped, sprinkled with food and, most worryingly, attacked by my 18-month-old daughter.  This is the polar opposite of the treatment it will receive when installed in a decent storage array - controlled temperature, air-flow, humidity, vibration……  The storage array is to a disk what a mother’s womb is to an unborn child – the perfect place for it!

So once Hitachi are happy that they can predict the behaviour of ATA inside their kit, maybe – HOPEFULLY - we will see support included in the USP!

Pros

Tiering within a box.  It will often be cheaper and more energy efficient to plug fat and slow disks into your tier 1 array than to run two separate arrays.

Less complexity.  Fewer moving parts - FED-cache-disk instead of FED-cache-FED-FED-cache-disk.

More predictable in the event of a power outage.  Not sure if this is really an issue?

Cons

ATA disks hinder the performance of your tier 1 array.  To continue the womb analogy, while a baby thrives in the womb, many women take a performance hit – tiredness, slowness… and dare I suggest swollen feet and stretch marks (bet you weren’t expecting that when you started reading!).  The same can happen to your array – slow rebuilds, slow reads, slower destaging…..  And as far as I'm aware, nobody in the storage industry has figured out how to avoid these.

Interestingly though, not everyone buying a DMX or a USP these days is buying it for sheer feeds and speeds, and many would live quite happily with the performance hit.

Verdict: While neither will solve third-world poverty or global warming, neither is the cause either (well, not a huge cause).  I’ve done a bit of external storage; it has worked well and I know of no horror stories.  And while I see that internal support for ATA in a tier 1 array may not be the greatest thing for performance, I can see how it would be useful.  Certainly nothing that would scare me off, and plenty to encourage me to implement both in my tier 1 arrays.

While I understand where each vendor may be coming from, I wonder if the risks really outweigh the cost of losing customers?

Maybe Hitachi and EMC believe they are doing the right thing by removing the choice – thereby protecting us and our data.  After all, although making the right choice can reap great rewards, making the wrong choice can bring a world of hurt!

Hmmmmmmmm.....

Two final things –

Interestingly, I see that the DMX-4 is supporting SATA II drives with a bolted-on FC-ATA bridge.  Sounds cool, and I like how this allows some of the ATA baggage to be offloaded from the DMX to the drive itself.  I’ll be interested to know how much of a real-world impact this has.  Oh, and what kind of difference it will make to the price of these drives, SATA disks supposedly being cheap and cheerful.

And finally…. I'm not quite sure where I stand on the energy-efficiency side of ATA that people are banging on about.  I don’t profess to be an expert and haven’t had the time to look into the figures, but surely there is something in the following – SATA disks are slower than FC, so they will need to work for longer to complete any given request.  Working longer = consuming more power.  And that’s not even mentioning more frequent and elongated rebuilds (working and consuming power again).  So in reality, is there much of a difference?  Personally I doubt it's worth making a song and dance about.
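
Here's the kind of back-of-envelope sum I mean.  Every figure below is made up purely for illustration – they are not measured numbers or vendor specs.

    # Energy per I/O = active power x time spent servicing the I/O.
    drives = {
        "FC 15k rpm":    (14.0, 6.0),   # (watts when active, ms per I/O)
        "SATA 7.2k rpm": ( 9.0, 13.0),  # slower, so each I/O takes longer
    }

    for name, (watts, ms_per_io) in drives.items():
        millijoules = watts * ms_per_io  # W x ms = mJ
        print(f"{name}: {millijoules:.0f} mJ per I/O")

With these made-up numbers the SATA drive actually burns more energy per request (117 mJ vs 84 mJ) despite drawing less power – exactly the doubt raised above.  Idle time, where SATA should win, muddies the picture further.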

Nigel