Response to Hu Yoshida's Blog Entry

I thought I would post my response to his blog entry about a comment I made on Drunken Data regarding replication. Evidently HDS screens comments prior to publishing them, so I need a record of the comment just in case they edit it. Here ya go.

Hu's blog entry: http://blogs.hds.com/hu/2006/01/data_replicatio.html

My response:

I guess I should be a little more eloquent when describing software costs from now on. Maybe I could just say poo instead of (insert expletive here). Wink

Overall it was a good marketing synopsis of some of the advantages that HDS is bringing to the table for asynchronous replication. Some of the things I take issue with are:

1. "Time stamped journal" - From what I understand, you are only using time stamps for the mainframe-based data being replicated. You know, the old IBM XRC. The open systems data is still held to an IOD method, but not one using time stamps. Of course, I could be wrong, so please clarify this for us.

2. "Definition of a consistency group" - This is one of my biggest headaches with the disk array vendors. Everyone's definition is different, and none of these consistency-group concepts is perfect. My biggest issue with HDS's implementation of consistency groups is that I cannot have mainframe data and open systems data in the same consistency group. Even though I have applications residing on multiple platforms (mainframe, Unix, and Windows) that all share data with each other, I can't recover all of their data to the same point in time. Now, HDS has told us, "You don't need to worry about mainframe and open in the same consistency group. You'll be replicating the data so often, you'll be within seconds of each other at the time of recovery." But those systems are doing hundreds, if not thousands, of transactions per second, so I could be several units of work off at the time of recovery.
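To put that in concrete terms, here's a quick back-of-the-envelope sketch (my own illustrative numbers, not anything from HDS) of how many units of work can fall between two separately recovered consistency groups that are "within seconds" of each other:

```python
# Illustrative sketch only: the transaction rate and skew are hypothetical,
# chosen to show the scale of the problem, not measured from any real system.

def units_of_work_at_risk(transactions_per_second: int, skew_seconds: float) -> int:
    """Transactions that can commit on one platform but not the other when
    the mainframe and open-systems consistency groups are recovered to
    points in time that differ by skew_seconds."""
    return int(transactions_per_second * skew_seconds)

# A shared workload running 1,500 TPS with a 2-second skew between the
# two recovery points leaves thousands of cross-platform transactions
# potentially inconsistent after recovery.
print(units_of_work_at_risk(1500, 2.0))  # -> 3000
```

So "within seconds" is not the same thing as "at the same point in time" once the transaction rate gets high enough.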
Granted, this recovery method will be much better than what we have today recovering from tape, but it's not perfect, and I hope HDS will strive to improve upon it in the future. Consider this the gauntlet being thrown down for HDS to solve.

3. "Have all the disk for your applications on one subsystem (for time consistency)" - I don't believe I ever stated in my comment that the data had to be on a single disk subsystem for "time consistency," as you put it. However, I agree with the statement, and I applaud what HDS has done with the Tagma and its enablement of replication across heterogeneous storage. My primary reason for putting all of the data on a single subsystem was the huge cost savings on the replication software. Why buy three replication licenses when I can buy a single license?

4. The overall cost of the software for these disk subsystems is totally out of whack and way too expensive. I'm not going to publish any numbers here, but I feel (and every customer does, for that matter) that all the disk vendors are overpricing their software. I know (and you do too) that it doesn't cost this much to develop updates to 4th-generation software. I mean, you're still selling Graphtrack, for goodness sake. We've been using Graphtrack since the 7700 Classic days, and it hasn't changed so much that we should be charged for the new version when we buy a new disk subsystem. It may be okay to charge a little more for Universal Replicator since it's new, but at the current price you'll make up the expense it took to develop after only a few sales. Bottom line: no vendor's software is worth what they charge for it!

Thanks for including me in one of your posts. I dig what HDS is doing with its technology, but you guys still have some work to do. Keep listening to customers and addressing their needs and you'll be fine.

Take Care,
Snig