Xsigo - Try it out, I dare you!

UPDATED 19/11/09: minor corrections and clarifications added.

OK, if you don’t already know Xsigo Systems, and what they do, then you are seriously missing out!

 

The way that I see it, Xsigo (pronounced “see-go”) are of particular interest for two reasons -

Firstly, they are playing in the steaming hotbed that is the I/O consolidation and virtualisation space.  This area of Data Center computing is probably experiencing its biggest period of change and upheaval since the birth of Local Area Networking.  On top of that, the I/O subsystems of servers and blades are becoming increasingly important in modern data centers.

In a nutshell, just about everything is changing in the Data Center I/O space!

RupturedMonkey advice to vendors: Now is not the time to stand still or try to defend your traditional core competencies. Move with the times or risk falling behind!

RupturedMonkey advice to consultants and architects: Now is not a good time to take a professional snooze. If you do, you might find that you don't recognise the world you wake up to. Stay awake!

The second reason Xsigo Systems are of interest is that they have an absolutely kick-ass product - the Xsigo VP780 I/O Director.  So let's talk about it…

The Xsigo VP780 I/O Director

Before digging into the specs and architecture, I should point out that the VP780 I/O Director is the only offering from Xsigo!  As well as being their only current product, it is also pitched squarely at enterprise customers.  I suppose one of their responses to that would be that it allows them to be laser focussed, but my initial reaction was that this makes them a bit of a one-trick pony…  Compare their I/O consolidation portfolio to the likes of Brocade, and especially Cisco, and you will see what I mean.

During their presentation at the recent GestaltIT Tech Field Day they did say that they are working on similar but scaled-down offerings for the SMB market, but there is nothing to announce at the moment.

However, despite being the only noteworthy member of the Xsigo family, the VP780 is no wimp!  On the contrary, in a one-on-one it would probably fancy its chances against any of its competitors.  I certainly wouldn’t bet against it from a technology point of view! 

Specs and Techs

The VP780 is a 4RU, high-speed, low-latency, 780Gbps I/O consolidation platform that was over two and a half years in the making.  It provides PXE boot and boot from SAN across 20Gbps connections to your servers, and makes "cable once and do the rest in software" a reality!

Below is a picture of the front panel of the one on display at GestaltIT Tech Field Day -

Xsigo VP780 front view

From a high level architecture point of view the VP780 has -

  • Server-side connectivity via 20Gbps Infiniband XFP ports
  • Network-side connectivity via 15 hot-plug slots that can be loaded with 1Gbps Ethernet, 10Gbps Ethernet and 4Gbps FC.
  • Passive midplane
  • High-speed low-latency internal switching fabric

Hopefully the scribble below will be helpful as I attempt to dig deeper and explain some of the main components. Later in the week I will record a whiteboard session and upload it as a complementary post.

Xsigo scribble

 

What does it do - in a nutshell

In a nutshell, the Xsigo I/O Director does for I/O what VMware does for CPUs.  Only it has the added benefit of removing the physical limits of the server chassis: instead of installing your NICs and HBAs in your servers and then carving them into virtual adapters that can only be used by that server, you install your NICs and HBAs in the Xsigo VP780 chassis, where they can be carved up and dynamically allocated to any connected server.  In doing this, you are effectively moving the edge of the network out of the server, enabling servers to be entirely stateless from an I/O perspective. Cool.
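To make that concrete, here is a rough Python sketch of the idea. The class and method names are entirely my own invention for illustration - this is not Xsigo's API, just the shape of the concept: physical I/O lives in the director, and virtual devices get handed out to whichever connected server needs them.

```python
# Conceptual model only (my own names, not Xsigo's interface):
# physical NIC/HBA modules live in the chassis, virtual devices are
# carved from them and assigned to any connected server.

class IODirector:
    def __init__(self):
        self.physical_cards = []   # NIC/HBA modules installed in the chassis
        self.assignments = {}      # server name -> list of virtual devices

    def add_card(self, card_type, ports):
        self.physical_cards.append({"type": card_type, "ports": ports})

    def assign_virtual_device(self, server, dev_type, name):
        """Carve a vNIC/vHBA from chassis resources and give it to a server."""
        vdev = {"type": dev_type, "name": name}
        self.assignments.setdefault(server, []).append(vdev)
        return vdev

director = IODirector()
director.add_card("10GbE", ports=1)
director.add_card("4Gb FC", ports=2)

# Any connected server can be given any mix of virtual devices,
# no screwdriver and no trip to the data center required.
director.assign_virtual_device("esx01", "vNIC", "prod_lan")
director.assign_virtual_device("esx01", "vHBA", "san_a")
director.assign_virtual_device("esx02", "vNIC", "prod_lan")
```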

Connections to your servers… the hardware stuff

The VP780 has 24 x 20Gbps Infiniband ports for connections to your servers (server-side in the above diagram). They utilise copper or optical CX4 cables and XFP interfaces, and are terminated at the server side on Host Channel Adapters (HCAs in Infiniband parlance - and yes, each connected server needs an HCA). These HCAs are not used directly by the OS; instead, host-side drivers work together with the Xsigo I/O Director to ensure that the appropriate vNIC and vHBA devices are available to the OS.  Of critical importance is that these vNIC and vHBA devices work exactly like physical NICs and HBAs, and the OS is none the wiser.

Some quick comments on these physical aspects –

1. The 20Gbps XFP interfaces are not hot swappable and not upgradeable to 40Gbps QDR Infiniband.  That said, these Infiniband HCA and switch ports are more energy efficient, lower-latency and higher-throughput than their 10GigE counterparts.  They also support longer distances over copper, meaning copper is more of an option than it is for 10GigE, which currently has a practical limit of around 5-10 meters.

2. As these are CX4 Infiniband connections, you will need Infiniband Host Channel Adapters (HCA) in your servers.

3. XFP and copper CX4 are more power hungry than the SFP+ and copper commonly used with 10Gbps CEE. However, XFP optical is not more power hungry than SFP+ optical.

XFP back of server back to Xsigo

On point 1 from above - this should not be seen as a major issue. 20Gbps to your servers is lightning fast by today's server standards, and with the current wave of PCIe 2.0, you're unlikely to be pushing beyond 20Gbps anyway. 40Gbps models are planned, as well as faster HCA cards, although the timing will be down to Mellanox, as the silicon is sourced from them.

Also, in reality, how many people are racing to crack open servers and blades to upgrade I/O cards? Most people seem to opt for newer servers and blades when higher bandwidth I/O is required - maybe Nehalem-EX…  20Gbps is more than fast enough for the vast majority of today's applications and servers.

On point 2 - don't be put off by the word Infiniband!

FUD watch: Infiniband is not a disease, nor is it dead!  It is actually a rock-solid, ultra-high-performance, low-latency channel interconnect designed for data center use and high performance computing. In fact, many of the world's fastest supercomputers are built around Infiniband.  So it's definitely alive and well.

You don’t have to learn a boat-load of new Infiniband skills. You will run a copper CX4 cable from the HCAs in your servers to the Xsigo director and that is about as much Infiniband as you will ever see or need to configure in the solution. The rest is normal Ethernet and FC.

Connections to your servers… the clever stuff

So, if all of this talk about Infiniband hasn’t scared you off, well done.

The Xsigo VP780 I/O Director allows you to carve its NIC and HBA resources into virtual NICs (vNICs) and virtual HBAs (vHBAs). Each of these vNICs and vHBAs acts exactly like a normal physical NIC or HBA, and thanks to some clever work in the server-side drivers, operating systems (ESX, Windows, Linux etc.) see them just as they would physical NICs and HBAs.

Another thing not to be underestimated is the fact that the physical NIC and HBA hardware sits outside of the physical server or blade chassis.  This enables the physical servers to be stateless from an I/O point of view, and allows virtual resources to be moved from physical server to physical server with great ease.  The VP780 owns the server profiles, which include MAC addresses, WWPNs etc, and allows up to 16 vHBAs and 32 vNICs to be assigned to a single physical server, all of which can be created and deployed in literally seconds with no reboots.   Ideal for VMware and the c c cl cll cllllll clllllll cloud!?  Think that's the first time I've said the "c" word in a blog.

In my opinion, the previous paragraph is of huge importance.  If you don’t think this is huge, then I suggest that you re-read it and take a minute or two to think about it.  This is flexibility like no other solution I know of.
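Here is a minimal sketch of what "the director owns the server profile" implies, again using my own made-up names rather than the real Xsigo management objects: the identity (MACs, WWPNs) lives in the profile, not in the tin, so moving it to another physical server is trivial.

```python
# Rough conceptual sketch (not the real Xsigo interface): a server profile
# carries the I/O identity, so the physical server itself stays stateless.

MAX_VNICS, MAX_VHBAS = 32, 16   # per-server limits quoted above

class ServerProfile:
    def __init__(self, name):
        self.name = name
        self.vnics = []          # each vNIC carries its own MAC address
        self.vhbas = []          # each vHBA carries its own WWPN
        self.attached_to = None

    def add_vnic(self, name, mac):
        if len(self.vnics) >= MAX_VNICS:
            raise ValueError("vNIC limit reached")
        self.vnics.append({"name": name, "mac": mac})

    def add_vhba(self, name, wwpn):
        if len(self.vhbas) >= MAX_VHBAS:
            raise ValueError("vHBA limit reached")
        self.vhbas.append({"name": name, "wwpn": wwpn})

    def attach(self, physical_server):
        # MACs and WWPNs travel with the profile, so the new server looks
        # identical to the LAN and SAN - no zoning or VLAN rework needed.
        self.attached_to = physical_server

profile = ServerProfile("web01")
profile.add_vnic("vnic0", "00:13:97:aa:bb:01")
profile.add_vhba("vhba0", "50:01:39:70:00:aa:bb:01")
profile.attach("blade-3")   # hardware dies?  profile.attach("blade-7") and carry on
```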

Is it just me, or does this look and feel very much like MR-IOV (PCI-SIG Multi-Root I/O Virtualisation)? 

Does anybody else do anything like this?

NOTE: I’ll post on this in the near future, but I personally think that MR-IOV has huge potential to rock the I/O consolidation world, and I’m not alone in thinking that! However, there is a case for Infiniband being a better Rack Area Networking (RAN?) interconnect than PCIe. One for a future post if people are interested.

Connecting to existing backbones

On the network side of the VP780, there are 15 slots that can be populated with various modules.  Currently 3 module types are available -

  1. 1 x 10Gbps Ethernet module
  2. 10 x 1Gbps Ethernet module
  3. 2 x 4Gbps HBA module

This gives you traditional LAN and SAN connectivity, with FCoE and iSCSI offload on the roadmap.  So, connecting to your existing LAN and SAN is "as easy as organising a tweet-up at TechFieldDay" ;-)  Your upstream LAN and SAN is oblivious to the fact that the I/O is not initiated at the server chassis, and just hums away as normal (NPIV is implemented on the SAN side of the HBAs).  The diagram below shows native FC connections coming out of the network side of the VP780 at the demo lab at VMware.

Back of rack
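On the NPIV point above, here is a hedged illustration of what it buys you - purely conceptual, not a driver, and the WWPNs are made up: one physical FC port logs in to the fabric once (FLOGI), then each vHBA behind it performs an FDISC and gets its own WWPN and N_Port ID, so the upstream SAN just sees ordinary initiators it can zone as normal.

```python
# Conceptual NPIV illustration: multiple vHBA identities behind one physical FC port.

class PhysicalFCPort:
    def __init__(self, base_wwpn):
        self.base_wwpn = base_wwpn
        self.npiv_logins = []

    def flogi(self):
        print(f"FLOGI : physical port {self.base_wwpn} logs in to the fabric")

    def fdisc(self, vhba_wwpn):
        # Each vHBA acquires its own N_Port ID behind the same physical port.
        self.npiv_logins.append(vhba_wwpn)
        print(f"FDISC : vHBA {vhba_wwpn} gets its own N_Port ID")

port = PhysicalFCPort("50:01:39:70:00:00:00:01")
port.flogi()
port.fdisc("50:01:39:70:00:aa:bb:01")   # vHBA belonging to server 1
port.fdisc("50:01:39:70:00:aa:bb:02")   # vHBA belonging to server 2
```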

At the moment, the VP780 has no support for FCoE.  Not a huge drawback, as the standard and shipping products are still young; however, if they drag their heels over this they will fall behind in an important new and emerging market.  Something like the Emulex UCNA, with its 10Gbps Ethernet, FCoE and iSCSI offload all on a single module, would be the cherry on the cake for this.

FUD Watch:  Be careful to note that the VP780 is not a switch.  True, it can switch frames between servers without passing traffic to the upstream network switch, but it does not HAVE to. It can forward the frames to the upstream switch if that switch supports hairpin switching, so it does not have to alter existing network management models. However, there are several scenarios, such as HPC or RDMA, where switching locally over the internal IB fabric is beneficial for performance reasons. Choice is a good thing!
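A simplified sketch of that forwarding choice, using my own logic rather than anything from Xsigo's implementation: server-to-server traffic can either stay on the internal IB fabric for latency, or be pushed up to the existing switch so the network team keeps full visibility.

```python
# Conceptual sketch of the "switch locally vs forward upstream" choice.

def forward(src_server, dst_server, prefer_local=False):
    same_director = src_server["director"] == dst_server["director"]
    if prefer_local and same_director:
        return "switched locally over the internal fabric (low latency - HPC/RDMA style)"
    return "forwarded to the upstream switch (existing management model preserved)"

esx01 = {"director": "vp780-1"}
esx02 = {"director": "vp780-1"}
print(forward(esx01, esx02))                      # default: up to the existing switch
print(forward(esx01, esx02, prefer_local=True))   # keep it on the IB fabric
```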

Nice Management GUI

While visiting Xsigo at GestaltIT Tech Field Day I got my hands on some Xsigo kit, including the management interface.  I was able to present vNICs and vHBAs to ESX servers and have them picked up and recognised on the fly by virtual machines. I was also able to play a little with some simple QoS features - increasing and decreasing bandwidth is very simple, and also dynamic.  All simple stuff, and it worked a treat.  Oh, and it has a CLI.
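For a feel of what that dynamic QoS amounts to, here is an illustrative snippet only - the real work was done through Xsigo's GUI and CLI, whose syntax I'm not reproducing here. The point is simply that a vNIC's bandwidth allocation is a property you can change on the fly, with no reboot of the host.

```python
# Illustrative only: dynamic per-vNIC bandwidth settings, applied live.

class VNic:
    def __init__(self, name, committed_mbps, peak_mbps):
        self.name = name
        self.committed_mbps = committed_mbps
        self.peak_mbps = peak_mbps

    def set_qos(self, committed_mbps, peak_mbps):
        # Applied dynamically - the running OS just sees more (or less) headroom.
        self.committed_mbps = committed_mbps
        self.peak_mbps = peak_mbps
        print(f"{self.name}: {committed_mbps} Mbps committed, {peak_mbps} Mbps peak")

vnic = VNic("esx01_vmotion", committed_mbps=1000, peak_mbps=2000)
vnic.set_qos(committed_mbps=4000, peak_mbps=8000)   # crank it up mid-migration
```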

There is a ton more I could say, but this is already pretty long so I’ll wrap up with some final thoughts…..

Conclusion

The VP780 does some interesting stuff and in some respects is ahead of the curve. For instance -

  • Performance to the server is 20Gbps over Infiniband
  • Flexibility. Removing the physical NICs and HBAs from the server or blade chassis makes this a hugely flexible solution. 
  • The number of vNIC and vHBA devices that can be carved per physical card and presented to each host is higher than most of the competition. E.g. HP VC Flex-10 and IBM Virtual Fabric can only create 4 virtual functions per port. This offers superior utilisation as a result.

As always though, there is no perfect solution.  There is currently no FCoE support, and both the Infiniband and the 10Gbps Ethernet options use XFP transceivers, which are not as good as SFP+ when it comes to the likes of size, cost and power consumption.

There is also the fact that Xsigo are a relatively small and new company pitching to the enterprise.  From a technology point of view they are brilliant, but one has to wonder whether they will still be around in 10 years' time to support and develop their products.

However, when all is said and done, I really like what they are offering. My final question for Camden Ford after his presentation was "can I have one for my garage?"  Says it all really.

If you are looking in to I/O consolidation and virtualization then you should definitely at least check out Xsigo… unless you’re too chicken! 

Thoughts and comments welcome.

Nigel

You can follow me on Twitter where I talk about storage technologies (@nigelpoulton)

I am also available for hire as a freelance consultant.