Rack Area Networking: IOV

The following is reposted from my new blog site http://blog.nigelpoulton.com  As a result, comments are disabled on this site, but feel free to visit my new site and leave a comment. Thanks to Snig for allowing me to post here for a short transition period…

One of the key technologies or principles in Rack Area Networking (RAN) is I/O Virtualisation (IOV).  In fact, IOV is about to rock the world of physical server and Hypervisor design.

If you deploy VMware, Hyper-V, XenServer etc., or have anything to do with the so-called Virtual Data Centre, then you need to be all over IOV.

This is the second post in my mini-series on RAN and IOV.  In this particular post I'm going to talk about the concept of virtual adapters – Virtual NICs and Virtual HBAs.

The vNIC and the vHBA

The concept is simple: Take a physical NIC, perform some magic on it, and make it appear to the OS as multiple NICs.  Same goes for HBAs.

The diagram below shows a single physical NIC carved into 4 virtual NICs (vNICs) and a single HBA carved into 4 virtual HBAs (vHBAs).

IOV-1

The benefits of such technologies should be obvious – higher utilisation, a requirement for fewer physical NICs, fewer cables, and fewer edge switch ports – just to name a few.

Another added benefit is flexibility.  Assume you have a 10Gbps NIC in a server which you have carved into 2 vNICs.  That server now has a requirement for an additional NIC.  You no longer have to power down the server, open it up, install a new physical card and then wait for new cables to be laid.  Instead, you can simply create a new vNIC from the already installed physical NIC and have it dynamically discovered and initialised by the OS.  All done in software – no cracking the server open and no waiting for cables!  Talk about reducing the time taken to implement a change, not to mention reducing the risk (there is always added risk when opening up servers and messing around in the floor void…).
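
To make that concrete, here is a minimal sketch of carving vNICs in software on a Linux host, using the SR-IOV sysfs interface (SR-IOV is covered below, but it is one common mechanism for this kind of carving).  The interface name eth0 is a placeholder, the adapter and driver must support it, and the script needs root:

```python
from pathlib import Path

IFACE = "eth0"  # hypothetical interface name -- substitute your own NIC
DEV = Path(f"/sys/class/net/{IFACE}/device")

def create_vnics(count: int) -> None:
    """Ask the NIC's driver to spawn `count` virtual functions (vNICs)."""
    total = int((DEV / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"{IFACE} supports at most {total} VFs")
    # Most drivers insist the VF count is reset to 0 before it can be changed.
    (DEV / "sriov_numvfs").write_text("0")
    (DEV / "sriov_numvfs").write_text(str(count))

if __name__ == "__main__":
    create_vnics(2)  # the OS discovers two new NICs; no screwdriver required
```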

The CNA

In the above diagram we labelled the vNIC solution as Good.  If we swap out that IOV capable NIC and replace it with a CNA (Converged Network Adapter) that can act as both a NIC and an HBA, we suddenly have the ability to carve vNICs and vHBAs from a single physical adapter.  The diagram below has been expanded to include a CNA-based solution, labelled as “Better”.

IOV-2

NOTE: I should point out that in most IOV solutions the bulk of the legwork is done in hardware.  The vNIC and vHBA devices are created in hardware, and most modern CNAs also provide protocol offloads.

Single Root

The above approach – creating multiple virtual adapters from a single physical adapter located within a single server – falls under the category of Single Root (SR).  Single Root is another way of saying single server (a single PCIe root complex).  Single Root approaches are limited to presenting their virtual adapters to a single PCIe root complex – that is, to operating systems executing within a single physical server.

While talking about Single Root technologies I need to mention SR-IOV.  SR-IOV is a semi-open PCI-SIG standard for SR style I/O Virtualisation.  As with all standards, it will take time to take off and become widely deployed, and it is open to implementation interpretation (some vendors may implement SR-IOV slightly differently to others).

True PCI-SIG SR-IOV requires the following components to be SR-IOV aware in order to support it -

  • BIOS
  • OS/Hypervisor
  • Physical I/O Adapter
  • Driver
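
You can at least verify the adapter and driver side of that list yourself.  A minimal sketch, assuming a Linux host – it only proves the adapter exposes the SR-IOV capability; BIOS and OS/Hypervisor support still need checking separately:

```python
from pathlib import Path

def sriov_capable_devices() -> dict:
    """Map PCI address -> max VFs for every adapter exposing SR-IOV."""
    capable = {}
    for dev in Path("/sys/bus/pci/devices").iterdir():
        totalvfs = dev / "sriov_totalvfs"   # present only on SR-IOV capable devices
        if totalvfs.exists():
            capable[dev.name] = int(totalvfs.read_text())
    return capable

if __name__ == "__main__":
    for addr, max_vfs in sorted(sriov_capable_devices().items()):
        print(f"{addr}: up to {max_vfs} virtual functions")
```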

Changes to the above components are required because SR-IOV changes the architecture and model for PCIe adapters.  It introduces the concept of Virtual Functions (VFs), which look and feel like a normal physical I/O adapter.  However, VFs are lightweight versions of a physical I/O adapter and inherit some configuration options from their parent physical adapter (the Physical Function, or PF).  As a result, vNICs and vHBAs are enumerated on a server's PCIe device tree as VFs, and the BIOS, OS and driver must understand this.
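
You can see that enumeration for yourself: on Linux, each VF appears as its own entry under /sys/bus/pci/devices, carrying a physfn symlink back to its parent PF.  A minimal sketch:

```python
from pathlib import Path

PCI = Path("/sys/bus/pci/devices")

def list_vfs() -> None:
    """Print every Virtual Function and the parent PF it inherits from."""
    for dev in sorted(PCI.iterdir()):
        physfn = dev / "physfn"   # only VFs carry this symlink
        if physfn.is_symlink():
            print(f"VF {dev.name} -> parent PF {physfn.resolve().name}")

if __name__ == "__main__":
    list_vfs()
```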

Citrix recently demoed XenServer working with SR-IOV NICs and Intel VT-d technology.

While SR-IOV is a great technology and is destined to play a role in driving IOV forward, it is very early days, and many of the currently shipping IOV technologies use proprietary techniques rather than PCI-SIG SR-IOV.  One example is the Virtual Fabric for IBM BladeCenter solution.

NOTE: While the Virtual Fabric for IBM BladeCenter solution is not currently SR-IOV, the chip that powers the Emulex CNA at the heart of the solution is SR-IOV capable… it is just waiting for the other components (BIOS, OS, drivers…) to catch up.

Good, better, BEST!

So far we have talked about SR style solutions where the vNIC and vHBA devices are only available to Operating Systems executing on the same physical server that the adapter is installed in.  While these technologies are all good and a step in the right direction, there exists a superior solution – Multi Root (MR).

Taking IOV to the next step involves removing the physical I/O adapters from the physical server chassis and re-housing them in an external device that I am generically referring to as the I/O Aggregator.

The diagram below has been expanded to include an example I/O Aggregator approach.

IOV-3

Such technologies can be referred to as Multi Root (MR).

There are already Multi Root I/O Aggregator style solutions shipping from the likes of NextIO, VirtenSys and Xsigo - all are delivering next generation IOV benefits today!

Of the currently available solutions, these MR technologies offer the greatest levels of virtualisation and flexibility, and for me they represent the future.  By removing the I/O adapter from within the physical confines of the server chassis, you enable any vNIC or vHBA to be assigned to any server.  Your physical server becomes entirely stateless from an I/O perspective!
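
To illustrate what “any vNIC to any server” buys you, here is a toy model of an I/O Aggregator.  It is purely illustrative – this is not any vendor's API, and every name in it is made up:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualAdapter:
    kind: str                     # "vNIC" or "vHBA"
    ident: str
    server: Optional[str] = None  # None means unassigned, sitting in the pool

@dataclass
class IOAggregator:
    """Houses the physical adapters; servers just consume virtual ones."""
    pool: list = field(default_factory=list)

    def carve(self, kind: str, ident: str) -> VirtualAdapter:
        va = VirtualAdapter(kind, ident)
        self.pool.append(va)
        return va

    def assign(self, ident: str, server: str) -> None:
        """Bind a virtual adapter to a server, or re-bind it later;
        the server itself holds no I/O state."""
        for va in self.pool:
            if va.ident == ident:
                va.server = server
                return
        raise KeyError(ident)

agg = IOAggregator()
agg.carve("vNIC", "vnic-01")
agg.assign("vnic-01", "server-A")
agg.assign("vnic-01", "server-B")  # server-A died? Its I/O identity just moves.
```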

I used to be excited about LOM style CNA implementations…. until I discovered I/O Aggregators.

NOTE: PCI-SIG also have a specification for MR-IOV.  However, I do not know of anybody deploying it at the moment.

Moving Home

Opinion time here, but the way I see it, the I/O adapter is folding its underwear and packing its bags, ready to ship out of the server chassis into a bigger, better and more comfortable new home – the I/O Aggregator.

PCIe adapters in servers… don’t be so yesterday ;-)


Nigel

You can follow me on Twitter. I’m @nigelpoulton and I only talk about technology.

I am also available as a consultant on any of the topics I write on.