FCoE: Cables and the likes....

Ewan Leith (@ewantoo) asked if I could whip something up re cables and infrastructure required for FCoE (DCE/CEE).  So here goes. 

I think there has been a lively discussion on FCoE over the last 24-48 hours and it would be good to keep it going… Before I start, I'd be interested in any feedback, updates, comments and the likes - I haven't had a chance to properly research this and my schedule for the next few days makes it difficult.

Anyway… in order to accommodate the enhancements and improvements that come as part of what I have been calling “Enhanced Ethernet” (DCE/CEE), as well as to facilitate the goal of consolidation to a single unified fabric, some physical infrastructure changes are obviously required.  In the next few paragraphs we will discuss some of them.

NOTE:  When I say Enhanced Ethernet I’m referring to 10GigE, lossless, low latency, ETS, congestion notification as seen in DCE and CEE…

First up, cables!

Unfortunately, our existing installed base of Cat5 and Cat6 unshielded twisted pair and the likes, for the most part, does not meet the demands of the unified fabric.  It doesn’t cut it when it comes to latency and Bit Error Rate etc.  In order to deploy and use Enhanced Ethernet, and therefore FCoE, we need to lay shiny new cables.  No problem though, that’s cheap and easy, right? 

<cough cough>

In order to keep with some of the major goals of convergence (driving down costs and power consumption), a cable and transceiver combination with low power demands and a good price point is required.  The predominant combination is a passive or active twinaxial copper cable with SFP+ transceivers - sometimes referred to as “SFP+ Copper”. 

Twinaxial cables are usually referred to as Twinax or twin-ax, and are named as such because they have a dual core.  The specification generally allows for cable runs of up to ~10 metres, although some kit combinations may support longer or shorter runs.  Twinax copper cables are typically ~6mm in diameter.



Twinax can transmit at 10Gbps full duplex (half duplex is not specified for 10GigE) over distances of up to 10 metres.  Although at first glance this is a relatively short distance, it is actually fairly well suited to Data Centre environments, which are typically short-range, high-speed networks (remember we are not running these cables to workstations).  Such run lengths are especially suited to runs within a single cabinet or between adjacent cabinets, such as from a server to an access layer switch mounted in the top of the rack. 
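To put those 10 metre runs in perspective, here’s a rough back-of-the-envelope calculation (in Python, purely for illustration - the frame size and signal velocity below are my own assumptions, not figures from any spec sheet) comparing the time it takes to serialise a frame onto a 10Gbps link against the time the signal spends travelling down the cable:

# Rough latency figures for a 10m twinax run at 10Gbps.
# Frame size and velocity factor are assumed values, for illustration only.

LINE_RATE_BPS = 10e9          # 10GigE line rate
FRAME_BYTES   = 2200          # assumed ~2.2KB FCoE "baby jumbo" frame
CABLE_M       = 10            # max twinax run discussed above
SIGNAL_MPS    = 0.7 * 3e8     # assumed ~0.7c signal velocity in copper

serialisation = (FRAME_BYTES * 8) / LINE_RATE_BPS   # ~1.76 microseconds
propagation   = CABLE_M / SIGNAL_MPS                # ~48 nanoseconds

print(f"Serialisation delay: {serialisation * 1e6:.2f} us")
print(f"Propagation delay:   {propagation * 1e9:.0f} ns")

The point being that at these distances the cable itself contributes next to nothing to latency - the short reach of twinax is simply not the limiting factor inside a rack.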

If longer runs are required then fibre can be used but at a higher cost. 

Twinax copper has been rated with a bit error rate in the region of 10⁻¹⁸, making it ideal for FCoE. 

Bit error rate:  or BER for short, is the number of erroneous bits received divided by the total number of bits transmitted.
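To give a feel for what a BER in the region of 10⁻¹⁸ actually means in practice, here’s a quick sum (the assumption of a fully saturated link is mine, purely for illustration):

# How often would you expect a single bit error on a 10Gbps link
# running flat out with a BER of 10^-18? Illustration only.

BER      = 1e-18    # bit error rate quoted above
RATE_BPS = 10e9     # 10Gbps line rate

errors_per_second = BER * RATE_BPS         # 1e-08 errors per second
seconds_per_error = 1 / errors_per_second  # 1e+08 seconds

years_per_error = seconds_per_error / (3600 * 24 * 365)
print(f"Mean time between bit errors: {years_per_error:.1f} years")   # ~3.2 years

In other words, a saturated 10Gbps link at that error rate would on average see a single bit error every three years or so - which is why it is considered good enough to carry Fibre Channel traffic.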

SFP+, or to give its full name Small Form-factor Pluggable Plus, is an extension of the popular SFP standard seen in Fibre Channel SANs and legacy Gigabit Ethernet.  The design of SFP+ is relatively simple for a transceiver. 

In saying the design is simple, I refer to the fact that much of the signal processing circuitry and logic often found in the transceiver module (such as with XFP) is removed from the transceiver and relocated to the switch and CNA (Converged Network Adapter).  This makes SFP+ smaller and cheaper than the more complex XFP, allowing for higher density switches.

As can be seen in the two images below, SFP+ modules exist for both copper and glass and can transmit at 10Gbps.




SFP+ Optical transceiver




SFP+ copper transceiver with casing removed


Obviously, this transfer of logic (silicon) from the transceiver to the switch and CNA may merely shift the cost from the transceiver to the switch.  While this may be the case, it is generally agreed that such logic is better placed in the switch and CNA, and this usually allows for cheaper overall manufacturing costs. 


Backplanes

While we’re on the topic of cables, copper and glass…  as well as standards for carrying 10GigE over copper and fibre cables, the IEEE has also defined standards for backplane implementations.  One commonly implemented standard is 10GBASE-KR, sometimes referred to as 10Gbps Backplane Ethernet.  It is commonly seen in blade servers as well as routers and switches, and utilises a single lane running at 10Gbps.  It supports distances of up to 1m over copper traces on circuit boards, which is plenty for intra-chassis communications.


Other Infrastructure Requirements

At this point I suppose I should mention CNAs.  However, I can’t realistically talk about CNAs without talking about things like NPIV and SR-IOV, each of which is a topic and a half in and of itself.  So, for now I’ll skip over CNAs and say a quick word or two about FCoE capable switches.


FCoE capable Switches

A ton of new Ethernet standards, as well as a new ULP, new network adapters and new cables, inevitably leads to one thing… new switches.
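For anyone wondering what that “new ULP” actually looks like on the wire, the sketch below is a deliberately simplified illustration of FCoE encapsulation - an entire FC frame dropped inside an Ethernet frame carrying the FCoE Ethertype (0x8906).  The MAC addresses, SOF/EOF code points, padding sizes and frame sizes here are placeholders for illustration, not a faithful implementation of the spec:

# Simplified sketch of FCoE encapsulation: a complete FC frame carried
# inside an Ethernet frame with Ethertype 0x8906. Field layout is reduced
# to the basics and the values below are placeholders, not spec-accurate.
import struct

FCOE_ETHERTYPE = 0x8906   # the FCoE Ethertype

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                         sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a raw FC frame in a (simplified) FCoE Ethernet frame."""
    eth_header   = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header  = bytes(13) + bytes([sof])   # version/reserved bits, then SOF
    fcoe_trailer = bytes([eof]) + bytes(3)    # EOF, then reserved padding
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

frame = encapsulate_fc_frame(
    dst_mac=bytes.fromhex("0efc00010203"),    # placeholder switch MAC
    src_mac=bytes.fromhex("0efc00040506"),    # placeholder CNA MAC
    fc_frame=bytes(2140),                     # placeholder max-size FC frame
)
print(len(frame))   # well past 1500 bytes - hence the need for baby jumbo frames

The takeaway is that FCoE frames are bigger than classic Ethernet frames, so any switch in the path needs to support (baby) jumbo frames as well as the lossless behaviour discussed above.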

Fortunately, FCoE and the aforementioned enhancements are evolutionary.  By that I mean that implementing them in your Data Centre does not have to cause huge upheaval.  Some upheaval, yes, but there is no requirement for large scale rip-and-replace.  In fact FCoE and her attendant technologies will happily sit side-by-side with the likes of native Fibre Channel and 1Gbps Ethernet, and at many levels the adjacent technologies will not even bat an eyelid.


Starting at the Edge

In order to simplify and expedite the adoption of Enhanced Ethernet, and especially FCoE, some switch vendors provide edge switches at the access layer that allow companies to start deploying CNAs at the edge of the network and have them feed in to existing FC and Ethernet backbones. 

For example, a company may deploy a new blade farm fully equipped with FCoE capable 10GigE Converged Network Adapters.  These CNA ports can be connected to the network via edge switches that support GigE, 10GigE, FCoE and FC.  This allows the blades to connect via 10GigE Enhanced Ethernet and then branch out to the existing network core via the 1Gbps Ethernet ports and to the FC SAN via the native FC ports.

These switches tend to be 2U pizza-box style switches that are deployed within the server rack.  For some, they are a good place to start, especially in situations where ripping out your existing core and replacing it with shiny new 10GigE Enhanced Ethernet kit is not an option (i.e. just about everywhere).
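If you want to see why this top-of-rack approach is attractive, a bit of hypothetical cabling arithmetic makes the point - every number below is made up purely for illustration, so plug in your own:

# Hypothetical rack-level consolidation maths. All counts are assumptions
# chosen purely to illustrate the cabling reduction, not vendor figures.

servers_per_rack = 16

# Before convergence: separate LAN NICs and SAN HBAs in every server
gige_nics_per_server = 2
fc_hbas_per_server   = 2
cables_before = servers_per_rack * (gige_nics_per_server + fc_hbas_per_server)

# After convergence: a pair of 10GigE CNAs per server carrying both
cnas_per_server = 2
cables_after = servers_per_rack * cnas_per_server

print(f"Cables per rack before convergence: {cables_before}")   # 64
print(f"Cables per rack after convergence:  {cables_after}")    # 32

Halving the cable count per rack (along with the adapters, switch ports and power that go with it) is exactly the sort of saving driving the unified fabric push.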


Also at the Core

Some switch vendors also offer ultra-high performance, ultra-scalable 10GigE Enhanced Ethernet FCoE aware switches for the network core.  These are typically modular, blade based switches supporting multiple interface types and scaling to hundreds of ports.  Interface types include 10GigE Enhanced Ethernet, 1Gbps Ethernet, 4Gbps FC and 8Gbps FC.  These switches are the next generation of data centre switches and represent a huge move toward the virtual data centre and I/O consolidation.

As always, comments and thoughts welcome.

Nigel

You can follow me on Twitter @nigelpoulton - I only talk about storage and virtualisation.

I'm a freelance consultant and can be contacted at nigel at rupturedmonkey dot com