Brocade Switch Over-Subscription

Stephen2615, a frequent poster and commenter on the Ruptured Monkey sites, posted over in the forums this morning that he was a little upset by a statement that a Brocade SAN engineer (Chip Cooper) made during an interview with SearchStorage back when the McData and Brocade merger was taking place.

What's your strategy for oversubscription? Cisco has made so many inroads here.

[It's an old debate. Cisco says oversubscription of ports saves users money by letting them use less bandwidth for applications that don't need full bandwidth. Brocade does not oversubscribe ports, saying it causes contention.]

Cooper: Our 32-port card is running at 4 Gbit completely nonoversubscribed … we can help you with the correct fan-in density … Let's face it, we've had switches that have been running longer than Cisco's been in the Fibre Channel business.

It turns out that the Brocade 48000 technical guide describes the oversubscription that actually does take place on the 48000's 32-port and 48-port blades.

32-Port Blade Oversubscription Description:

The 32-port blade is designed with a 16:8 subscription ratio at 4 Gbit/sec for non-local traffic, and a 1:1 ratio at 2 Gbit/sec for any traffic pattern. If some or all of the attached servers and storage devices run at 2 Gbit/sec, or if I/O profiles are “bursty,” the 32-port blade typically provides the same performance as the 16-port blade.
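
To make the guide's arithmetic concrete, here's a minimal sketch in Python. The 16-port group size and the 8 x 4 Gbit/sec of backplane bandwidth come straight from the 16:8 figure above; everything else is just division.

```python
def oversubscription(ports, port_speed_gbps, backplane_gbps):
    """Ratio of worst-case front-end demand to backplane capacity."""
    return (ports * port_speed_gbps) / backplane_gbps

# 32-port blade: each 16-port group shares the equivalent of
# 8 x 4 Gbit/sec of backplane bandwidth for non-local traffic.
BACKPLANE_GBPS = 8 * 4  # 32 Gbit/sec per port group

print(oversubscription(16, 4, BACKPLANE_GBPS))  # 2.0 -> 2:1 at 4 Gbit/sec
print(oversubscription(16, 2, BACKPLANE_GBPS))  # 1.0 -> 1:1 at 2 Gbit/sec
```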

48-Port Blade Oversubscription Description:

At 24:8, the 48-port blade has a higher backplane over-subscription ratio but also has larger port groups to take advantage of locality. The backplane connectivity of this blade is identical to the 32-port blade. The only difference is that, rather than just 16 ports per ASIC, the 48-port blade exposes 24 outward-facing ports (96 Gbit/sec, or 192 Gbit/sec full duplex, of local switching per ASIC).


This blade is especially useful for high-density SAN deployments where:

  • Large numbers of servers need to be connected to the director
  • Some or all hosts are running below line rate much of the time
  • Potential localization of most traffic flows is achievable
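
That last bullet is the one that matters: traffic between two ports on the same ASIC is switched locally and never touches the oversubscribed backplane. Here's a minimal sketch of the idea in Python; the port numbering (two groups of 24 consecutive ports) is my assumption for illustration, not Brocade's actual mapping.

```python
PORTS_PER_ASIC = 24  # outward-facing ports per ASIC on the 48-port blade

def same_asic(port_a, port_b):
    """True if both ports sit on the same ASIC, so the flow is
    switched locally and skips the backplane entirely."""
    return port_a // PORTS_PER_ASIC == port_b // PORTS_PER_ASIC

def backplane_demand_gbps(flows, speed_gbps=4):
    """Total bandwidth of flows forced across the backplane.
    flows is a list of (ingress_port, egress_port) pairs."""
    return sum(speed_gbps for a, b in flows if not same_asic(a, b))

# A host on port 3 talking to storage on port 20 stays local;
# the same host talking to port 30 crosses ASICs and counts
# against the 24:8 backplane ratio.
print(backplane_demand_gbps([(3, 20), (3, 30)]))  # 4
```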

So the question is, does all this really matter?  I guess in a high-performance environment where there is a potential for all the servers to run at a full 4 Gb on each port, it would.  But in the real world, where users can barely push 2 Gb speeds, I don't think end users are going to care much whether Cisco or Brocade port cards are oversubscribed.  I have seen only a few high-end servers push a full 2 Gb of bandwidth; more often, the amount of I/O in the SAN is the restricting factor rather than the actual bandwidth of the switch.  Disk subsystems can only do so much I/O before their processors are maxed out, and that limit usually hits well before switch bandwidth becomes the bottleneck.  Only rarely have I seen Tru64 clusters or Oracle databases use the full bandwidth in a switch, but I have seen it happen (it all depends on the size of your I/O, right?  So size really does matter. ;)).
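
To put rough numbers on that, the back-of-the-envelope arithmetic is just IOPS times I/O size. The 100,000 IOPS ceiling and the two I/O sizes below are made-up illustrative figures, not measurements of any particular array.

```python
def throughput_gbps(iops, io_size_bytes):
    """Bandwidth consumed by a given IOPS rate at a given I/O size."""
    return iops * io_size_bytes * 8 / 1e9

MAX_IOPS = 100_000  # assumed processor-bound ceiling of a disk subsystem

for size in (4096, 65536):  # 4 KB vs 64 KB I/Os
    print(f"{size // 1024:>2} KB: {throughput_gbps(MAX_IOPS, size):5.1f} Gbit/sec")

#  4 KB:   3.3 Gbit/sec -> won't even fill a single 4 Gb port
# 64 KB:  52.4 Gbit/sec -> easily saturates several ports
```

With small I/Os, the array's processors give out long before any switch link fills up; only large, streaming I/Os get anywhere near line rate.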

For those high-bandwidth needs, I would recommend the following configuration scenario if you want to use a Brocade 48000 in your high-performance environments (a quick fan-in check follows the list):

  • Use a 16-port card for your high-throughput ISL connections between your core and edges.
  • Use a 32-port card for your disk and tape connections, and keep them localized within the same ASIC as much as possible.
  • Use a 48-port card for your server connections.
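
Here's a minimal fan-in check for that layout, a sketch under assumed numbers (one fully populated 48-port edge blade of servers and eight 4 Gbit/sec ISLs back to the core; neither count comes from Brocade, they're just for illustration):

```python
# Edge-to-core fan-in: worst-case server demand vs. ISL capacity.
server_ports = 48   # one fully populated 48-port blade
server_gbps = 4     # worst case: every server at line rate
isl_ports = 8       # assumed: eight ISLs on the 16-port blade
isl_gbps = 4

fan_in = (server_ports * server_gbps) / (isl_ports * isl_gbps)
print(f"edge-to-core fan-in: {fan_in:.0f}:1")  # 6:1 with these numbers
```

Whether 6:1 is acceptable depends entirely on how bursty your hosts are, which is exactly the "correct fan-in density" question Cooper alludes to in the interview.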

Following that, you should be able to get the most out of your switch, with the best overall connection-to-performance ratio.  If you don't really care about maximizing the number of ports in a chassis, then just buy 16-port cards and forget about it.  :)

Snig