Port Oversubscription

Wednesday 9 May 2007

Following on from snig's post, I promised a blog on FC switch oversubscription. It's been on my list for some time and I've discussed it before; however, it's also a subject I've discussed with clients from a financial perspective, and here's why: most people look at the cost of a fibre channel switch on a per-port basis, regardless of the underlying feature/functionality of that port.
Not that long ago, switches from companies such as McDATA (remember them? :-) ) provided a fully non-blocking architecture. That is, they allowed the full 2Gb/s for any and all ports, point to point. As we moved to 4Gb/s, it was clear that Cisco, McDATA and Brocade couldn't manage (or didn't want) to deliver full port speed as the port density of blades increased. I suspect there would be issues with ASIC cost and with fitting the hardware onto blades and cooling it (although Brocade have just about managed it).
For example, on Generation 2 Cisco 9513 switches, the bandwidth per port module is an aggregate 48Gb/s. This is regardless of the port count (12, 24 or 48), so although a 48-port blade can (theoretically) have all ports set to 4Gb/s, that is 192Gb/s of demand against 48Gb/s of capacity: a 4:1 oversubscription, leaving each port an average of only 1Gb/s of bandwidth.
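To make the arithmetic concrete, here's a quick back-of-the-envelope sketch (plain Python, using the 48Gb/s aggregate figure from above):

```python
# Oversubscription sums for a Gen 2 Cisco 9513 port module.
# Figure from the post: 48Gb/s aggregate per blade, ports rated at 4Gb/s.

def oversubscription(ports, port_speed_gbps, blade_bw_gbps=48.0):
    demand = ports * port_speed_gbps      # total bandwidth the ports could drive
    ratio = demand / blade_bw_gbps        # oversubscription ratio
    avg_per_port = blade_bw_gbps / ports  # average bandwidth per port
    return ratio, avg_per_port

for ports in (12, 24, 48):
    ratio, avg = oversubscription(ports, 4.0)
    print(f"{ports}-port blade at 4Gb/s: {ratio:g}:1 oversubscribed, "
          f"{avg:g}Gb/s average per port")
```

The 12-port blade comes out 1:1 (genuinely non-blocking), the 24-port blade 2:1, and the 48-port blade 4:1.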
However, the configuration is more complex: ports are grouped into port groups, four groups per blade of 12Gb/s each, putting even more restriction on the ability to use the available bandwidth across all ports. Within a port group, ports can either have dedicated bandwidth or share it. In a port group of 12 ports, set three to a dedicated 4Gb/s and the remaining nine are (literally) unusable. Whilst working at a recent client, we challenged this restriction with Cisco. As a consequence, SAN-OS 3.1(1) allows you to disable all the restrictions, so you can take the risk and set all ports to 4Gb/s; after that you're on your own.
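Here's a sketch of how the port-group accounting plays out. To be clear, this is my own simplified model of the arithmetic, not Cisco's actual SAN-OS allocation logic:

```python
# Simplified model of one 12-port, 12Gb/s port group on a 48-port blade.
# My own illustration of the bandwidth accounting, not Cisco's code.

GROUP_BW_GBPS = 12.0
GROUP_PORTS = 12

def shared_pool(dedicated_speeds_gbps):
    """Bandwidth left over for the remaining shared-rate ports."""
    used = sum(dedicated_speeds_gbps)
    if used > GROUP_BW_GBPS:
        raise ValueError("dedicated ports exceed the group's bandwidth")
    return GROUP_BW_GBPS - used

# Dedicate three ports at 4Gb/s each...
remaining = shared_pool([4.0, 4.0, 4.0])
print(f"Shared pool left for the other {GROUP_PORTS - 3} ports: {remaining}Gb/s")
# -> 0.0Gb/s: the other nine ports are, literally, unusable.
```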
How much should you pay for these ports? What are they actually worth? Should a port on a 48-port line card cost the same as a port on a 24-port line card? Or should ports be priced on bandwidth? Some customers choose to use 24-port line cards for storage connections, or even the 4-port 10Gb/s cards for ISLs. I think both are pointless. Cisco 9513s are big beasts; they eat a lot of power and need a lot of cooling. Why wouldn't you want to cram as many ports into a chassis as possible?
The answer is to look at a new model for port allocation. Move away from the concepts of core-edge and edge-core-edge, and mix storage ports and host ports on the same switch and, where possible, within the same port group. This keeps traffic local and minimises the impact of having to go off-blade, off-switch or even out of the port group.
So, how much should you pay for these ports? I'd prefer to work out a price per Gb/s of usable bandwidth. From the prices I've seen, on that measure Brocade works out way cheaper than Cisco.
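A sketch of the metric I have in mind is below. The prices and the second blade's bandwidth figure are purely hypothetical placeholders to show how the ranking flips; plug in real quotes:

```python
# Compare line cards on price per Gb/s of usable bandwidth rather than
# price per port. All figures are hypothetical, for illustration only.

line_cards = [
    # (description, port count, usable aggregate Gb/s, price)
    ("Vendor A 48-port blade (48Gb/s aggregate)",  48,  48.0, 30_000),
    ("Vendor B 48-port blade (192Gb/s aggregate)", 48, 192.0, 45_000),
]

for name, ports, usable_gbps, price in line_cards:
    per_port = price / ports
    per_gbps = price / usable_gbps
    print(f"{name}: {per_port:,.0f}/port, {per_gbps:,.0f}/Gb/s")
```

On a per-port basis Vendor B looks 50% dearer; per Gb/s of usable bandwidth it's well under half the price. Same kit, very different conclusion.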
1 comment:
Hi Chris,
I agree with the idea of keeping storage and hosts as close together as possible: on the same port group if possible, but at least on the same switch. In fact, I mentioned it in my latest post over on RM.
The biggest problem with this that I see, though, is how dynamic most environments are these days. And it's just so much easier to rezone, even if your traffic will cross an ISL or two, than it is to recable (especially if the guy who does the cabling is a bit slapdash).