Monday 13 August 2007

8-gig Fibre Channel

The Register reported last week that Emulex and QLogic have announced the imminent availability of 8Gb/s Fibre Channel products.

I have to ask the question... is there any point?

Firstly, according to the article, the switch vendors aren't clarifying their position (both already have 10Gb/s technology). In addition, if a host can push data out at 8Gb/s, it will quickly overrun most storage arrays.
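
To put rough numbers on that (my generic figures, not anything from the article): Fibre Channel's nominal speeds deliver roughly 100MB/s of payload per Gb/s once encoding overhead is taken off, so a single flat-out 8Gb/s HBA could keep a couple of 4Gb/s array front-end ports completely busy on its own. A quick sketch:

    # Back-of-the-envelope only - assumed, generic figures.
    # The 100MB/s-per-Gb/s rule of thumb allows for 8b/10b encoding overhead.
    def fc_payload_mb_s(nominal_gbps):
        return nominal_gbps * 100

    host_hba = fc_payload_mb_s(8)     # ~800MB/s from one 8Gb/s HBA
    array_port = fc_payload_mb_s(4)   # ~400MB/s per 4Gb/s array port
    print(f"One busy 8Gb/s host could saturate {host_hba // array_port} array ports")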

How many people out there are using 10Gb/s for anything other than ISLs? (In fact, I don't think 10Gb/s HBAs even exist - please prove me wrong :-) ). How many people are even using them for ISLs? If you look at Cisco's technology, you are still limited to the speed of the line card (about 48Gb/s) even with 10Gb/s ports, so apart from needing a few fewer ISL links, what has it actually bought you?
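
The line card point is simple arithmetic; the sketch below just divides an assumed ~48Gb/s of card bandwidth by the port speed, so treat the figures as illustrative rather than a specific Cisco spec.

    # If the card's aggregate bandwidth is fixed, faster ports simply mean
    # fewer of them can run flat out at once (illustrative figures only).
    CARD_BANDWIDTH_GBPS = 48

    for port_speed in (2, 4, 8, 10):
        full_rate_ports = CARD_BANDWIDTH_GBPS // port_speed
        print(f"{port_speed}Gb/s ports: {full_rate_ports} per card at full rate")
    # 10Gb/s ISLs save cables and ports, but the bandwidth per slot
    # (and per chassis) stays exactly where it was.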

I still think at this stage we need to focus on using the bandwidth we already have: sensible layout and design, and optimising fabric capacity.

One final point: presumably 8Gb/s devices will run hotter than existing 4Gb/s ones - even if we can't or don't use that extra capacity?

4 comments:

Stephen said...

Chris,

I found your observation about the 8 Gbps interesting. Brocade discussed this at their last Road Show and I asked why this was being done. Well, it seems they believe that virtual hosts such as VMware will start needing that sort of speed. I seriously doubted it, but hey, what do I know about the near future? We are using virtual hosts with about 10 to 16 virtual servers per host (8 CPUs with 16 GB RAM). From what I have seen from Windows, anything more than 10 MB/s is exceptional. Linux could well push this a lot more. So even 16 of these virtual servers running flat out barely gets past 160 MB/s, which is below 2 Gbps. We are running 4 Gbps ports on our storage. The numbers just don't add up yet. I seriously doubt that VMware could manage too many hosts unless they started using high-end systems like the Itaniums. But would you spend so much money just to run VMware?
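
Working that arithmetic through (the 10 MB/s per Windows guest is the only assumed figure):

    # 16 guests at an assumed 10MB/s each, against 4Gbps storage ports.
    vms_per_host = 16
    mb_s_per_vm = 10
    host_total = vms_per_host * mb_s_per_vm   # 160MB/s
    fc_gbps_needed = host_total / 100         # ~100MB/s of payload per Gbps of FC
    print(f"{host_total}MB/s needs only ~{fc_gbps_needed:.1f}Gbps of Fibre Channel")
    # 1.6Gbps - well inside a 4Gbps port, never mind 8Gbps.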

I have been using HSSM to take snapshots of our switches. We are using Brocade 48000s and, would you believe, with 100 Windows hosts we are averaging less than 400 MB/s of throughput across the whole switch during the day?

So, to the 10 Gbps ports. We are looking at moving data centres in the next year. At the same time, we are looking at new switches and upgrading the ISLs between the data centres. We currently have 4 Gbps per fabric, so we asked for 8 Gbps per fabric. I said, heck, just give us 1 x 10 Gbps card in the Cisco ONS. It saves ports, infrastructure-wise, over the next few years. It also saves trunking licences on the Brocades, which I seriously doubt we will be using. I am begging to get a whole bunch of MDS 9513s to replace the aging Brocade infrastructure.

So yes, 10 Gbps for ISLs is good, but it sort of adds a single point of failure. 4 x 2 Gbps across separate cards spreads the load and the risk of a card failure. 8 Gbps for everyday work... well, the jury is still out, especially where I work.
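
To put that trade-off in numbers (link counts as per the two options above, and assuming a failure takes out one link or the card behind it):

    # Compare one big ISL against several small ones spread across cards.
    options = {
        "1 x 10Gbps": [10],
        "4 x 2Gbps": [2, 2, 2, 2],
    }
    for name, links in options.items():
        total = sum(links)
        after_failure = total - max(links)   # lose the largest single link/card
        print(f"{name}: {total}Gbps total, {after_failure}Gbps left after one failure")
    # 1 x 10Gbps drops to nothing on a single failure; 4 x 2Gbps keeps 6Gbps.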

Stephen

Chris M Evans said...

I doubt VMware will need that level of FC connectivity. All the "advice" I've seen on VMware design recommends not moving high-I/O hosts onto a VM cluster. There's also the question of how big or small LUNs should be, which compromises and reduces I/O throughput (and impacts other hosts, as the hypervisor processes all those I/Os). Anyway, what's wrong with multiple HBAs, which also provide extra redundancy...? :-)

DCed said...

Chris, Stephen,

Why move to 10G? Well, Cisco has a feature to support 10G FC using 10G Ethernet optics (nothing to do with Data Center Ethernet).

Where is the benefit? You consolidate connection types (all Ethernet, all 10G) with your carrier/service provider... so more pressure on the price.


Ced

Chris M Evans said...

Cedric, I think my point was that Cisco would still only provide you 48Gb/s of bandwidth per line card regardless of the connection type, so in the data centre there's not a lot of benefit. I would agree, though, that if you're putting in lots of bandwidth between sites using DWDM/10GbE and paying by circuit, there is a benefit to using faster links.
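
As a rough illustration of the pay-by-circuit point (using the 8Gb/s-per-fabric figure from Stephen's comment; circuit speeds assumed, no real tariffs involved):

    import math

    # How many circuits you rent for a given inter-site bandwidth when the
    # carrier bills per circuit (illustrative speeds only).
    required_gbps = 8
    for circuit_name, circuit_gbps in (("4Gb/s FC circuit", 4), ("10GbE circuit", 10)):
        circuits = math.ceil(required_gbps / circuit_gbps)
        print(f"{required_gbps}Gb/s over {circuit_name}s: {circuits} circuit(s)")
    # Fewer, faster circuits win on inter-site cost; inside the data centre
    # the line-card limit above still caps what you gain.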