Tuesday 9 May 2006

The Green Datacentre

I've been thinking about green issues this week after we received our "bottle box" from the local council to encourage us to recycle glass bottles and jars. In particular, my thoughts were drawn to the environmental friendliness of datacentres, or not, as the case may be.
Datacentres are getting more demanding in their power and cooling requirements. It's now possible for more space to be set aside for plant (the provision of electricity and air conditioning) than for actual datacentre space, and the use of space has to be carefully planned to ensure high-demand equipment can be catered for.

Take, for example, fabric switches, or more specifically directors, as they are the most large-scale and environmentally hungry. Let's start with the new Cisco beast, the MDS 9513. In a full configuration this has 528 ports in a 14U frame, but draws 6000 watts of power and outputs the same amount of heat requiring cooling. That's 11.36W per port, or 11.36W per 1Gb/s of bandwidth (in a full configuration the Cisco provides 48Gb/s per 48-port card, i.e. 1Gb/s per port).
Compare that with McDATA's i10K. This offers up to 256 ports, again in 14U of space, and requires 2500W of power. That equates to 9.77W per port, but as the i10K offers a full 2Gb/s of bandwidth on each of those ports, it works out at 4.88W per 1Gb/s of bandwidth - more than twice as good as Cisco.
Finally, Brocade. The Silkworm 48000 offers up to 256 ports in a 14U chassis, all at up to 4Gb/s of bandwidth. Power demand is quoted in VA (volt-amps), and assuming 1VA = 1W (OK, that's a big assumption - it implies a power factor of 100%), the maximum power requirement is 750W for a full chassis, or a remarkable 2.93W per port and just 0.73W per 1Gb/s of bandwidth.
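If you want to play with the arithmetic yourself, here's a quick Python sketch that reproduces the per-port and per-Gb/s figures above. The port counts, power draws and per-port bandwidths are just the quoted numbers as I've read them (and the Brocade entry treats 750VA as 750W), so treat it as a back-of-the-envelope check rather than anything from a datasheet.

# Quoted figures: ports, power draw in watts, bandwidth per port in Gb/s.
directors = {
    "Cisco MDS 9513":         (528, 6000, 1),  # 48Gb/s per 48-port card = 1Gb/s per port
    "McDATA i10K":            (256, 2500, 2),
    "Brocade Silkworm 48000": (256,  750, 4),  # 750VA treated as 750W (power factor of 1)
}

for name, (ports, watts, gbps_per_port) in directors.items():
    per_port = watts / ports
    per_gbps = per_port / gbps_per_port
    print(f"{name:24} {per_port:5.2f} W/port  {per_gbps:5.2f} W per Gb/s")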
Brocade seems to offer a far better environmental specification than the other two manufacturers, and that translates into more than just power consumption per port. Power consumption and, more importantly, heat dissipation (and so cooling) per rack have a direct impact on how many directors can be placed in a single cabinet, and therefore on the amount of floorspace required to house the storage equipment. All three vendors could rack up three units in a single 42U cabinet, but could you power and cool that much equipment? With Brocade, probably; with Cisco, I doubt it (and in any case you probably couldn't cable the Cisco directors - imagine nearly 1600 connections from a single cabinet!). What that means is either a lot of empty space per rack or other equipment in the same rack that doesn't need anywhere near as much power.
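To put that in per-rack terms, here's the same sort of rough sketch for three 14U chassis in a 42U cabinet, again using the quoted power figures and assuming the heat to be removed roughly equals the power drawn:

CHASSIS_PER_RACK = 42 // 14   # three 14U directors per 42U cabinet

directors = {
    "Cisco MDS 9513":         (528, 6000),
    "McDATA i10K":            (256, 2500),
    "Brocade Silkworm 48000": (256,  750),
}

for name, (ports, watts) in directors.items():
    power = CHASSIS_PER_RACK * watts    # watts drawn (and roughly the heat to cool)
    cables = CHASSIS_PER_RACK * ports
    print(f"{name:24} {power:6d} W per rack, {cables:5d} cables per rack")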
So what's my point? Well, clearly there's a price to pay for storage connectivity over and above the per-port cost. Datacentre real estate is expensive and you want to make the best use of it. What I'm saying is that the technology choice may not be driven purely by features and hardware cost, but by the overall TCO of housing and powering storage networking equipment (and obviously the feature set the products offer). Incidentally, I've also done a similar calculation on enterprise storage frames from IBM, EMC and HDS. I'll post those another time.
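In the meantime, to put a rough figure on the power side of that TCO, here's one last sketch per chassis. The electricity price is a purely hypothetical number (substitute your own tariff), and I've simply doubled the power draw to cover cooling, in line with the heat figures above:

PRICE_PER_KWH  = 0.10   # hypothetical tariff - substitute your own
COOLING_FACTOR = 2.0    # double the load to cover cooling (heat out = power in)
HOURS_PER_YEAR = 24 * 365

for name, watts in [("Cisco MDS 9513", 6000),
                    ("McDATA i10K", 2500),
                    ("Brocade Silkworm 48000", 750)]:
    kwh_per_year = watts * COOLING_FACTOR * HOURS_PER_YEAR / 1000
    cost = kwh_per_year * PRICE_PER_KWH
    print(f"{name:24} ~{kwh_per_year:9,.0f} kWh/year, ~{cost:8,.0f} per year at {PRICE_PER_KWH}/kWh")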