I've been working on getting my "home SAN" into a usable configuration over the last few weeks. One hassle has been VMware (and I won't even mention Hyper-V again) and its support for Fibre Channel.
Thinking about it logically, I suppose VMware can't support every type of HBA out there and the line has to be drawn somewhere, but that meant my QLogic and JNI cards were no use to me. Hurrah for eBay, as I was able to pick up Emulex LP9002L HBA cards for £10 each! I remember when these cards retailed at £600 or more.
Now I have two VMware instances accessing Clariion storage through McDATA switches and it all works perfectly.
That leads me on to a couple of thoughts. How many thousands of HBA cards out there have been ditched as servers are upgraded to the latest models? Most of them are perfectly serviceable devices that would continue to give years of useful service, but "progress" to 4/8Gb Fibre Channel and FCoE dictates we must ditch these old devices and move on.
Why? It's not as if they cause a "green" issue - they're not power-hungry and they don't take up much space. I would also challenge anyone who claims they need more than 2Gb/s of bandwidth on all but a small subset of their servers. (Just for the record, I do see the case for 4/8Gb/s in large virtual server farms and the like, as you've concentrated the I/O nicely and optimised resources.)
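To put that 2Gb/s figure in perspective, here's a back-of-envelope sketch of what a 2GFC link actually delivers. The numbers are the nominal link parameters (2.125 Gbaud signalling, 8b/10b encoding), not measurements from my setup:

```python
# Rough usable bandwidth of a 2Gb Fibre Channel link.
# Nominal figures only; real throughput also loses a little to
# frame headers and CRC, hence the commonly quoted ~200 MB/s.

LINE_RATE_BAUD = 2.125e9       # 2GFC nominal signalling rate
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line encoding overhead

payload_bits_per_s = LINE_RATE_BAUD * ENCODING_EFFICIENCY
payload_mb_per_s = payload_bits_per_s / 8 / 1e6  # decimal megabytes

print(f"2GFC usable bandwidth: ~{payload_mb_per_s:.0f} MB/s per direction")
```

That's roughly 200MB/s in each direction - sustained, full duplex. Very few single-application servers ever drive that.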
So we need two things: (a) a central repository for returning old and unwanted HBAs, and (b) vendors to "open source" the code for their older HBA models, allowing enthusiast programmers to develop drivers for the latest O/S releases.
Reducing the amount of IT waste should be a key strategy for any company claiming to be green going forward.
Tuesday 21 October 2008
Who needs FCoE anyway?