Friday 13 June 2008

FC Enhancements

A comment posted to my previous blog entry reminds me of a requirement I've had of Fibre Channel for some time. In the "Good Old Days" of my first working life as a mainframe systems programmer, I could very easily see a breakdown of response time against each storage device on an LPAR. Now, the passing years may have given me "rose-tinted spectacles" (or more accurately now, contact lenses) for that time, but I seem to remember that the reason I could see seek, disconnect and connect time was the design of the (then) MVS I/O subsystem. As each I/O (CCW) was processed, the hardware must have been adding a consistent timestamp to each part of the process: the I/O initiation, the connect, the disconnect and subsequent seek, and then the reconnect and data transfer to complete the I/O (if none of this makes sense, don't worry, it probably means you are under 40 years old and never wore sandals to work).

Nowadays, the I/O infrastructure is a different kettle of fish. Each part of the infrastructure (host, HBA, fabric, array) is provided by a different vendor and has no consistent time reference, so tracking the time taken to execute a storage "exchange" is very difficult. There is (as far as I am aware) nowhere within a Fibre Channel frame to track this response time at each stage of the journey from host to storage.

If we want the next generation of storage networks to scale, then without a doubt we need to be able to track the journey of the I/O at each stage and use this information to provide better I/O profiling.
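
To illustrate the idea (and to be clear, this is purely hypothetical - as I say above, there's nowhere in a Fibre Channel frame to carry it today), here's a rough sketch of how per-hop timestamps on an exchange could be turned into a per-stage latency breakdown. The hop names and the fabric-wide microsecond clock are invented for the example:

from dataclasses import dataclass

@dataclass
class HopTimestamp:
    hop: str    # e.g. "host_hba", "edge_switch", "array_port" - invented names
    t_us: int   # microseconds against an assumed fabric-wide synchronised clock

def stage_latencies(outbound, inbound):
    """Per-stage deltas for one exchange, in microseconds.

    outbound/inbound are lists of HopTimestamp in path order; the gap
    between the last outbound hop and the first inbound hop is the time
    spent inside the array itself.
    """
    path = outbound + inbound
    breakdown = {f"{a.hop} -> {b.hop}": b.t_us - a.t_us
                 for a, b in zip(path, path[1:])}
    breakdown["total"] = path[-1].t_us - path[0].t_us
    return breakdown

# Example: a read that spends most of its time inside the array
out = [HopTimestamp("host_hba", 0), HopTimestamp("edge_switch", 12),
       HopTimestamp("array_port", 30)]
back = [HopTimestamp("array_port", 4030), HopTimestamp("edge_switch", 4048),
        HopTimestamp("host_hba", 4060)]
print(stage_latencies(out, back))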

Now, just how do I become a member of the T11 committee.....

7 comments:

BarryWhyte said...

Sounds like a good idea to me. If there was a consistent way to report such data it would make my life A LOT easier. At least with something like SVC we can tell you the upstream and downstream response times, and we have plans to seriously enhance these stats in a future release to essentially do what the SAN should be doing and help diagnose 'rogue' hosts, etc.!

If you need a seconder... :)

Chris M Evans said...

Cheers Barry...I'll let you know if I consider it... :-)

Stephen said...

I tend to think that no one really cares about time/performance anymore. If they did, then there is no way these larger drives would make it into arrays. In the last 18 months, the only complaints I have had about (what appeared to the end user to be storage) performance were Exchange 2003 (which is a pig of an app anyway) and SAS with 40 people trying to get data out of 10 TB at the same time. I figured out how to fix the Exchange issues, but I can't stop users doing exactly the same thing at the same time. Besides, being in a purely Windows-based environment lets me off the hook anyway. Our mainframes hammer the storage, but 100 Windows servers barely keep the disks from rusting from inactivity.

Also, the cost of getting very good performance data is so high that people are willing to accept a few milliseconds of lag.

Oh, and an SVC is bottom of our list of must-haves. It rates up there with making decisions (now) on purchasing Brocade switches and their future in the Data Centre. Am I the only person who thinks that Cisco will completely rule in a few years with the Nexus?

Chris M Evans said...

Stephen

Are you focusing purely on Windows requirements here? There are plenty of non-Windows applications which require high-performance SAN environments.

Unknown said...

High Performing SANs.

I'm managing a few different fabrics, with a lot of different vendors' disk arrays at the very bottom of it all.
Luckily, at most places I have SVCs in front of them, making everyday life a little easier.
(The fabrics are not vendor-mixed; each is pure Brocade or pure Cisco.)

However, at times complaints do come in about slow response. So if, from the storage side, I could tell exactly how long an I/O takes from when it leaves the HBA, to when it hits my first edge switch, to when it lands on my disk system, it would be lovely! Especially if I could tell the DBA or SysAdmin that it's his hosts that are causing the lag. :)

At the moment, the way we do it is to have the host write some data to the LUN in question, but it would be ideal to know the exact time the exchange spends in my SAN, instead of having the host as a variable in it as well.

Usually the guys complaining are the DBAs, so I feel the requirement for high-performing SANs daily, more than I feel the requirement for just "storage".

What would you guys think is acceptable latency? Here I'm attempting to hold it at 6-8 ms.

Unknown said...

High Performing SANs

Hello,

I'm managing some larger fabrics, pure Brocade or pure Cisco.
Under them all I have HDS, HP or IBM arrays; at most sites the disk arrays are under SVC control.

On occasion I feel the performance requirements, usually from DBAs who think their databases are getting slow writes/reads and, of course, that my disk arrays/SAN are the source of it. So to be able to give them the exact time their packet spends in my network, to my disk array and back, would be priceless. :)

It would be nice to tell them: well, the problem is on the host/application or in the DB tables, go debug it. Then I could go back to studying the darn support matrix from "random-vendor" that we all suffer under.

At the moment, the way we test it is, sadly, by making test writes to a LUN over a period and looking at how many ms the I/O takes. But this brings in the host as a variable.
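
Roughly, the test is along these lines (a rough sketch only, assuming a Linux host and a hypothetical /dev/sdX test LUN - the host stack is of course still in the numbers):

import os, time

DEVICE = "/dev/sdX"       # hypothetical test LUN - never point this at live data!
BLOCK = b"\0" * 4096      # one 4 KiB write per sample
SAMPLES = 100

fd = os.open(DEVICE, os.O_WRONLY | os.O_SYNC)   # O_SYNC: write returns only once stable
latencies_ms = []
try:
    for _ in range(SAMPLES):
        os.lseek(fd, 0, os.SEEK_SET)
        start = time.perf_counter()
        os.write(fd, BLOCK)
        latencies_ms.append((time.perf_counter() - start) * 1000)
finally:
    os.close(fd)

print("avg %.2f ms, max %.2f ms" % (sum(latencies_ms) / len(latencies_ms), max(latencies_ms)))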

Anyone have a better way? Without spending hundreds of thousands of pounds on vendor software that, in my opinion, is not quite ready yet.

This also leads me to another question: what do you guys consider an acceptable read/write time today? I take 5-8 ms as acceptable.
If it is more than 10 ms, I usually react to it. (It differs, of course, with the tier of storage the data is located on, but my question is primarily about databases.)

/Jeppe

Chris M Evans said...

Jeppe

Seems like 5-8 ms is reasonable; however, to be honest it depends on whether the application users are also receiving acceptable response times. DBAs will complain about response time, but very few of the issues I've ever seen are actually storage-related; they tend to be down to poor configuration.