Comments on my last post reminded me of a conversation I had recently regarding the setting of specific performance criteria for storage tiers. Some companies attempt to set specific performance guarantees; for example, tier 1 storage must deliver a response time of 10ms or less, tier 2 a response time of 20ms or less.
I think there are two problems with this approach. Firstly, within a shared infrastructure it is *impossible* to guarantee that all I/O operations will complete within a pre-determined performance range (like the 10ms example). It may be possible to guarantee that 99.9% of I/Os, measured over a 5-minute period, fall within the target, but there will always be the odd spike which makes 100% an unrealistic goal.
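To make the point concrete, here is a rough sketch (using made-up latency figures, not real array data) of why a 99.9th-percentile target over a measurement window is achievable while an "every I/O under 10ms" guarantee is not: a handful of outlier I/Os breaks the 100% test but barely moves the percentile.

```python
# Sketch only: simulated response times, not measurements from a real array.
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated response times (ms) for one 5-minute measurement window:
# mostly a few milliseconds, plus a few spikes from contention on shared kit.
samples = rng.gamma(shape=4.0, scale=0.5, size=50_000)
samples[rng.integers(0, samples.size, size=5)] += 80.0  # the odd spike

target_ms = 10.0
p999 = np.percentile(samples, 99.9)

print(f"99.9th percentile: {p999:.2f} ms (target {target_ms} ms)")
print(f"Every single I/O under {target_ms} ms? {bool((samples < target_ms).all())}")
```

The percentile check passes because the five spikes sit in the top 0.1% of samples, yet the 100% check fails on exactly those spikes.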
Secondly, I think most users have no idea what effect a response time of 10ms or 20ms has on their application when they are planning a new project, so they will take the best tier available - unless there is a big difference in cost.
Take the following analogy. Imagine you are looking for a new broadband Internet service. ISPs now offer anything from 512Kb/s up to 24Mb/s. Which service do you take? I think most people would take the fastest service possible - until they see the cost. Imagine the price structure were as follows:
- 24Mb/s - £100
- 8Mb/s - £20
- 2Mb/s - £10
- 512Kb/s - £2
In this scenario, I'd be happy with the 8Mb/s service, because in reality I have no idea whether I *need* 24Mb/s (I just *want* it).
The same logic applies to tiering. Most users think they need tier 1, and unless there is a price disincentive to using it, tiering won't work.