Tuesday, 30 September 2008

Beating the Credit Crunch

Yesterday was probably the worst day yet of the credit crunch and of the global problems being experienced by the banks. Recession is biting, and without a doubt many companies are looking at how they can reduce their storage costs.

Reducing costs doesn't mean buying new hardware. Ask yourself some searching questions. Is my configuration optimal? Can I release any over-allocated storage? Can I reclaim any orphan LUNs?

The picture on the right shows how storage attrition occurs as raw disk is carved up, converted into LUNs and presented out to hosts. There's plenty of room for resource recovery.
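As a rough sketch of that attrition, here's a worked example of 100TB of raw disk being whittled down layer by layer. Every percentage below is an illustrative assumption, not a figure from the chart; plug in your own array's numbers.

```python
# Hypothetical attrition walk-through: how much of 100 TB of raw disk
# actually reaches an application. All factors are illustrative assumptions.
raw_tb = 100.0

layers = [
    ("RAID-5 (7+1) parity overhead", 7 / 8),   # parity takes 1 disk in 8
    ("Hot spares / vault drives",    0.97),    # a few drives held in reserve
    ("LUN carving remainder",        0.95),    # odd slices too small to present
    ("Filesystem metadata/reserve",  0.95),    # inode tables, reserved blocks
    ("Actual application usage",     0.60),    # typical host over-allocation
]

capacity = raw_tb
for name, factor in layers:
    capacity *= factor
    print(f"{name:32s} -> {capacity:6.1f} TB")
```

On these assumptions, less than half the raw capacity you paid for ends up holding real application data.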

This diagram doesn't even cover orphan storage. In large environments I've typically found anywhere between 10% and 20% of resources reclaimable as orphans. This is free money you can get back tomorrow (or today, if you're quick). Most SAN arrays have:

  1. LUNs presented to storage ports but not masked to anything

  2. LUNs presented to hosts which have "gone away" - decommissioned, HBAs changed, etc.

  3. LUNs presented to HBAs/hosts for which no zoning is present

  4. Snapshot (BCV/ShadowImage) volumes never synchronised

This is only a small subset of what can be reclaimed. With a bit of time and the right tool, you could even delay that next storage purchase and keep the boss happy!!
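Hunting for categories 1-3 largely comes down to reconciling the array's masking records against what's actually logged in to the fabric. Here's a minimal sketch of that idea; the data structures and names are invented for illustration, and in practice you'd pull this data from your array CLI and switch name server.

```python
# Hypothetical reconciliation of array masking records against fabric logins
# to flag orphan LUN candidates. All identifiers below are made up.

# LUN id -> set of host WWPNs it is masked to (empty = masked to nothing)
masking = {
    "lun-001": {"wwpn-a", "wwpn-b"},   # healthy: live initiators
    "lun-002": set(),                  # presented to a port, masked to nothing
    "lun-003": {"wwpn-z"},             # wwpn-z no longer in the fabric
}

# WWPNs currently logged in to the fabric (from the switch name server)
fabric_wwpns = {"wwpn-a", "wwpn-b", "wwpn-c"}

unmasked = [lun for lun, hosts in masking.items() if not hosts]
no_live_host = [lun for lun, hosts in masking.items()
                if hosts and not (hosts & fabric_wwpns)]

print("Masked to nothing:", unmasked)       # candidates for category 1
print("No live initiator:", no_live_host)   # candidates for categories 2 and 3
```

Anything flagged goes on a "confirm, then reclaim" list - never delete on the strength of one report alone.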


marcfarley said...

Chris, this is an excellent chart showing the attrition of storage capacity as the resource is prepared for use by applications. It should be possible to analyze each of the "process seams" for different products and best practices and then arrive at realistic projections of storage yields. This would help people understand how they are doing in comparison with others.

If you don't mind I think I'll post about this on my site (www.storagerap.com) and expand on it some.

Chris M Evans said...


No problem.


Matt Povey said...

Hi Chris,

What do you mean by the complexity curve? If you're looking at things from the PoV of the storage team, I see the point. A capacity manager taking a more end-to-end view might have a different idea of the relative complexities though.

Chris M Evans said...


It's meant to show the effort for the storage team to reclaim or improve the utilisation in each area. For instance, getting data back from a database or filesystem will be more effort than deploying the next array with a more efficient RAID layout.

Matt Povey said...


Ah. See what you mean.

Interestingly though, tackling a DBA's habit of pre-allocating three years of storage up front probably has a much greater payback. Likewise, thinly/dynamically provisioning storage for file systems full of unstructured data is more complex (changes to process are required, and no-one likes that), but again, the payoff is potentially huge.

In other words, there is a benefits line which is almost the mirror image of the complexity curve.

Chris M Evans said...

Now, Matt, you've touched on an issue I've been trying to work into the diagram, exactly as you say: if you make a change in storage usage on the far right (i.e. the storage as the DB or application sees it), then the saving is magnified in terms of the raw storage saved.
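That magnification is easy to put a back-of-envelope number on. A sketch, with all factors as assumptions: each usable TB costs extra raw disk for RAID parity, and each production LUN may carry a local BCV/snapshot copy and a remote replica.

```python
# Back-of-envelope multiplier for a saving made at the application layer.
# All factors below are illustrative assumptions about one environment.
db_saving_tb = 1.0

raid_overhead = 8 / 7   # RAID-5 (7+1): raw TB consumed per usable TB
local_copies  = 2       # production LUN plus one BCV/snapshot copy
remote_copies = 1       # one replica at the DR site

raw_saving = db_saving_tb * raid_overhead * (local_copies + remote_copies)
print(f"{db_saving_tb} TB freed at the database releases "
      f"{raw_saving:.2f} TB of raw disk")
```

On those assumptions, one TB given back by the DBA releases nearly three and a half TB of raw capacity - which is why the right-hand end of the curve, complex as it is, can be worth the effort.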