Storage Standards - Arrays
After a recent posting by Stephen I thought it would be good to discuss standards and the sort of standards I'd look to implement when designing an infrastructure from scratch. Obviously it isn't always possible to start from a clean slate, and consequently many sites have what I describe as "partial implementations of multiple standards", where different people have put their stamp on an environment but haven't necessarily been able, or wanted, to go back and change the previous configuration. Still, let's assume we have a little flexibility and a blank canvas, so to speak.
Objectives
I think it's worth having an idea of what you are trying to achieve by laying down standards in the first place. For me it boils down to a few issues:
- Risk: A messy configuration poses more risk of data loss. If Storage Admins aren't sure whether disks are in use, or by which servers, then devices can easily be re-used inadvertently, or be omitted from replication and cause issues in a DR scenario. For me, reducing risk is the main reason for adhering to a set of rigorous standards (sounds like an airline, where they always tell you as you board that their main priority is your safety; I always thought it was to make money from me).
- Efficiency: Storage is easy to overallocate and allocations today are very inefficient; we keep being told this by the surveys that come out on a weekly basis. In almost no time at all we'll have more wasted data than atoms in the known universe, or something like that. The figures quoted are anywhere between 30-70% wastage. Either way, that's a lot of money being spent on spinny stuff which doesn't need to be. With sensible standards, at least the storage can be accounted for and attributed to an owner (who can then be charged lots for the privilege), even if that owner isn't then efficient themselves. Efficiency at the host level is a whole separate argument, to be discussed later.
- Manageability: When the storage environment is easy to understand, it is easier to administer. I once worked at a site which had four EMC 8830 arrays (two local, two in a remote site) all connected to each other. The local arrays had SRDF'd gatekeepers! (Warning: that was an EMC-based joke; apologies to all the hardened HDS fans out there who don't get that one.) Needless to say, locating storage which could be reused and determining which hosts did or did not have replication was a time-consuming job. Each allocation took three to four times longer than necessary and half my time was spent attempting to clean the environment up.
So now we know what the reasons are, perhaps we can look at some of the standards to apply. Unfortunately most standards will tend to be fairly generic, as local restrictions will constrain exactly how arrays are laid out.
- Use RAID protection. This may seem a little obvious, however what I mean by this statement is that you should be reviewing your use of RAID and the RAID level used for each tier. Lower tiered storage may be more suited to a higher RAID level, but high performance data may need RAID 1/10. Either way, you need it and you should have a standard per tier.
- Distribute data across as many physical disks as possible. This may also seem like common sense, but achieving it isn't always that simple. As disks have become larger, more LUNs can be carved from each RAID group, which can have a negative impact on performance if only a small number of array groups are in use. There are also issues when upgrading arrays; a single 4-drive array group can provide nearly a TB of storage using 300GB drives (much more as the newer 500GB and 750GB drives become the norm), so the physical restrictions of the hardware become more apparent. It can be a real challenge to balance cost against performance if your organisation insists on only adding small increments of storage as they are needed.
- Keep LUN sizes consistent. I don't like having a large number of differing LUN sizes. In fact I prefer to have just one if I can get away with it, however it isn't always that simple. LUN sizes should be defined in blocks (as most arrays use 512-byte blocks) and be the same even across storage from different vendors; this makes any type of migration easier to achieve. Choosing an appropriate size (or sizes) is tricky and borrows heavily from historical numbers, but you should consider the size of your array groups (or physical disks) when planning LUN sizes. The more LUNs in a RAID group, the higher the risk of contention at the physical level. Consider keeping LUN sizes as multiples of each other; as systems grow, LUNs can then be consolidated into larger sizes (there's a small sketch of the block arithmetic after this list).
- Use a dispersed method for numbering LUNs. Both EMC and HDS (can't speak for IBM) will recommend an initial configuration in which consecutively numbered LUNs are taken from different array groups one at a time, the cycle repeating until all the storage is allocated. This means whichever range of sequential LUN numbers you choose, they will automatically be balanced across array groups. I have worked at sites that have used both methods and I can say that sequential numbering adds a significant amount of extra work to the allocation process (see the round-robin sketch after this list).
- Don't dedicate array groups to specific LUN sizes. It may seem attractive to use one array group to create smaller LUNs for, say, log devices. This is *not* a good idea, as you will end up creating an I/O bottleneck on those volumes. If you must have differing LUN sizes, create an even number from each array group.
- Distribute physical disks across all back-end directors. This may seem obvious, but it is possible to balance disks unevenly across back-end directors, especially if storage is purchased in increments and different drive sizes are used. Keep things consistent and distribute the disks of each size evenly across the available directors.
- Distribute hosts across all front-end directors. There are two possible ways to distribute hosts: by capacity and by performance. You should decide which is more important to you and load balance accordingly. Continue to monitor both performance and allocations to ensure you are getting the best out of your FEPs (there's a simple load-balancing sketch after this list).
- Dedicate replication links from each front-end director card. I like to ensure I have plenty of ports for replication. I've seen one issue recently (vendor name withheld) where a lack of processor power in terms of FEPs for replication caused a performance problem, which was resolved by adding more replication links (but not necessarily more cross-site bandwidth), so I reserve at least one port/port pair on each FED card for this purpose.
- Dedicate specific FE ports for replicated and non-replicated storage. I prefer, if possible, to dedicate FEPs to production/replicated and non-replicated hosts in order to ensure that the same performance is available on both the primary and DR sites. If a DR host is also used as a UAT or test box, then place those disks on a separate FEP; that's what the SAN is for!
- Dedicate array groups for replicated and non-replicated storage. This seems to contradict some of the other statements; however, from a logistical point of view, and if enough storage is available, it can be attractive to reserve certain array groups for replicated and non-replicated storage, ensuring that the same arrays and LDEV/LUN numbers are used on the replicated boxes.
- Allocate host storage from as many array groups as possible. When performing an allocation, try to spread the load across as many RAID groups as possible.
- Make use of your LDEV/LUN ranges. On HDS systems, as an example, I like to reserve CU ranges for tiers: 00-0F for tier 1, 10-1F for tier 2 and 30-3F for command devices, external devices etc. (the CU-range sketch after this list shows the idea).
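To illustrate the LUN sizing point above, here's a minimal Python sketch. The sizes are made-up examples rather than a recommendation; the idea is simply to express each candidate size as a count of 512-byte blocks and check that the larger sizes are exact multiples of the smallest, so LUNs can later be consolidated cleanly:

```python
# Sketch: express candidate LUN sizes as 512-byte block counts and check that
# each size is an exact multiple of the smallest, so smaller LUNs can later be
# consolidated into larger ones. The sizes below are illustrative only.
BLOCK_SIZE = 512                      # bytes per block on most arrays
CANDIDATE_SIZES_GB = [16, 32, 64]     # example sizes, not a recommendation

def gb_to_blocks(size_gb: int) -> int:
    """Convert a size in GB (10^9 bytes here) to a whole number of 512-byte blocks."""
    return size_gb * 1_000_000_000 // BLOCK_SIZE

blocks = [gb_to_blocks(s) for s in CANDIDATE_SIZES_GB]
base = min(blocks)
for size_gb, blk in zip(CANDIDATE_SIZES_GB, blocks):
    status = "OK" if blk % base == 0 else "NOT a clean multiple"
    print(f"{size_gb} GB = {blk} blocks ({blk / base:.2f} x smallest) - {status}")
```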
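For the dispersed numbering point, this is a rough sketch of the round-robin idea (the array group names and counts are invented for the example): consecutive LDEV numbers are drawn from each array group in turn, so any contiguous range of numbers is spread across groups by construction.

```python
# Sketch of dispersed (round-robin) LUN numbering: consecutive LDEV numbers are
# drawn from array groups in rotation, so any contiguous block of numbers is
# automatically balanced across the physical groups. Group names are invented.
from itertools import cycle

array_groups = ["AG1-1", "AG1-2", "AG2-1", "AG2-2"]   # hypothetical array groups
luns_per_group = 4                                    # LDEVs to carve per group

assignment = {}                       # LDEV number -> array group
group_cycle = cycle(array_groups)
for ldev in range(len(array_groups) * luns_per_group):
    assignment[ldev] = next(group_cycle)

for ldev, group in assignment.items():
    print(f"LDEV {ldev:02x} -> {group}")

# Any consecutive slice, e.g. LDEVs 00-03, now touches every array group once,
# so an allocation of sequential numbers is balanced without extra effort.
```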
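On front-end director balancing, here's a simple illustrative sketch (the port names and figures are hypothetical) that picks the least-loaded port pair either by allocated capacity or by observed throughput, depending on which metric matters more to you:

```python
# Sketch: pick the least-loaded front-end port pair for a new host, either by
# allocated capacity or by observed IOPS. Port names and figures are hypothetical.
fep_pairs = {
    "1A/2A": {"allocated_tb": 12.5, "avg_iops": 18000},
    "1B/2B": {"allocated_tb":  9.0, "avg_iops": 26000},
    "1C/2C": {"allocated_tb": 14.0, "avg_iops": 11000},
}

def least_loaded(pairs: dict, metric: str) -> str:
    """Return the port pair with the lowest value for the chosen metric."""
    return min(pairs, key=lambda p: pairs[p][metric])

print("Balance by capacity:   ", least_loaded(fep_pairs, "allocated_tb"))
print("Balance by performance:", least_loaded(fep_pairs, "avg_iops"))
```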
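Finally, for the CU range point, a small sketch that maps a CU number back to its tier. The ranges mirror the examples in the bullet above; the lookup helper is just for illustration.

```python
# Sketch: reserve CU ranges per tier (as in the example layout above) and look
# up which tier a given CU:LDEV address belongs to. Adjust to your own scheme.
CU_RANGES = {
    "tier 1":                    range(0x00, 0x10),   # CUs 00-0F
    "tier 2":                    range(0x10, 0x20),   # CUs 10-1F
    "command/external devices":  range(0x30, 0x40),   # CUs 30-3F
}

def tier_for(cu_ldev: str) -> str:
    """Return the tier for an address like '05:2A', based on its CU number."""
    cu = int(cu_ldev.split(":")[0], 16)
    for tier, cus in CU_RANGES.items():
        if cu in cus:
            return tier
    return "unassigned"

print(tier_for("05:2A"))   # -> tier 1
print(tier_for("31:00"))   # -> command/external devices
```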
This is by no means an exhaustive list and I'm sure I will remember more. If anyone has any suggestions on what they do, I'd be interested to hear!
2 comments:
Chris,
Some really good points and nice to see them in a single place.
Just a couple from the top of my head. As you suggested, a symmetrical configuration across prod and DR boxes is a huge one for me. I always have identical LDEV and LU numbers in replicated pairs, as well as identical front end ports and host group numbers.
I also try to leave at least 2 LDEVs in each array group free. This helps keep down AG utilisation but also allows you to migrate data from other AGs if required.
Nigel
Oh, I forgot to mention as well, slightly unrelated, but I would like to see the ability to switch an LDEV's CU:LDEV number while the LDEV is mapped and in use by a host. I often see it where a customer is using CU numbers to distinguish tiers and they occasionally have to borrow an LDEV from another tier. So they then end up with an LDEV in the wrong CU. Making sense?
;-)