Wednesday, 19 July 2006
HDS Virtualisation
I may have mentioned before that I'm working on deploying HDS virtualisation. I'm deploying a USP100 with no disk (well, four drives, the bare minimum) virtualising 3x AMS1000 with 65TB of storage each. So now the tricky part: how to configure the storage and present it through the USP.
The trouble is, with the LUN size the customer requires (16GB), the AMS units can't present all of their storage. The limit is 2048 devices per AMS (whilst retaining dual pathing), so that means either having only 32TB of usable storage per AMS or increasing the LUN size to 32GB. Now that presents a dilemma: one of the selling points of the HDS solution is the ability to remove the USP and talk directly to the AMS if I chose to remove virtualisation (unlikely in this instance, but as Sean Connery learned, you should Never Say Never). Since I can't present the final 16GB LUN size directly from the AMS, I'll have to present larger LUNs, carve them up using the USP and forgo the ability to remove the USP in the future. In this instance that may not be a big deal, but bear it in mind; for some customers it may be.
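To make the arithmetic explicit, here's a quick sketch using the figures above (2048 devices per AMS, 65TB per AMS, 16GB or 32GB LUNs); the numbers are the ones quoted in this post, nothing more:

```python
# Quick sketch of the device-limit arithmetic: how much of each AMS can
# actually be presented at a given LUN size? Figures are from the post.
AMS_CAPACITY_GB = 65 * 1024      # ~65TB per AMS
DEVICE_LIMIT = 2048              # max devices per AMS whilst retaining dual pathing

for lun_size_gb in (16, 32):
    presentable_gb = DEVICE_LIMIT * lun_size_gb
    usable_tb = min(presentable_gb, AMS_CAPACITY_GB) / 1024
    print(f"{lun_size_gb}GB LUNs: {presentable_gb // 1024}TB presentable, "
          f"so {usable_tb:.0f}TB of the 65TB is usable")
```

At 16GB the device limit caps each AMS at 32TB; at 32GB (near enough) the whole array can be presented, which is exactly the dilemma.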
So, presentation will be a 6+2 array group: six data drives of 300GB each, which actually gives 1607GB of usable storage. This is obviously salesman-sized disk allocation; my 300GB disk actually gives me 267.83GB... I'll then carve up this 1607GB of storage using the USP. At this point it is very important to consider dispersal groups.

A little lesson for the HDS uninitiated here: the USP (and the NSC and 99xx before it) divides up disks into array groups (also called RAID groups), which with 6+2 RAID is 8 drives. It is possible to create LUNs from the storage in an array group in a sequential manner, i.e. LUN 00:00 then 00:01, 00:02 and so on. This is a bad idea, as the storage for a single host will probably be allocated out sequentially by the Storage Administrator and then all the I/O for that host will be hitting a small number of physical spindles. More sensible is to disperse the LUNs across a number of array groups (say 6 or 12), where the first LUN comes from the first array group, the second from the second and so on until the series repeats at the 7th (or 13th, using our examples) LUN. This way, sequentially allocated LUNs will be dispersed across a number of array groups.
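To picture the dispersal idea, here's a minimal sketch of round-robin allocation across a set of array groups (the group count of six and the naming are purely illustrative):

```python
# Minimal sketch of LUN dispersal: sequential LUN numbers are allocated
# round-robin across array groups rather than filling one group at a time.
# The group count and naming below are illustrative only.

def disperse_luns(num_luns, num_array_groups=6):
    """Return (lun_id, array_group) pairs, dispersed round-robin."""
    allocation = []
    for lun in range(num_luns):
        group = lun % num_array_groups            # 1st LUN -> group 1, 2nd -> group 2, ...
        allocation.append((f"00:{lun:02X}", f"AG{group + 1}"))
    return allocation

for lun_id, group in disperse_luns(8):
    print(lun_id, "->", group)
# A host given consecutive LUNs now spreads its I/O across several
# array groups (and so many more spindles) instead of just one.
```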
Good, so lesson over. Using external storage presented in this way, it will be even more important to ensure LUNs are dispersed across what are effectively externally presented array groups; if not, performance will be terrible.
Having thought it over, what I'll probably do is divide the AMS RAID group into four and present four LUNs of about 400GB each. This will be equivalent to having a single disk on a disk loop behind the USP, as internal storage would be configured. This will be better than a single 1.6TB LUN. I hope to have some storage configured by the end of the week - and an idea of how it performs; watch this space!
Monday, 17 July 2006
EMC Direction
EMC posted their latest figures. So they're quoting double-digit revenue growth again (although I couldn't get the figures to show that). My question is: where is EMC going? The latest DMX and Clariion improvements are just that - performance improvements over the existing systems. I don't see anything new here. The software strategy seems to be to purchase lots of technology, but where's the integration piece, where's the consolidated product line? ECC still looks as poor as ever.
So what is the overall strategy? I think it's to be IBM. Shame IBM couldn't hold on to their dominant position in the market. I can see EMC going the same way as people overtake and improve on their core technologies.
Data Migration Strategies
Everyone loves the idea of a brand-new shiny SAN or NAS infrastructure. However, over time this new infrastructure needs to be maintained - not just at the driver level, but eventually arrays, fabrics and so on. So data migration will become a continuous BAU process we'll all have to adopt. Whilst I think further, I've distilled the requirements into some simple criteria (there's a rough sketch of how I'd record them after the lists):
Migration Scenarios
Within the same array
Between arrays from the same vendor
Between arrays from different vendors
Between similar protocol types
Between different protocol types
Migration Methods
Array based
Host Based
3rd Party Host based
Migration Issues
Performance – time to replicate, ability to keep replica refreshed
Protection – ensuring safety of the primary, secondary and any replicated copies
Platform Support – migration between storage technologies
Application Downtime – ensuring migration has minimum application downtime
Migration Reasons
Removal of old technology
Load balancing
Capacity balancing
Performance improvements (hotspot elimination)
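As a way of capturing these criteria, here's a rough sketch of how a single migration exercise might be recorded against them; the field names and example values are my own invention, not any product's schema:

```python
# Rough sketch: recording one planned migration against the criteria above,
# so every exercise gets assessed the same way. Names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class MigrationPlan:
    scenario: str                 # e.g. "between arrays from different vendors"
    method: str                   # "array based", "host based" or "3rd party host based"
    reason: str                   # e.g. "removal of old technology"
    issues: dict = field(default_factory=dict)   # performance, protection, platform support, downtime

plan = MigrationPlan(
    scenario="between arrays from different vendors",
    method="host based",
    reason="removal of old technology",
    issues={
        "performance": "keep the replica refreshed overnight",
        "application downtime": "one reboot per host to switch LUNs",
    },
)
print(plan)
```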
Time to do some more thinking about filling out the detail. Incidentally, when new systems are deployed, a lot of thought goes into how to deploy the new infrastructure and how it will work with existing technology - how much time goes into planning how that system will be removed in the future?
Tuesday, 11 July 2006
More on Virtual Tape
Last week I talked about virtual tape solutions. On Monday HDS released the news that they're reselling Diligent's VTL solution. From memory, when I last saw this about 12 months ago, it was a software-only solution for emulating tape; sure enough, that's still the same. Probably the more useful thing, though, is ProtecTIER, which scans all incoming data and proactively compresses the incoming stream by de-duplicating it. The algorithm relies on a cache map of previously scanned data held in memory on the product's server, with a claimed supported capacity of 1PB raw, or 25PB at the alleged 25:1 compression ratio.
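As I understand the de-duplication idea, it works roughly like this (a generic sketch, not Diligent's actual algorithm): chunk the incoming stream, fingerprint each chunk, and only write out chunks whose fingerprint hasn't been seen before.

```python
# Generic sketch of in-line de-duplication (not Diligent's actual algorithm):
# duplicate chunks become references; only new chunks hit the disk.
import hashlib
import io

CHUNK_SIZE = 64 * 1024        # fixed-size chunks, for simplicity
index = {}                    # fingerprint -> position in the store
store = []                    # stand-in for the disk back-end

def ingest(stream):
    logical = physical = 0
    while chunk := stream.read(CHUNK_SIZE):
        logical += len(chunk)
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in index:      # new data: write it out
            index[fingerprint] = len(store)
            store.append(chunk)
            physical += len(chunk)
        # already seen: keep a reference only, write nothing
    return logical, physical

# A toy backup stream where most blocks repeat night after night
data = (bytes([1]) * CHUNK_SIZE + bytes([2]) * CHUNK_SIZE) * 50
logical, physical = ingest(io.BytesIO(data))
print(f"{logical / physical:.0f}:1 reduction")    # 50:1 for this toy stream
```

The interesting engineering is keeping that fingerprint index fast and resilient at petabyte scale, which is presumably where the in-memory cache map comes in.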
Hu Yoshida referenced the release on his blog, so I asked the question on compression there: 25:1 is a claimed average from customer experience, and some customers have seen even greater savings.
So, my questions: what is performance like? What happens if the ProtecTIER server goes titsup (technical expression)? I seem to remember from my previous presentation on the product that the data was stored in a self-referential form, allowing the archive index to be easily recreated.
If the compression ratio is true and performance is not compromised, then this could truly be a superb product for bringing disk-based backup into the mainstream. Forget the VTL solution; just do straight disk-based backups using something like Netbackup disk storage groups and relish the benefits!
Monday, 10 July 2006
Performance End to End
Performance Management is a recurring theme in the storage world. As fibre channel SANs grow and become more complex, the very nature of a shared infrastructure becomes prone to performance bottlenecks. Worse still, without sensible design (e.g. not mixing development data in with production), production performance can be unnecessarily compromised.
The problem is, there aren't really the tools to manage and monitor performance to the degree I'd like. Here's why. Back in the "old days" of the mainframe, we could do end-to-end performance management - issue an I/O and you could break the transaction down into its constituent parts; you could see the connect time (time the data was being transferred), disconnect time (time spent waiting for the data to be ready so it could be returned to the host) and other things which aren't quite as relevant now, like seek time and rotational delay. This was all possible because the mainframe was a single infrastructure; there was a single clock against which the I/O transaction could be measured, and the I/O protocol itself catered for collecting response-time data.
SANs are somewhat different. Firstly, the protocol doesn't cater for collecting in-flight performance statistics, so all performance measurements are based on observations from tracing the environment as a whole. The vendors will tell you they can do performance measurement, and it's true, they can collect whatever the host, storage and SAN components offer - the trouble is, those figures are likely to be averages, and it is difficult or impossible to relate them to specific LUNs on specific hosts.
For you storage and fabric vendors out there, here's what I'd like: first, I want to trace an entire I/O from host to storage and back; I want to know at each point in the I/O exchange what the response time was. I also want to be able to total and average those figures. And don't forget replication - TrueCopy/SRDF/PPRC - I want to see that part of the I/O too.
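To make the ask concrete, here's the kind of per-I/O record I have in mind, with totals and averages rolled up per host and LUN; the hop names and numbers are purely illustrative, since no vendor exposes this today as far as I know:

```python
# Sketch of per-hop I/O tracing rolled up per host/LUN.
# Hop names and latency figures are illustrative only.
from collections import defaultdict
from statistics import mean

# one traced I/O: (host, lun, {hop: milliseconds})
traces = [
    ("host01", "00:3A", {"hba": 0.05, "fabric": 0.10, "array": 4.2, "replication": 2.1}),
    ("host01", "00:3A", {"hba": 0.04, "fabric": 0.12, "array": 5.0, "replication": 2.3}),
    ("host02", "00:7F", {"hba": 0.06, "fabric": 0.09, "array": 1.1, "replication": 0.0}),
]

per_lun = defaultdict(list)
for host, lun, hops in traces:
    per_lun[(host, lun)].append(hops)

for (host, lun), samples in per_lun.items():
    total = mean(sum(hops.values()) for hops in samples)
    print(f"{host} {lun}: average {total:.2f}ms over {len(samples)} I/Os")
    for hop in samples[0]:
        print(f"  {hop:12s} {mean(h[hop] for h in samples):.2f}ms")
```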
One thought: I have a feeling fabric virtualisation products might be able to produce some of this information. After all, if they are receiving an I/O request for a virtual device and returning it to the host, the environment is there to map the I/O response to the LUN. Perhaps that exists today?
Wednesday, 5 July 2006
Virtual tape libraries
I previously mentioned virtual tape libraries. Two examples of products I've been looking at are Netapp's Nearstore Virtual Tape Library and ADIC's Pathlight products. Both effectively simulate tape drives and allow a virtual tape to be exported to a physical tape. Here are some of the issues as I see them:
1. How many virtual devices can I write to? Products such as Netbackup benefit from having lots of drives to write to; a separate drive is needed (for instance) for each retention period (monthly/weekly/daily) and for different storage pools. This can make it difficult to use drives effectively, especially when multiplexing. So, the more drives, the better.
2. How much data can I stream? OK, it's great having lots of virtual drives, but how much data can I actually write? The Netapp product for instance can have up to 3000 virtual drives but can only sustain 1000MB/s throughput (equivalent to about 30 LTO2 drives).
3. How is compression handled? Data written to tape will usually be compressed by the drive, resulting in variable capacity on each tape; some data will compress well, some won't. A virtual tape system that writes to physical tape must ensure that the level of compression doesn't prevent a virtual tape being written to the physical tape (imagine getting such poor compression that only 80% of the data could be written to the tape - what use is that?). However, the flip side is ensuring that all the tape capacity can be utilised - it's easy to simply write half-full tapes to get around the compression issue (see the sketch after this list).
4. How secure is my data? So you now have multiple terabytes of data on a single virtual tape unit. How is that protected? What RAID protection is there? How is the index of data on the VTL protected? Can the index be backed up externally? One of the great benefits of tape is the portability. Can I replicate my VTL to another VTL? If so, how is this managed?
5. What is the TCO? Probably one of the most important questions. Why should I buy a VTL when I could simply buy more tape drives and create an effective media ejection policy? The VTL must be cost effective. I'll touch on TCO another time.
6. The Freedom Factor. How tied will I be to this technology? The solution may not be appropriate, or the vendor may go out of business. How quickly can I extricate myself from the product?
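On the compression point (item 3), here's a rough sketch of the trade-off when destaging a virtual tape to a physical cartridge; capacities and ratios are illustrative, not any vendor's figures:

```python
# Rough sketch of the virtual-to-physical export trade-off in item 3:
# size the virtual tape for good compression and it may not fit when
# compression is poor; size it conservatively and cartridges go out
# half full. Capacities and ratios are illustrative only.
PHYSICAL_NATIVE_GB = 200          # e.g. an LTO2 cartridge, native capacity

for virtual_gb, ratio in ((400, 2.0), (400, 1.5), (200, 1.5)):
    written_gb = virtual_gb / ratio                    # what actually lands on tape
    fits = written_gb <= PHYSICAL_NATIVE_GB
    utilisation = min(written_gb, PHYSICAL_NATIVE_GB) / PHYSICAL_NATIVE_GB
    print(f"{virtual_gb}GB virtual tape at {ratio}:1 -> "
          f"fits: {fits}, cartridge {utilisation:.0%} used")
```

A 400GB virtual tape works nicely at 2:1 but overflows at 1.5:1, while a 200GB virtual tape always fits but leaves a third of the cartridge empty at 1.5:1.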
Tuesday, 4 July 2006
The Incredible Shrinking Storage Arrays
For those who don't relate to the title, check out IMDB....
You know how it is, you go to buy some more storage. You need, say, 10TB. You get the vendor to quote - but how much do you actually end up with? First of all, disk drives never have the capacity they purport to have, even if you take into consideration the "binary" TB versus the "decimal" TB. Next there's the effect of RAID. That's a known quantity and expected, so I guess we can't complain about that one. But then there's the effect of carving up the physical storage into logical LUNs; this can easily result in 10% wastage. Plus there's more: EMC on the DMX-3 now uses the first 30 or so disks installed in the box to store a copy of cache memory in case of a power outage; OK, good feature, but it carves into the host-available space. Apologies to EMC there - you were just the first vendor I thought of to have a dig at.
Enterprise arrays are not alone in this attritious behaviour - Netapp filers, for instance, will "lose" 10% of their storage to inodes (the part which keeps track of your files) and another 20% reserved for snapshots - and that's after RAID has been accounted for.
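Here's a back-of-the-envelope sketch of the shrinkage; the overhead percentages are the rough figures quoted above, not vendor specifications:

```python
# Back-of-the-envelope sketch of how "10TB" shrinks. The overheads are the
# rough figures from this post, not vendor specifications.
purchased_tb = 10.0

decimal_to_binary = 1000**4 / 1024**4        # ~0.91: "decimal" TB vs "binary" TB
raid_efficiency = 6 / 8                      # e.g. 6+2 RAID
lun_carving = 0.90                           # ~10% wastage carving LUNs
filer_overheads = (1 - 0.10) * (1 - 0.20)    # 10% to inodes, 20% snapshot reserve

array_usable = purchased_tb * decimal_to_binary * raid_efficiency * lun_carving
filer_usable = purchased_tb * decimal_to_binary * raid_efficiency * filer_overheads
print(f"Enterprise array: ~{array_usable:.1f}TB usable from {purchased_tb:.0f}TB bought")
print(f"Netapp-style filer: ~{filer_usable:.1f}TB usable from {purchased_tb:.0f}TB bought")
```

Either way, "10TB" gets you somewhere nearer 5-6TB before you've stored a byte.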
What we need is clear unambiguous statements from our vendors - "you want to buy 10TB of storage? Then it will cost you 20....."
Monday, 3 July 2006
Thin Provisioning and it's Monday and EMC are buying (again)
In the early 1990s StorageTek released Iceberg, a virtualised storage subsystem. Not only was it virtualised, but it implemented compression and thin provisioning. For those who don't know, thin provisioning allows the overallocation of storage, based on the fact that most disks don't use their entire allocation. Iceberg was great because the MVS operating system was well suited to this kind of storage management. An MVS disk (or volume) could only process a single I/O at any one time (although that has since changed with PAVs), so Iceberg allowed lots of partially filled virtual volumes, increasing the amount of parallel I/O. On Open Systems this would be a problem, as there would be a lot of unused space; Iceberg got around that by only reserving storage for the data that was actually in use.
Thin provisioning is back again; companies like 3Par are bringing products to market that once again allow storage space to be overallocated. Is this a good thing? Well, at first glance the answer has to be yes; why waste money? After all, 10% of a $10m storage investment is still a cool million dollars. However, there are some rather significant drawbacks to over-provisioning storage. Think what happens when a disk reaches 100% utilisation and can't store any more data. Any further attempts to write to that device will fail. That could take down an application or perhaps a server. Hopefully growth on that one disk has been monitored and the rate of growth won't be a significant shock; either way, only users of that one device will be affected.
Now consider thin provisioning. When the underlying storage reaches 100% utilisation, all the disks carved out of that storage will receive write failures; worse still, these write failures will occur on disks which don't appear to be full! Depending on how the storage was allocated out, the impact of 100% utilisation could be widespread; if the underlying storage is shared across multiple tiers and different lines of business, then one LOB could affect another, and development allocations could affect production. It is likely that storage wouldn't be shared out in quite this way, but the impact is obvious.
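Here's a toy sketch of that failure mode; sizes and names are made up, and real arrays allocate in fixed-size pages with alerting thresholds, but the principle is the same:

```python
# Toy sketch of the thin-provisioning failure mode: one shared pool backs
# several thin volumes, and when the pool is exhausted, writes fail on
# volumes that still look half empty. Sizes and names are made up.

class ThinPool:
    def __init__(self, physical_gb):
        self.free_gb = physical_gb

    def allocate(self, gb):
        if gb > self.free_gb:
            raise IOError("pool exhausted: write failed")
        self.free_gb -= gb

class ThinVolume:
    def __init__(self, name, pool, presented_gb):
        self.name, self.pool = name, pool
        self.presented_gb, self.used_gb = presented_gb, 0.0

    def write(self, gb):
        self.pool.allocate(gb)        # real capacity comes from the shared pool
        self.used_gb += gb

pool = ThinPool(physical_gb=100)      # 100GB of real disk...
vols = [ThinVolume(f"vol{i}", pool, presented_gb=100) for i in range(4)]   # ...400GB presented

try:
    for vol in vols:
        vol.write(30)                 # each volume writes only 30GB
except IOError as err:
    print(err)                        # the fourth write fails
for vol in vols:
    print(vol.name, f"{vol.used_gb:.0f}GB used of {vol.presented_gb}GB presented")
```

The volume that fails looks barely used from the host's point of view, which is exactly what makes the failure so unpleasant to diagnose.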
Thin provisioning can deliver storage savings, but it needs more careful management; I learned that 10 years ago with Iceberg.
EMC have been purchasing again; this time it's RSA Security. I can see the benefit of this; storage and security are becoming tightly linked as storage becomes the mainstay of the Enterprise. All I can say is I hope that the purchase improves the security features of ECC, Solutions Enabler and a number of other lightly protected EMC products.