Background
The Universal Volume Manager (UVM) feature on the USP enables LUN virtualisation. To access external storage, storage ports on the USP are configured as "External" and connected either directly or through a fabric to the external storage. See the first diagram for an example of how this works.
As far as the external storage is concerned, the USP is a Windows host, and the settings on the array should be configured to match. Within Storage Navigator, each externally presented LUN appears as a RAID group, which can then be presented as a single LUN or, if required, carved up into multiple individual LUNs.
The ability to subdivide external storage isn't often mentioned by HDS; it's usually assumed that external storage will be passed through the USP on a 1:1 basis, and if the external storage is to be detached in the future then 1:1 mapping is essential. However, if a configuration is being built from scratch, external storage could be presented as larger LUNs and subdivided within the USP. This is highlighted in the second diagram and sketched below.
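To make the two mapping styles concrete, here is a minimal sketch of the arithmetic involved, with purely hypothetical capacities; real mappings are of course configured through Storage Navigator, not code.

```python
# Hypothetical illustration of 1:1 pass-through versus subdividing an
# externalised LUN. Capacities are in MB and purely illustrative.

def passthrough(external_lun_mb: int) -> list[int]:
    """1:1 mapping: one internal LDEV, same size as the external LUN."""
    return [external_lun_mb]

def subdivide(external_lun_mb: int, ldev_count: int) -> list[int]:
    """Carve one large external LUN into equal internal LDEVs."""
    size = external_lun_mb // ldev_count
    ldevs = [size] * ldev_count
    ldevs[-1] += external_lun_mb - size * ldev_count  # absorb any remainder
    return ldevs

print(passthrough(500_000))     # [500000] - detachable later
print(subdivide(2_000_000, 4))  # [500000, 500000, 500000, 500000]
```

Note the trade-off: the second style makes better use of a large external LUN, but gives up the option of detaching the external array cleanly later.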
At this point, external storage is being passed through the USP but the data still resides on the external array. The next step is to move the data onto LUNs within the USP itself. Here's the tricky part: the target LUNs in the USP need to be exactly the same size as the source LUNs on the external array. What's more, they need to match the size as the USP views them - which is *not* necessarily the same as the size on the external storage itself. This LUN size issue occurs because the USP represents storage in units of tracks. From experience, the best way to solve the problem is to present the LUN to the USP and see what size it appears as. When I first used UVM, HDS were unable to provide a definitive method to calculate the size a LUN would appear as within Storage Navigator.
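For a feel of why the reported size can differ, here is a minimal sketch of track-boundary rounding. The 48 KB track size is an assumption purely for illustration; as noted above, HDS could not provide a definitive calculation, so presenting the LUN and reading the size it appears as remains the dependable method.

```python
# Illustrative only: rounding an external LUN's capacity down to a
# track boundary. The 48 KB track size is an ASSUMPTION, not a
# confirmed USP internal value - always verify by presenting the LUN.

BLOCK = 512                        # bytes per SCSI block
TRACK_BLOCKS = 48 * 1024 // BLOCK  # assumed 48 KB track = 96 blocks

def usp_visible_blocks(external_blocks: int) -> int:
    """Capacity the USP might report: whole tracks only."""
    return (external_blocks // TRACK_BLOCKS) * TRACK_BLOCKS

ext = 209_715_200                  # exactly 100 GB in 512-byte blocks
print(usp_visible_blocks(ext))     # 209715168 - 32 blocks short of the source
```

This is why target LUNs sized from the external array's own figures can come out fractionally wrong.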
The benefits of virtualisation for migration can fall down at this point. If the source array is particularly badly laid out, the target array will inherit that awkward mix of LUN sizes. In addition, a lot of planning needs to be done to ensure the migration of the LUNs into the USP doesn't suffer from performance issues.
Data is migrated into the USP using Volume Migration, ShadowImage or TSM. This clones the source LUN within the USP to a LUN on an internal RAID group. At this point, depending on the migration tool, it may be necessary to stop the host to remap it to the new LUNs, which completes the migration process. See the additional diagrams, which conceptualise migration with TSM; a sketch of the ShadowImage variant follows below.
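As a rough illustration of the ShadowImage route, the sequence below drives Hitachi's CCI (RAID Manager) pair commands from Python. The group name "migrate" and the copy-pace and timeout values are hypothetical, and the exact flags should be checked against the CCI manual for your firmware; treat it as a sketch of the order of operations, not a ready-to-run script.

```python
import subprocess

# Sketch of a ShadowImage migration driven via Hitachi CCI (RAID
# Manager). "migrate" is a hypothetical HORCM group pairing each
# externalised source LDEV with its internal target LDEV. Verify all
# flags against the CCI documentation before use.

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("paircreate", "-g", "migrate", "-vl", "-c", "8")             # start the copy
run("pairevtwait", "-g", "migrate", "-s", "pair", "-t", "3600")  # wait for PAIR state
run("pairsplit", "-g", "migrate", "-S")                          # split to simplex
# The host can now be stopped and remapped to the internal LUNs.
```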
Now, this example is simple; imagine the complexities if the source array is replicated. Replication has to be broken, potentially requiring an outage for the host, and then re-established within the USP; the data must be fully replicated to the remote location before the host data can be confirmed as consistent for recovery. This process could take some time, as the rough estimate below shows.
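To put "some time" into perspective, here is a back-of-envelope calculation with assumed figures: 10 TB of data re-replicating over a link that sustains 100 MB/s.

```python
# Back-of-envelope resync estimate. Both figures are assumptions;
# substitute your capacity and the real sustained replication rate.

capacity_tb = 10   # data to re-replicate to the remote site
rate_mb_s = 100    # sustained link throughput

seconds = capacity_tb * 1024 * 1024 / rate_mb_s
print(f"{seconds / 3600:.1f} hours")  # ~29.1 hours before recovery is consistent
```

Over a day of exposure without a consistent remote copy is a serious consideration for most change windows.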
Summary
In summary, here are the points that must be considered when using USP virtualisation for migration:
- Configuring the external array to the USP requires licensing Universal Volume Manager, and UVM is not free!
- Storage ports on the USP have to be reserved for connecting to the external storage.
- LUN sizes from the source array have to be retained.
- LUN sizes as seen by the USP aren't guaranteed to exactly match those on the source array.
- Once "externalised", LUNs are replicated into the USP using ShadowImage/TSM/Volume Migration.
- A host outage may be required to re-zone and present the new LUNs to the host.
- If the source array is replicated, this adds further complication.
I'll be writing this blog up as a white paper on my consulting company's website at www.brookend.com. Once it's up, I'll post a link on the blog. If anyone needs help with this kind of migration, then please let me know!
3 comments:
Interesting piece. I think that an overview of some use cases might be valuable as well. Are you also going to write about migrating data from traditional volumes to thin provisioned volumes?
Cheers Tony,
Agreed, use cases would make sense. Haven't had a chance to work on this as I've been tied up on other stuff.
The whole thick->thin issue is a great one. I was in a forum before Christmas discussing this exact subject. Lots of interesting comments relating to the subject of TP but lots of nervousness about migrating into a TP environment rather than having a greenfield TP array.
Thx for the feedback, I'll be getting to it soon.
I liked this blog and wanted to make a few comments. As this thread has been saying, migrations are an increasing IT problem and it is good to see a description of how it can be done with a minimum of disruption and downtime. As a matter of disclosure, yes I do work for HDS.
Regarding subdividing external LUNs: our best practice is not to subdivide external storage, because that makes it simpler to isolate and monitor flows to the external virtualized storage. And as you point out, this also offers the customer the choice to de-virtualize the external storage if he or she wishes to, preserving future storage options.
It’s a small point, but the figures could show that you can support multiple externally attached arrays on a single port, just as you can with server-side connections.
I see that you have highlighted the problem of having to migrate like-to-like LUNs and thus potentially perpetuating bad configurations. However, there is an easy fix: Tiered Storage Manager allows you to move online data non-disruptively between unlike LUNs to adjust disk configurations. Likewise, with Hitachi Dynamic Provisioning you can reclaim unused storage and increase LUN sizes.
We agree that proper planning is required for any enterprise storage environment, especially if it is replicated. But the incremental work of virtualizing an array, although non-trivial, is not large compared to the basic work required in any case.
Taken as a whole, I think this write-up is useful for anyone contemplating retiring or repurposing old storage arrays. Without a virtualization engine such as the USP, migration is an exponentially more complex task, fraught with both human and IT risks.