Wednesday, 22 October 2008

Virtualisation: LeftHand VSA Appliance - Part One

I've been running the LeftHand Networks Virtual SAN Appliance (VSA) for a while now. As I previously mentioned, I see virtual storage appliances as a great new category, worth investigating for the flexibility of providing functionality (replication, snapshots etc.) without having to deploy dedicated appliance hardware.

This post is one of a number covering the deployment and configuration of VSA.


So, installation on VMware is remarkably easy. After downloading the installation material, a new virtual machine can be created from the VMware OVF file. I've included a couple of screenshots of the installation process. Choose a Datastore and you're done.
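If you'd rather not click through the import wizard, something like the following sketch, driving VMware's ovftool from Python, should achieve much the same result. To be clear, this isn't how I did it; the OVF path, ESX host address and datastore name are placeholders for whatever your download and environment actually use.

```python
# Sketch: importing the VSA OVF by calling VMware's ovftool from Python.
# The OVF filename, ESX host and datastore are placeholders.
import subprocess

OVF_PATH = "VSA.ovf"                       # path to the downloaded VSA OVF (placeholder)
TARGET = "vi://root@esx-host.example.com"  # ESX host to deploy to (placeholder)

subprocess.run(
    [
        "ovftool",
        "--acceptAllEulas",
        "--name=LeftHand-VSA",     # name for the new virtual machine
        "--datastore=datastore1",  # the Datastore chosen during import
        OVF_PATH,
        TARGET,
    ],
    check=True,  # raise an error if ovftool fails
)
```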




When the virtual appliance is started for the first time, you set the IP address and an administration password, and that's it. The remainder of the configuration is managed through the Centralised Management Console (CMC), a separate piece of software installed on a Windows or Linux management host. Presumably this could be a virtual machine itself, but in my configuration it's installed on my laptop.

From this point on, the configuration challenge begins! I like to test software by seeing how far I can get before having to resort to looking at the manual. Unfortunately I didn't get far, as there's a restriction: only a single SCSI LUN can be assigned to the virtual appliance as its data device, and it must be presented as SCSI (1:0). This has to be done while the VSA virtual machine is powered off (D'oh!), and I think that's because, although a hard disk can be added dynamically, a SCSI controller can't; once the disk has been added offline, further disks can be added to the existing SCSI controllers (although I don't think the VSA can use them) even while the virtual machine is up and running.
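For anyone who'd rather script that device change than click through the VI client, here's a rough sketch using the (much later) pyVmomi Python bindings. It's illustrative only: the host, credentials, VM name and disk size are placeholders, and the VM has to be powered off first because of the controller hot-add restriction mentioned above.

```python
# Rough sketch: adding a second SCSI controller (bus 1) and a data disk at
# SCSI (1:0) to the powered-off VSA virtual machine via pyVmomi. Host,
# credentials, VM name and disk size are all placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx-host.example.com", user="root", pwd="password",
                  disableSslCertValidation=True)
content = si.RetrieveContent()

# Find the VSA VM by name (placeholder); it must be powered off, since a new
# SCSI controller can't be hot-added.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "LeftHand-VSA")

# New LSI Logic controller on SCSI bus 1, with a temporary negative key so the
# disk below can reference it within the same reconfigure request.
ctrl_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualLsiLogicController(
        key=-101, busNumber=1,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing))

# New data disk at unit 0 on that controller, i.e. SCSI (1:0).
disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=vim.vm.device.VirtualDisk(
        key=-102, controllerKey=-101, unitNumber=0,
        capacityInKB=50 * 1024 * 1024,  # 50GB data disk (placeholder size)
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode="persistent", thinProvisioned=True)))

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(
    deviceChange=[ctrl_spec, disk_spec]))
Disconnect(si)
```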

Within the CMC GUI, RAID can now be enabled, which picks up the single device configured to the VSA appliance. RAID isn't real RAID but virtual, so there's no underlying redundancy available. I was, however, able to make my one data LUN a raw device mapping (RDM), so presumably in a real configuration the data LUN could be a hardware RAID-protected device within the VMware server itself.

The final configuration step is to create a Management Group, cluster and volume, which can easily be achieved using the Management Group Wizard. See the screenshot of the completion of the build.
I've now got a 30GB LUN (thin provisioned) which I can access via iSCSI - once I've performed two more configuration steps. First, I need to create a Volume List. This is just a grouping of LUNs against which I can apply some security details. So, on the Management Group I've already defined, I create a Volume List and add the LUN. I then create an Authentication Group and associate it with my Volume List. At this point, within the Authentication Group, I can specify which iSCSI initiators can access the LUNs and, if necessary, configure CHAP protection.
From my iSCSI client (my laptop) I add the VSA target and then I can configure the LUN as normal through Computer Management->Disk Management.
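For reference, the same client-side steps can be scripted against the Microsoft iSCSI initiator's iscsicli tool rather than the GUI. The sketch below is only illustrative; the portal address and target IQN are placeholders for whatever the CMC reports for your volume.

```python
# Sketch: connecting to the VSA's iSCSI target with the Microsoft iSCSI
# initiator's command-line tool instead of the GUI. Portal address and
# target IQN are placeholders.
import subprocess

VSA_PORTAL = "192.168.1.50"                                   # VSA IP address (placeholder)
TARGET_IQN = "iqn.2003-10.com.lefthandnetworks:mgmt:1:vol01"  # example IQN; check the CMC for the real one

# Register the VSA as a target portal, then log in to the volume's target.
subprocess.run(["iscsicli", "QAddTargetPortal", VSA_PORTAL], check=True)
subprocess.run(["iscsicli", "QLoginTarget", TARGET_IQN], check=True)

# List sessions to confirm the login; the disk then appears in Disk Management.
subprocess.run(["iscsicli", "SessionList"], check=True)
```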
Phew! This sounds complicated but in reality it isn't. The configuration tasks complete quickly and it's easy to see how the security and device framework is implemented.
In the next post, I'll dig down into what I've configured, talk about thin provisioning and performance, plus some of the other features the VSA offers.





2 comments:

Kelley Osburn said...

One of the most powerful uses for this product is to enable VMotion, HA, DRS, etc. across multiple ESX servers without the need for physical shared storage. To do this you would need to install the VSA onto two or more ESX servers. Then you can cluster these VSAs together and, using the Network RAID feature, synchronously replicate all blocks across the cluster. If an ESX machine were to fail, the HA feature can bring those VMs up on another ESX server and the data is there fully intact and in a mountable state (in fact the volume/LUN never goes down). We are selling this into a ton of remote/branch office environments where customers want server consolidation and the HA feature, but do not want, or cannot afford, a physical shared storage device to enable it. Finally, the data in these remote offices using the VSA can be asynchronously replicated back to a physical SAN in the main office or DR location using the built-in Remote Copy feature.

Chris M Evans said...

Kelley

What you've described was the main benefit I could see in creating virtual SANs. I've seen a number of customers with distributed offices where the IT/storage requirement in each branch is pretty consistent (file serving, some apps, AD, DNS etc) but the volume of data is low and they're currently wasting cycles writing tapes and sending them offsite (possibly losing them too). Are there plans to move above the 2TB threshold for VSAs?