
Tuesday, 28 October 2008

Virtualisation: LeftHand VSA Appliance - Part Two

In my previous post covering LeftHand's Virtual Storage Appliance, I discussed deploying a VSA guest under VMware. This post discusses performance of the VSA itself.


Deciding how to measure a virtual storage appliance's performance wasn't particularly difficult. VMware provides performance monitoring through the Virtual Infrastructure Client and gives some nice pretty graphs to work with. So from the appliance (called VSA1 in my configuration) I can see CPU, disk, memory and network throughput.

The tricky part comes in determining what to test. Initially I configured an RDM LUN from my Clariion array and ran the tests against that. Performance was poor, and when I checked out the Clariion I found it was running degraded on a single SP and therefore with no write cache. In addition, I'd used a test Windows 2003 VM on the same VMware server - D'oh! That clearly wasn't going to give fair results, as the iSCSI I/O would be going straight through the hypervisor and VSA1 and the test W2K3 box would potentially contend for the same hardware resources.

So, on to test plan 2: another VMware server running just a single W2K3 guest, talking to VSA1 on the original VMware hardware. So far so good - separate hardware for each component and a proper (gigabit) network in between. To run the tests I decided to use Iometer. It's free, easy to use and can perform a variety of tests with sequential and random I/O at different block sizes.

The first test was for 4K blocks, 50% sequential read/writes to an internal VMFS LUN on a SATA drive. The following two graphics show the VMware throughput; CPU wasn't max'd out and sat at an average of 80%. Data throughput averaged around 7MB/s for reads and only 704KB/s for writes.



I'm not sure why write performance is so poor compared to reads, but I suspect there's a bit of caching going on somewhere. The write side makes sense: the network traffic shows an equivalent amount of write traffic to the disk figures. The read traffic doesn't add up though - there's more read traffic on VSA1 than the Iometer figures would suggest, with Iometer indicating around 700KB/s for both reads and writes.
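As a sanity check on those numbers, converting the MB/s figures into I/Os per second is simple arithmetic. The snippet below is my own back-of-the-envelope calculation (not Iometer output), using the 4KB block size and the throughput figures quoted above.

```python
# Rough sanity check: convert observed throughput into IOPS for a given
# block size. The block size and throughput figures come from the test
# described above; everything else is just arithmetic.

def iops(throughput_bytes_per_sec, block_size_bytes):
    """Approximate I/Os per second for a given sustained throughput."""
    return throughput_bytes_per_sec / block_size_bytes

BLOCK = 4 * 1024                      # 4KB test blocks

read_bps = 7 * 1024 * 1024            # ~7MB/s reads (VI Client figure)
write_bps = 704 * 1024                # ~704KB/s writes (VI Client figure)

print(f"Reads:  ~{iops(read_bps, BLOCK):.0f} IOPS")   # ~1792 IOPS
print(f"Writes: ~{iops(write_bps, BLOCK):.0f} IOPS")  # ~176 IOPS
```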
I performed a few other tests, including one against a thin provisioned LUN. That showed a CPU increase for the same throughput - no surprise there. There's also a significant decrease in throughput when using 32KB blocks compared to 4KB and 512-byte blocks.
So, here's the $64,000 question - what kind of throughput can I expect per GHz of CPU and per GB of memory? Remember, there's no supplied hardware here from LeftHand, just the software. Perhaps with a 2TB limit per VSA the performance isn't that much of an issue, but it would be good to know if there's a formula to use. Throughput versus CPU versus memory is the only indicator I can see for comparing future virtual SANs against each other, and when you're paying for the hardware, it's a good thing to know!
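If such a formula did exist, I imagine it would just be a normalisation of observed throughput against the CPU and memory the appliance is given - something like the sketch below. The CPU and memory figures plugged in are purely illustrative, not measurements from my setup.

```python
# Hypothetical "throughput per resource" comparison for virtual SANs.
# The resource figures below are examples only - substitute whatever CPU
# and memory you actually allocate to the appliance.

def throughput_per_resource(throughput_mb_s, cpu_ghz, memory_gb):
    """Normalise throughput by the CPU and memory given to the appliance."""
    return {
        "MB/s per GHz": throughput_mb_s / cpu_ghz,
        "MB/s per GB RAM": throughput_mb_s / memory_gb,
    }

# Example: a VSA given one 2.8GHz vCPU and 1GB of RAM, pushing ~7MB/s of reads.
print(throughput_per_resource(throughput_mb_s=7.0, cpu_ghz=2.8, memory_gb=1.0))
# {'MB/s per GHz': 2.5, 'MB/s per GB RAM': 7.0}
```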

Wednesday, 22 October 2008

Virtualisation: LeftHand VSA Appliance - Part One

I've been running the LeftHand Networks Virtual SAN Appliance for a while now. As I previously mentioned, I can see virtual storage appliances as a great new category, worthy of investigation for the flexibility of being able to provide functionality (replication, snapshots etc) without having to deploy appliance hardware.

This post is one of a number covering the deployment and configuration of VSA.


So, installation on VMware is remarkably easy. After downloading the installation material, a new virtual machine can be created from the VMware OVF file. I've included a couple of screenshots of the installation process. Choose a Datastore and you're done.
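I did the import through the VI Client, but presumably the same thing could be scripted. Here's a rough sketch, assuming VMware's ovftool is installed and on the PATH; the OVF path, ESX host, credentials and datastore name are all placeholders.

```python
# Sketch only: deploying the VSA OVF from a script rather than the VI Client.
# Assumes ovftool is available; every name below is a placeholder.
import subprocess

OVF_PATH = "VSA.ovf"                                  # downloaded LeftHand OVF
TARGET = "vi://root@esx01.example.local/"             # destination ESX host
DATASTORE = "datastore1"                              # where the VM should live

subprocess.run(
    ["ovftool", f"--datastore={DATASTORE}", "--name=VSA1", OVF_PATH, TARGET],
    check=True,
)
```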




When the virtual appliance is started for the first time, you set the IP address and an administration password and that's it. The remainder of the configuration is managed through the Centralised Management Console, a separate piece of software installed on a Windows or Linux Management host. Presumably this could be a virtual machine itself, but in my configuration it is installed on my laptop.

From this point on, the configuration challenge begins! I like to test software by seeing how far I can get before having to resort to the manual. Unfortunately I didn't get far, as there's a restriction that a single SCSI LUN must be assigned to the virtual appliance, and it must be device SCSI(1:0). This has to be done while the VSA guest is powered off (D'oh!), and I think that's because although a hard disk can be added dynamically, a new SCSI controller can't. Once that first disk has been added offline, further disks can be added to the existing SCSI controllers (although I don't think they can be used) even while the guest is up and running.
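As a sanity check before powering the appliance on, the VM's .vmx file can be inspected for a disk at SCSI(1:0). This is my own sketch: the path is a placeholder and the key names reflect my understanding of the VMX format, so check them against your own file.

```python
# Sketch: confirm the VSA's .vmx has a virtual disk at SCSI(1:0) before
# powering the guest on. The path is a placeholder.

VMX_PATH = "/vmfs/volumes/datastore1/VSA1/VSA1.vmx"   # placeholder path

def parse_vmx(path):
    """Read a .vmx file into a dict of key/value pairs."""
    settings = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip().strip('"')
    return settings

vmx = parse_vmx(VMX_PATH)
if vmx.get("scsi1:0.present", "").upper() == "TRUE":
    print("SCSI(1:0) disk:", vmx.get("scsi1:0.fileName"))
else:
    print("No disk at SCSI(1:0) - add one while the guest is powered off.")
```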

Within the CMC GUI, RAID can now be enabled, which picks up the single device configured to the VSA appliance. The RAID isn't real RAID but virtual, so there's no underlying redundancy. I was, however, able to make my one data LUN a raw (RDM) device, so presumably in a real configuration the data LUN could be a hardware RAID device within the VMware server itself.

The next major configuration step is to create a Management Group, cluster and volume, which can easily be achieved using the Management Group Wizard. See the screenshot showing the completed build.
I've now got a 30GB LUN (thin provisioned) which I can access via iSCSI - once I've performed two more configuration steps. First, I need to create a Volume List. This is just a grouping of LUNs against which I can apply some security details. So, on the Management Group I've already defined, I create a Volume List and add the LUN. I then create an Authentication Group and associate it with my Volume List. At this point within the Authentication Group I can specify the iSCSI initiators which can access the LUNs and if necessary, configure CHAP protection.
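To make the relationship between these objects a bit more concrete, here's a small illustrative model - my own sketch of how I think of it, not LeftHand's actual API: a volume list groups LUNs, and an authentication group ties iSCSI initiators (plus optional CHAP credentials) to that list.

```python
# Illustrative model only - not LeftHand's API. A volume list groups LUNs;
# an authentication group names the iSCSI initiators (and optional CHAP
# secret) allowed to see that list.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VolumeList:
    name: str
    volumes: List[str] = field(default_factory=list)     # LUN/volume names

@dataclass
class AuthenticationGroup:
    name: str
    volume_list: VolumeList
    initiators: List[str] = field(default_factory=list)  # iSCSI IQNs
    chap_secret: Optional[str] = None                     # None = no CHAP

    def allows(self, initiator_iqn: str, volume: str) -> bool:
        """Can this initiator see this volume through the group?"""
        return initiator_iqn in self.initiators and volume in self.volume_list.volumes

# Example wiring for the 30GB thin provisioned volume described above.
vl = VolumeList("VL-Lab", volumes=["LUN-30GB-Thin"])
ag = AuthenticationGroup("AG-Laptop", vl,
                         initiators=["iqn.1991-05.com.microsoft:my-laptop"])
print(ag.allows("iqn.1991-05.com.microsoft:my-laptop", "LUN-30GB-Thin"))  # True
```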
From my iSCSI client (my laptop) I add the VSA target and then I can configure the LUN as normal through Computer Management->Disk Management.
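The same thing can be done from the command line rather than the GUI. Here's a rough sketch driving the Microsoft iSCSI initiator's iscsicli tool; the portal address and target IQN are placeholders, and the real target name comes from the CMC.

```python
# Sketch: attaching the VSA volume from a Windows host using iscsicli
# instead of the iSCSI Initiator GUI. Addresses and IQNs are placeholders.
import subprocess

VSA_PORTAL = "192.168.1.50"                              # VSA1's iSCSI address
TARGET_IQN = "iqn.2003-10.com.lefthandnetworks:example"  # placeholder target name

subprocess.run(["iscsicli", "QAddTargetPortal", VSA_PORTAL], check=True)
subprocess.run(["iscsicli", "ListTargets"], check=True)   # confirm the VSA target shows up
subprocess.run(["iscsicli", "QLoginTarget", TARGET_IQN], check=True)
# After logging in, the LUN appears in Disk Management as usual.
```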
Phew! This sounds complicated but in reality it isn't. The configuration tasks complete quickly and it's easy to see how the security and device framework is implemented.
In the next post, I'll dig down into what I've configured, talk about thin provisioning and performance, plus some of the other features the VSA offers.