Wednesday, 30 August 2006
More on solid state disks
I'm working on a DMX-3 installation at the moment. For those who aren't aware, EMC moved from the multi-cabinet Symmetrix 5 hardware (8830/8730 type stuff) to the DMX, which was fixed size - DMX1000, 2000 and 3000 models of one, two and three cabinets respectively. I never really understood this move; yes, it may have made life easier for EMC Engineering and CEs as the equipment shipped in its final footprint configuration (only cache, cards and disk could be added), but you had to think about and commit to the size of the array upfront, which might not be financially attractive. Perhaps the idea was to create a "storage network" where every bit of data could be moved around with the minimum of effort; unfortunately that never happened and is unlikely to in the short term.
Anyway, I digress; back to DMX-3. Doing the binfile (configuration) for the new arrays, I noticed we lose a small amount of capacity on each of the first set of disks installed. This is for vaulting. Track back to previous models: if power was lost, or was likely to be lost, the array would destage all uncommitted tracks to disk after blocking further I/O on the FAs. Unfortunately, in a maximum configuration of 10 cabinets of 240 disks each, it simply wouldn't be possible to provide battery backup for all the hard drives while the data destaged. A quick calculation shows a standard 300GB HDD consumes around 12.5 watts, so that's 3,000W per cabinet or 30,000W for a full configuration. Imagine the batteries needed for that, just on the rare off chance that power is lost. Vaulting simplifies the battery requirements by creating a save area on the first set of disks to which the pending tracks are written. When the array powers back up, the contents of cache, the vault and disk are compared to return the array to its pre-power-loss state.
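To put rough numbers on the battery problem, here's a back-of-the-envelope sketch in Python; the 12.5W per-drive figure is the one quoted above, and the destage window is purely an assumption of mine, not an EMC number.

```python
# Back-of-the-envelope power maths for a full DMX-3 configuration.
# The 12.5 W/drive figure is for a 300GB drive as quoted above;
# the destage window is an illustrative assumption, not an EMC figure.

DRIVE_WATTS = 12.5        # per 300GB drive
DRIVES_PER_CABINET = 240
CABINETS = 10
DESTAGE_MINUTES = 30      # hypothetical time needed to destage pending tracks

drive_watts_total = DRIVE_WATTS * DRIVES_PER_CABINET * CABINETS
battery_watt_hours = drive_watts_total * DESTAGE_MINUTES / 60

print(f"Drive power, full configuration: {drive_watts_total:,.0f} W")   # 30,000 W
print(f"Battery energy just to keep the drives spinning for {DESTAGE_MINUTES} "
      f"minutes: {battery_watt_hours:,.0f} Wh")
```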
This is a much better solution than simply shoving more batteries in.
Moving on from this, I thought more about exactly what 12.5W per drive means. A full configuration of 10 cabinets of 240 drives (an unlikely prospect, I know) needs 30,000W just for the drives; in fact, once the rest of the hardware is taken into account, the array's total power requirement is almost double that. That is a significant amount of energy and cooling.
Going back to the solid state disks I mentioned some time ago, I'd expect power usage to come down considerably, depending on the percentage of writes that can be absorbed by the NAND cache. NAND devices are already up to 16GB (so could be around 5% of a current 300GB drive) and growing. If power savings of 50% can be achieved, this could have a dramatic effect on datacentre design. Come on Samsung, give us some more details of your flashon drives and we can see what hybrid drives can do......
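Just to illustrate what a saving of that order would mean across a full configuration, a quick sketch (the 50% figure is the speculative "if" above, nothing more):

```python
# Hypothetical saving if hybrid (NAND-cached) drives cut per-drive power by 50%.
# The 50% figure is speculative - it's the "if" in the paragraph above.

DRIVE_WATTS = 12.5
DRIVES = 240 * 10
NAND_CACHE_GB = 16
DRIVE_CAPACITY_GB = 300
SAVING = 0.5

print(f"NAND cache as a share of the drive: {NAND_CACHE_GB / DRIVE_CAPACITY_GB:.1%}")
before = DRIVE_WATTS * DRIVES
after = before * (1 - SAVING)
print(f"Drive power before: {before:,.0f} W, after: {after:,.0f} W "
      f"(saving {before - after:,.0f} W before cooling overhead)")
```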
Monday, 28 August 2006
EMC and storage growth
I had the pleasure of a tour around the EMC manufacturing plant in Cork last week. Aside from the obvious attention to quality, the thing that struck me most was the sheer volume of equipment being shipped out the door. I know the plant ships to most of the world bar North America, but even so, seeing all those DMX and Clariion units waiting to go out was amazing.
It was reassuring to see the volume of storage being shipped all over the world - especially from my position as a Storage Consultant. However, it brings home more than ever the challenge of managing ever-increasing volumes of storage. I also spent some time looking at the latest versions of EMC products such as ECC, SAN Advisor and SMARTS.
ECC has moved on. The original version 5 releases were monolithic and ineffectual; the product now seems wholly more usable. I went back after the visit and looked at an ECC 5.2 SP4 installation - there were features I should have been using, and I will be using ECC more.
SAN Advisor looks potentially good - it matches your installation against the EMC Support Matrix (which used to be a PDF and is now a tool) and highlights any issues of non-conformance. In a large environment SAN Advisor would be extremely useful; however, not all environments are EMC-only, so multi-vendor support will be essential. Secondly, the interface seems a bit clunky - work needed there. Lastly, I'd want to add my own rules and make them as complex as I need; for instance, when migrating to new storage I'd want to validate my existing environment against it and highlight devices not conforming to my own internal support matrix.
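To show the sort of rule I mean - this is just a sketch of the idea and has nothing to do with how SAN Advisor itself works; the devices, firmware levels and matrix entries are all invented:

```python
# Sketch of a home-grown conformance check against an internal support matrix.
# Device names, firmware levels and the matrix entries are made up for
# illustration; SAN Advisor's real rule engine is not exposed like this.

internal_matrix = {
    # (vendor, model): minimum supported firmware
    ("Emulex", "LP9002"): "3.93a0",
    ("QLogic", "QLA2340"): "1.47",
}

environment = [
    {"host": "host01", "vendor": "Emulex", "model": "LP9002",  "firmware": "3.90a7"},
    {"host": "host02", "vendor": "QLogic", "model": "QLA2340", "firmware": "1.47"},
    {"host": "host03", "vendor": "JNI",    "model": "FCE-6460", "firmware": "5.3.1"},
]

for hba in environment:
    minimum = internal_matrix.get((hba["vendor"], hba["model"]))
    if minimum is None:
        print(f'{hba["host"]}: {hba["vendor"]} {hba["model"]} is not in the support matrix')
    elif hba["firmware"] < minimum:   # naive string compare - fine for a sketch
        print(f'{hba["host"]}: firmware {hba["firmware"]} is below the minimum {minimum}')
```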
SMARTS uses clever technology to perform root cause analysis of faults in an IP or storage network. The IP functionality looked good; however, at this stage I could see limited appeal on the SAN side.
All in all, food for thought and a great trip!
Monday, 14 August 2006
Convergence of IP and Fibre Channel
I'm working on Cisco fibre channel equipment at the moment. As part of a requirement to provide data mobility (data replication over distance) and tape backup over distance, I've deployed a Cisco fabric between a number of sites. Previously I've worked with McDATA equipment and Brocade (although admittedly not the latest Brocade kit) and there was always a clear distinction between IP technologies and FC.
Working with Cisco, the boundaries are blurred; a whole set of integrated features starts to erode the distinction between the two technologies. That should be no surprise given Cisco's history as the kings of IP networking; what is interesting is seeing how that expertise is being applied to fibre channel. OK, so some things are certainly non-standard - the Cisco implementation of VSANs, for example, is not a ratified protocol, and connect a Cisco switch to another vendor's kit and VSANs will not work. However, VSANs are genuinely useful for segmenting both resources and traffic.
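If it helps, here's a purely conceptual toy model of the segmentation idea - not Cisco syntax and not how a real fabric behaves, just the principle that ports in different VSANs can't talk to each other:

```python
# Toy model of the VSAN idea: the same physical switch ports are carved into
# isolated virtual fabrics, and zoning is scoped per VSAN. Purely conceptual -
# this is not Cisco configuration syntax or a real fabric model.

vsans = {
    10: {"name": "replication", "ports": {"fc1/1", "fc1/2"}},
    20: {"name": "tape",        "ports": {"fc1/3", "fc1/4"}},
}

def same_vsan(port_a, port_b):
    """Two ports can only be zoned together if they sit in the same VSAN."""
    for vsan_id, vsan in vsans.items():
        if port_a in vsan["ports"] and port_b in vsan["ports"]:
            return vsan_id
    return None

print(same_vsan("fc1/1", "fc1/2"))  # 10 - both in the replication VSAN
print(same_vsan("fc1/1", "fc1/3"))  # None - traffic is segregated
```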
As things move forward following the Brocade/McDATA takeover (is it BrocDATA or McCade?) the FC world is set to get more interesting. McDATA products were firmly rooted in the FC world (the implementation of IP was restricted to a separate box and not integrated into directors) - Brocade seem a bit more open to embracing integrated infrastructure. Keep watching, it's going to be fun....
Tuesday, 8 August 2006
Tuning Manager continued
OK, I've got Tuning Manager up and running now (this is version 5). I've configured the software to pick up data from five arrays - a mixture of USP, NSC and 9980. The data is being collected via two servers, each running the RAID agent.
After five days of collecting, here are my initial thoughts:
- The interface looks good; a drastic improvement on the previous version.
- The interface is quick; the graphs are good and the operation is intuitive.
- The presented data is logically arranged and easy to follow.
However, there are some negatives:
- Installation is still a major hassle; you've got to be 100% sure the previous version is totally uninstalled or it doesn't work.
- RAID agent configuration (or rather, creation) is cumbersome and has to be done at the command line; the installer doesn't add the tools directory to your path, so you have to trawl to the directory yourself (or install something like "cmdline here").
- The database limit is way too small; 16,000 objects simply isn't enough (an object is a LUN, path etc). I don't want to install lots of instances, especially when the RAID agent isn't supported under VMware.
Overall, so far this version is a huge improvement and makes the product totally usable. I've managed to set the graph ranges to show data over a number of days and I'm now spending more time digging into the detail; Tuning Manager aggregates data up to an hourly view. The next step is to use Performance Reporter to look at some realtime detail....
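For anyone wondering what "aggregating up to an hourly view" does to the raw samples, here's a trivial sketch; the figures are invented and this is not how Tuning Manager actually collects or stores its data:

```python
# Sketch of rolling fine-grained samples up into an hourly average, along the
# lines of what an hourly view implies. The sample data is invented; this is
# not Tuning Manager's actual collection or storage model.

from collections import defaultdict
from datetime import datetime

samples = [
    # (timestamp, IOPS for one LDEV) - invented figures
    (datetime(2006, 8, 8, 9, 5), 1200),
    (datetime(2006, 8, 8, 9, 35), 1800),
    (datetime(2006, 8, 8, 10, 10), 900),
    (datetime(2006, 8, 8, 10, 50), 1100),
]

hourly = defaultdict(list)
for ts, iops in samples:
    hourly[ts.replace(minute=0, second=0, microsecond=0)].append(iops)

for hour, values in sorted(hourly.items()):
    print(f"{hour:%Y-%m-%d %H}:00  avg {sum(values) / len(values):.0f} IOPS")
```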
Brocade and McDATA; a marriage made in heaven?
So as everyone is probably aware, Brocade are buying McDATA for a shedload of shares; around $713m for the entire company. That values each share at $4.61, so no surprise McDATA shares are up and Brocade's are down. It looks like McDATA is the poorer partner and Brocade have just bought themselves a set of new customers.
I liked McDATA; the earlier products up to the 6140 line were great. Unfortunately, for me the i10K sealed their demise. It created a brand new product line with no backward compatibility; it was slow to market, and OEM vendors took forever to certify it. Then there was the design - a physical blade swap-out to get to 4Gb/s, no paddle replacement, a separate box for routing, and different operating systems across the routing, legacy and i10K products. And moving forward, where's the smaller i10K - a 128 or 64 port version?
The purchases of Sanera and CNT didn't appear to go well either; the product roadmap lacked strategy.
So, time to talk to my new friends at Brocade, I bet they've got a smile on their face today...
Wednesday, 2 August 2006
Holiday is over
I've been off on holiday for a week (Spain, as it happens), and now I'm back. I'm pleased to say I didn't think about storage once. Actually, that's not true; the cars we'd hired didn't have enough storage space for the luggage - why does that always happen?
So, back to work and continuing with virtualisation. For those who haven't read the previous posts, I'm presenting AMS storage through a USP. I've finished presenting the storage; the USP can now see the AMS disks, which are all about 400GB each - sized to ensure I can present all the data I need.
Now, I'm presenting three AMS systems through one USP, and a big issue is how cache should be managed. Consider the write process: a write is received by the USP first and acknowledged to the host once it is in USP cache. The USP then destages the write down to the AMS - which also has its own cache - and from there to physical disk. Aside from the obvious question of "where is my data?" if there is ever a hardware or component failure, my current concern is managing cache correctly. Each AMS has only 16GB of cache - 48GB in total across the three systems - while the USP has 128GB, more than two and a half times the combined AMS figure. It's therefore possible for the USP to accept a significant amount of write data, especially if it is the target of TrueCopy, ShadowImage or HUR operations, and when that data is destaged the AMS systems run the risk of being overwhelmed.
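To make the concern concrete, here's a crude model of the write path; the inbound and destage rates are invented purely to show how quickly an imbalance could fill the USP's write-pending space:

```python
# Crude model of the virtualised write path: writes land in USP cache and are
# destaged to the AMS systems behind it. All rates are invented to illustrate
# the imbalance, not measured figures from this installation.

USP_CACHE_GB = 128
AMS_CACHE_GB = 16 * 3            # three AMS systems behind the USP

inbound_mb_s = 600               # hypothetical host + replication write rate
destage_mb_s = 350               # hypothetical combined AMS destage rate

backlog_mb = 0.0
for minute in range(1, 61):
    backlog_mb += (inbound_mb_s - destage_mb_s) * 60
    if backlog_mb / 1024 > USP_CACHE_GB * 0.7:   # assume ~70% usable for write pending
        print(f"USP write-pending limit reached after ~{minute} minutes")
        break
else:
    print("Backlog stayed within USP cache for the hour")

print(f"For comparison, the three AMS systems have only {AMS_CACHE_GB} GB of cache between them")
```

With those made-up numbers the write-pending area fills in a matter of minutes, which is exactly why I want real figures before putting production load through this.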
This is a significant concern, so I will be using Tuning Manager to keep on top of the performance stats. In the meantime, I will configure some test LUNs and see what performance is like. The USP also has a single internal array group (coincidentally also about 400GB) which I will use to perform a comparison test.
This is starting to get interesting...