Thursday, 28 September 2006

To Copy or not to Copy?

Sony have just announced the availability of their next generation of AIT: version 5. More on AIT here: http://www.storagewiki.com/ow.asp?Advanced%5FIntelligent%5FTape. Speeds and feeds: 400GB native capacity, 1TB compressed and a 24MB/s write speed. The write speed seems a little slow to me, but the thing that scares me more is the capacity figure. 1TB - think about it, 1TB! That's a shedload of data. Whilst that's great for density in the datacentre, it isn't good if you have a tape error. Imagine a 700GB backup failing after writing 95% of the data - or, more of an issue, finding multiple read errors in a TB of data on a single cartridge.
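
To put some rough numbers on that - purely illustrative, the error rate below is an assumption and not a figure from Sony's spec sheet - the exposure per cartridge scales directly with capacity:

```python
# Back-of-envelope only: the unrecoverable bit error rate is an assumed
# figure for illustration - check the spec sheet of whatever drive you
# actually run. The point is that exposure grows with cartridge size.

ASSUMED_UBER = 1e-17                      # assumed unrecoverable errors per bit read

def expected_errors(capacity_bytes):
    """Expected unrecoverable errors when reading back a full cartridge."""
    return capacity_bytes * 8 * ASSUMED_UBER

native = expected_errors(400e9)           # 400GB native cartridge
compressed = expected_errors(1e12)        # 1TB of user data (compressed)

print(f"400GB cartridge: {native:.1e} expected errors per full read")
print(f"1TB cartridge:   {compressed:.1e} expected errors per full read")
print(f"Per-cartridge exposure ratio: {compressed / native:.1f}x")
```

Whatever error rate you plug in, the amount of data you stand to lose from one bad cartridge is two and a half times what it was.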

No-one in their right mind would put 1TB of data onto a disk and hope the disk would never fail. So why do we do it with tape? Probably because tape was traditionally used simply as a way of recovering data to a point in time: if one backup wasn't usable, you went back to the previous one. However, the world is a different place today. Increased regulation means backups are being used to provide data archiving facilities in the absence of proper application-based archival. That makes every backup essential, as each one records the state of the data at a point in time. Data on tape is therefore so much more valuable than it used to be.

So, I would always create duplicate backups of those (probably production) applications which are most valuable and can justify the additional expense. That means talking to your customers and establishing the value of backup data.

Incidentally, you should also be looking at your backup software. It should allow backups to be restarted after a hardware failure, and it should let you easily recover the data in a backup from a partially readable tape. I mean *easily* - not "oh, it can be done, but it's a hassle and you have to do a lot of work". Alternatively, look at D2D2T (http://www.storagewiki.com/ow.asp?Disk%5FTo%5FDisk%5FTo%5FTape)...
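
If you're wondering what "easily" should look like, here's a toy sketch of the principle: chunk the backup, checksum each chunk, keep a catalogue, and a bad patch of tape only costs you the affected chunks (or sends you to the duplicate copy for just those chunks). This is an illustration of the idea, not how any particular backup product actually works internally.

```python
# Toy sketch of restartable backups and partial recovery. Not any vendor's
# implementation - just the principle that a media error should cost you a
# few chunks, not a whole 1TB cartridge.

import hashlib

CHUNK = 256 * 1024 * 1024  # 256MB per chunk - an arbitrary choice

def write_catalogue(source, write_chunk):
    """Stream 'source' (a file object) out via write_chunk(), returning a
    catalogue of (offset, sha1) entries for verification and restart."""
    catalogue, offset = [], 0
    while True:
        data = source.read(CHUNK)
        if not data:
            break
        write_chunk(offset, data)                         # hand off to the tape layer
        catalogue.append((offset, hashlib.sha1(data).hexdigest()))
        offset += len(data)
    return catalogue

def check_tape(catalogue, read_chunk):
    """Return the offsets that read back cleanly and those that didn't."""
    good, bad = [], []
    for offset, digest in catalogue:
        data = read_chunk(offset)                         # read back from the tape
        ok = data is not None and hashlib.sha1(data).hexdigest() == digest
        (good if ok else bad).append(offset)
    return good, bad
```

A failed job restarts from the last catalogued offset; a partially readable tape gives up everything in the "good" list without drama.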

Monday, 25 September 2006

It's not easy being green

OK, not a reference to the song by Kermit the Frog, but a comment relating to an article I read on The Register recently (http://www.theregister.co.uk/2006/09/22/cisco_goes_green/). Cisco are attempting to cut carbon emissions by cutting back on travel, etc. Whilst this is laudable, Cisco would get more green kudos if they simply made their equipment more efficient. A 10% saving in power and cooling across all the Cisco switches in the world would make the reduction from corporate travel look like a drop in the ocean.
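
Just to show the scale involved, here's a back-of-envelope comparison. Every figure in it is an assumption I've made up for illustration - none of them come from Cisco - but the shape of the answer doesn't change much whatever you plug in.

```python
# Back-of-envelope only: every number here is an assumption made up for
# illustration, not a figure from Cisco or anyone else. The point is that
# kit running 24x7 dwarfs the occasional flight.

installed_switches = 5_000_000        # assumed worldwide installed base
watts_per_switch   = 300              # assumed average draw per switch
cooling_overhead   = 1.5              # assume ~0.5W of cooling per 1W of kit
saving             = 0.10             # the hypothetical 10% efficiency gain
hours_per_year     = 24 * 365
grid_kg_per_kwh    = 0.5              # assumed grid carbon intensity

saved_kwh = (installed_switches * watts_per_switch * cooling_overhead
             * saving * hours_per_year) / 1000
saved_tonnes = saved_kwh * grid_kg_per_kwh / 1000

flights_avoided   = 20_000            # assumed long-haul flights cut per year
tonnes_per_flight = 1.0               # assumed CO2 per passenger per flight

print(f"10% kit saving:   ~{saved_tonnes/1e3:,.0f} thousand tonnes CO2/year")
print(f"Travel reduction: ~{flights_avoided * tonnes_per_flight/1e3:,.0f} thousand tonnes CO2/year")
```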

Sunday, 24 September 2006

Standards for Shelving

Just found this the other day: http://www.sbbwg.org/home/ - the Storage Bridge Bay Working Group, a group of vendors working to agree a common standard for disk shelves. Will we see a Clariion shelf attached to a Netapp filer head?

Common Agent Standards

The deployment of multiple tools into a large storage environment does present problems. For example, EMC's ECC product claims to support HDS hardware, and it does - however, it didn't support the NSC product correctly until version 5.2 SP4. Keeping the agents up to date for each management product, just to get it to support all your hardware, is a nightmare. And I haven't even discussed host issues. Simply having to deploy multiple agents to lots of hosts presents a series of problems: will they be compatible, how much resource will they all demand from the server, how often will they need upgrading, and what level of access will they require?

Now the answer would be to have a common set of agents which all products could use. I thought that's what CIM/SMI was supposed to provide us, but at least four years after reading articles from the industry saying the next version of their products would be CIM compatible, I still don't see it. For instance, looking at the ECC example I mentioned above, ECC has separate Symm, SDM, HDS and NAS agents to manage each of the different components. Why can't a single agent collect data for any and every subsystem?
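
For what it's worth, this is the sort of thing a common agent model should make trivial: ask any standards-compliant array the same question with the same code. The sketch below uses the open-source pywbem library against a notional SMI-S provider - the address, credentials and namespace are placeholders, and your mileage with any real array will vary.

```python
# Rough sketch of what a common agent ought to allow: one query, any vendor.
# Uses the open-source pywbem library; the provider address, credentials and
# namespace below are placeholders - real ones are vendor-specific.

import pywbem

conn = pywbem.WBEMConnection(
    "https://smis-provider.example.com:5989",   # placeholder SMI-S provider
    ("cimuser", "password"),                    # placeholder credentials
    default_namespace="root/cimv2",             # vendors publish their own namespaces
)

# The same standard class, regardless of whose tin the LUNs live on
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    print(vol["DeviceID"], vol["BlockSize"], vol["NumberOfBlocks"])
```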

Hopefully, someone will correct me and point out how wrong I am, however in the meantime, I'd like to say what I want:

  1. A single set of agents for every product, and likewise a single set of host agents. None of this proxy-agent nonsense where another agent has to sit in the middle and manage a system on its behalf.
  2. A consistent upgrade and support model. All agents should work with all software; if the wrong version of an agent is installed, it should simply report back whatever data it can.
  3. The ability to upgrade any agent to introduce new features without direct dependence on software upgrades.

Thursday, 21 September 2006

Pay Attention 007...

Sometimes things you should spot just pass you by when you're not paying attention. So it is for me with N-Port ID Virtualisation. Taking a step back: mainframes had this sort of virtualisation in the early 90s. EMIF (ESCON Multi-Image Facility) allowed a single physical connection to be virtualised and shared across multiple LPARs (domains). When I was working on Sun multi-domain machines a few years ago, I was disappointed to see that I/O boards couldn't share their devices between domains, so each domain needed dedicated physical HBAs. More recently I've been looking at giving large servers with lots of storage both a disk and a tape connection - ideally through the same physical connection - but other than using dual-port HBAs, that wasn't easily possible without compromising quality of service. Dual-port HBAs don't really solve the problem anyway, because I still have to pay for the extra ports on the SAN.

Now Emulex have announced their support for N-Port ID Virtualisation (NPIV). See the detail here: http://www.emulex.com/press/2006/0918-01.html. So what does it mean? For the uninitiated, when a fibre channel device logs into a SAN, it presents its World Wide Name (WWN) and is assigned an N_Port ID - its address on the fabric - and the WWN is then used to zone HBAs to target storage devices. NPIV allows a single physical HBA to log in more than once, registering additional virtual WWNs and receiving an N_Port ID for each. This means that if the server supports virtual domains (VMware, Xen or MS Virtual Server, for example), each domain can have a unique WWN and be zoned - and so protected - separately. It also potentially solves my disk/tape issue, letting me run multiple data types through the same physical interface. Tie this in with virtual SANs (Cisco VSANs, for instance) and I can apply quality of service to each traffic type at the VSAN level. Voila!
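
If it helps, here's a toy model of the idea in a few lines of Python. It has nothing to do with Emulex's actual implementation or any real FC stack - it's just the shape of the thing: one physical port, several fabric logins, each separately zoneable.

```python
# Toy model of NPIV: one physical N_Port, several logins, each with its own
# WWPN and N_Port ID. Illustration only - no relation to any real FC stack.

import itertools

_next_nport_id = itertools.count(0x010001)   # pretend fabric address allocator

class Fabric:
    def __init__(self):
        self.logins = {}     # N_Port ID -> WWPN
        self.zones = {}      # zone name -> set of WWPNs

    def flogi(self, wwpn):
        """Normal fabric login: one WWPN gets one N_Port ID."""
        nport_id = next(_next_nport_id)
        self.logins[nport_id] = wwpn
        return nport_id

    def fdisc(self, physical_wwpn, virtual_wwpn):
        """NPIV: an already-logged-in physical port registers an extra
        virtual WWPN and receives an additional N_Port ID for it."""
        assert physical_wwpn in self.logins.values(), "physical port must FLOGI first"
        return self.flogi(virtual_wwpn)

fabric = Fabric()
fabric.flogi("10:00:00:00:c9:aa:bb:01")                                        # the physical HBA
disk_id = fabric.fdisc("10:00:00:00:c9:aa:bb:01", "20:00:00:00:c9:aa:bb:02")   # disk traffic
tape_id = fabric.fdisc("10:00:00:00:c9:aa:bb:01", "20:00:00:00:c9:aa:bb:03")   # tape traffic

# Each virtual WWPN can now sit in its own zone (and its own VSAN, with its
# own QoS), even though they share one physical port.
fabric.zones["disk_zone"] = {"20:00:00:00:c9:aa:bb:02", "50:06:04:8a:d5:f0:00:01"}
fabric.zones["tape_zone"] = {"20:00:00:00:c9:aa:bb:03", "50:01:10:a0:00:12:34:56"}
```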

I can't wait to see the first implementation; I really hope it does what it says on the tin.

Tuesday, 19 September 2006

RSS Update

I've always gone on and on about RSS and XML and what good technologies they are: XML as a data-exchange format and RSS as a way of providing information feeds in a consistent format. My ideal world would have all vendor information published via RSS. By that I mean not just the nice press releases about how they've sold their 50 millionth hard drive this week to help orphanages, but useful stuff like product news, security advisories and patch information.

Finally vendors are starting to realise this is useful. Cisco so far seems the best, although IBM and HP look good too. McDATA and Brocade are nowhere, providing their information in POHF (Plain Old HTML Format). Hopefully all vendors of note will catch up; in the meantime I've started a list on my Wiki (http://www.storagewiki.com/ow.asp?Vendor%5FNews%5FFeeds) - feel free to let me know if I've missed anyone.
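
To show how little is needed on the consuming side, here's a minimal sketch using the open-source feedparser library. The feed URLs are placeholders (the real ones are on the Wiki page above) and the keyword filter is just my own choice.

```python
# Minimal sketch: one script, every vendor feed, filtered for the stuff
# that actually matters. Uses the open-source feedparser library; the feed
# URLs below are placeholders - see the Wiki page for real ones.

import feedparser

FEEDS = {
    "Cisco": "http://www.cisco.com/example/security.rss",    # placeholder URL
    "IBM":   "http://www.ibm.com/example/storage-news.rss",  # placeholder URL
}
KEYWORDS = ("patch", "advisory", "vulnerability", "firmware")

for vendor, url in FEEDS.items():
    feed = feedparser.parse(url)
    for entry in feed.entries:
        title = entry.get("title", "")
        if any(k in title.lower() for k in KEYWORDS):
            print(f"{vendor}: {title} -> {entry.get('link', '')}")
```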

Friday, 15 September 2006

A Mini/McDATA Adventure?

Apparently one in six cars sold by BMW is a Mini. It is amazing to see how they have taken a classic but fading brand and completely re-invented it as a cool and desirable product. McDATA have just released an upgraded version of their i10K product, branded the i10K Xtreme (http://www.mcdata.com/about/news/releases/2006/0913.html). This finally brings 4Gb/s speeds and VSAN/LPAR technology, allowing resources in a single chassis to be segmented for improved manageability and security. I understand that the Brocade acquisition of McDATA is proceeding apace. It will be interesting to see how McDATA's new owners will treat their own McDATA "mini" going forward.