Friday, 30 May 2008

HP Give It Large

Yesterday afternoon I had an opportunity to meet with HP as part of an informal session to make contact with storage bloggers. HP are obviously interested in the possible benefits that keeping the blogging community well informed could bring; however, my blog is not going to act as a mouthpiece for the HP marketing department, and I'd suggest that if you want to keep abreast of their technology releases, you use this XML link.

What's more interesting is where HP storage is headed. Take, for example, their new Extreme Storage solution: a scalable NAS product which reaches the heady heights of 820TB in a single unit. Fantastic, you may think, and I guess if you have a real need for this volume of data in a single unit, then it's the one for you.

However, apart from the obvious issues, like whether your raised floor can actually take the weight of a fully configured device (and how do you cool this kind of beast), what troubles me more is how much of the data on a system like this is actually of any use.

Although the ExDS9100 is aimed at delivering storage for high-performance solutions, I think there is a risk of arrays like this being deployed to defer the hard work of actually classifying data and setting sensible deletion policies, a task which, let's face it, has sat in the "too hard" box for most companies for as long as NAS storage has been around. It may well be that some customers see this product as a way to defer the inevitable need to actually start managing their data.

Anyway, fair play to HP for entering the market and making use of their PolyServe acquisition, and fair play to them for wanting to talk to the blogging community too. If I get any juicy nuggets of information (like whether HP have a position on cloud storage), you can be sure I'll share them here.


UrsaMajor said...

What's driving storage solutions like this is not employee-produced files - like MS Word docs that are two years old and should be deleted. It is the accumulation of two types of data, and classification and deletion won't help with either.

The first class is user-contributed data in Web sites - photo uploads, music, video, and email.
For example, Shutterfly's marketing campaign is "preserve your memories". Yahoo has "unlimited free email space". Which memories do you think that they should delete? Which emails, after they promised to keep them forever?

The second class is things like seismic data. As the price of oil keeps going up, oil companies are bringing *back* data that they had deleted before. A hundred terabytes of seismic data that was not that interesting when oil was $30 a barrel is awfully interesting when it is $130. There are lots of data accumulation examples like that in healthcare, genomics and bioscience, and other scientific fields. Not only does the data not diminish in value over time, but the new files are bigger and bigger as new tools generate images, readings, or soundings with more resolution.

So, while it might make sense for the average medium-sized business to go clean up their home directories, those files are not what's driving petabyte-sized data centers.

Carter George

Chris M Evans said...

Carter, obviously your background qualifies you entirely to make the comments you have; however, I think you hit the nail on the head with your comment: almost all of the data types you've quoted are data at rest for 90% of the time, so why does this justify a high-capacity, high-performance storage system where the disks are always spinning? Surely these classes of data require a new approach, which is partly there with systems like those from Copan.

So, excluding the niche customers, we get back to the financial, manufacturing and all the other "traditional" businesses, which have lots of unstructured, unmanaged data and will use this product as an unofficial archive.

Stephen said...

I found this posting very interesting and would love to follow it up if only for the general discussion it raises.

Peak oil and the issues related to it are indeed going to make for some serious processing and capacity requirements. Some of the storage vendors' largest customers outside of the usual government and financial users are oil companies. One in Brazil has so many large enterprise arrays, it's mind-boggling. That also leads to other more complex and almost sinister (but necessary) needs for major processing systems, which are directly related to the lack of oil and global warming. I am talking about massive transactional systems that will keep growing.

At least I have another interesting blog to follow. I knew I got into storage for some reason or another...

Chris M Evans said...


How many arrays do you think your Brazil example has? It would be interesting to do a theoretical calculation on how much power they are wasting...
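For what it's worth, the sort of theoretical calculation I have in mind is simple enough to sketch. Here's a back-of-envelope estimate in Python; every figure in it (array count, power draw per frame, facility overhead, electricity tariff) is an assumption for illustration only, not a real number for the site in question:

```python
# Back-of-envelope power and cost estimate for a fleet of enterprise arrays.
# All constants below are illustrative assumptions, not measured values.

ARRAY_COUNT = 20            # assumed number of USP-class arrays on the floor
WATTS_PER_ARRAY = 15_000    # assumed draw per fully configured frame, in watts
PUE = 2.0                   # assumed facility overhead factor (cooling, UPS, etc.)
HOURS_PER_YEAR = 24 * 365   # 8,760 hours
COST_PER_KWH = 0.10         # assumed tariff, dollars per kWh

def annual_energy_kwh(arrays: int, watts: float, pue: float) -> float:
    """Total facility energy per year in kWh, including overhead."""
    return arrays * watts * pue * HOURS_PER_YEAR / 1000

def annual_cost(kwh: float, tariff: float) -> float:
    """Yearly electricity bill for the given energy use."""
    return kwh * tariff

kwh = annual_energy_kwh(ARRAY_COUNT, WATTS_PER_ARRAY, PUE)
print(f"{kwh:,.0f} kWh/year, roughly ${annual_cost(kwh, COST_PER_KWH):,.0f}/year")
```

With those assumed figures the fleet draws about 5.3 million kWh a year, over half a million dollars in electricity, which is the point: even a modest frame count adds up fast once you include cooling overhead.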

Stephen said...


I don't know and I didn't ask, but apparently it made most of the financial institutions around Wall Street look positively low-end. I know that it was all USP-based, and there were a few more of them than I have or have had, so I would guess petabytes of Tier 1 storage. Perhaps with that sort of money involved, a USP probably equates to everyone's WMS/AMS, but no one cares.