Monday 9 February 2009
I've decided to move the blog over to Wordpress and there's a new direct URL too: http://www.thestoragearchitect.com. Please check me out in the new location. There's also a new feed: http://thestoragearchitect.com/feed/ - the Feedburner feed stays the same and redirects. Please update your bookmarks!
Thursday 5 February 2009
Personal Computing: The Whole Of Twitter In Your Hand
A quick check on Twitter this morning shows me they're up to message number 1,179,118,180 or just over the 1.1 billion mark. That's a pretty big number - or so it seems - but in the context of data storage devices, it's not that big. Let me explain...
Assume Twitter messages are all the full 140 characters long. That means, assuming all messages are retained, that the whole of Twitter is approximately 153GB in size. OK, so there will be data structures needed to store that data, plus space for all the user details; however, I doubt whether the whole of Twitter exceeds 400GB. That fits comfortably on my Seagate FreeAgent Go!
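As a back-of-the-envelope check (assuming one byte per character and ignoring any storage overhead), the arithmetic looks like this:

```python
# Rough size of "the whole of Twitter", assuming every message is
# retained and is the full 140 characters at one byte per character.
messages = 1_179_118_180            # this morning's message number
bytes_total = messages * 140

print(f"{bytes_total / 10**9:.0f} GB (decimal)")   # ~165 GB
print(f"{bytes_total / 2**30:.1f} GiB (binary)")   # ~153.7 GiB - the 153GB quoted above
```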
If every message ever sent on Twitter can be stored on a single portable hard drive, then what on earth are we storing on the millions of hard drives that get sold each year?
I suspect the answer is simply that we don't know. The focus in data storage is on providing the facility to store more and more data, rather than rationalising what we already have. A quick sweep of my hard drives (which I'm trying to do regularly) showed half a dozen copies of the WinZip installer, the Adobe Acrobat installer and various other regularly updated software products - not least the 2.2.1 update of the iPhone software at 246MB!
What we need is (a) common-sense standards for how we store our data (I'm working on those), and (b) better search and indexing functionality that can make decisions based on the content of files - like the automated deletion of defunct software installers.
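To illustrate the kind of content-aware housekeeping I mean, here's a minimal sketch in Python (the scan root and extension list are just assumptions for illustration) that finds duplicate copies of installers by hashing their contents:

```python
import hashlib
import os
from collections import defaultdict

# Extensions that typically indicate a software installer.
INSTALLER_EXTS = {".exe", ".msi", ".dmg", ".pkg"}

def file_hash(path, chunk_size=1 << 20):
    """Return the SHA-1 of a file, read in chunks to cope with large installers."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def find_duplicate_installers(root):
    """Group installer files under 'root' by content hash; return only duplicates."""
    groups = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in INSTALLER_EXTS:
                path = os.path.join(dirpath, name)
                groups[file_hash(path)].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # Hypothetical scan root - point it at wherever your downloads pile up.
    for digest, paths in find_duplicate_installers("C:/Users").items():
        print(digest[:12], *paths, sep="\n  ")
```

A real tool would go further and compare version numbers so superseded installers could be deleted automatically, but even this much would have caught my half-dozen WinZip copies.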
There's one other angle: when network speeds become so fast that storing a download locally is irrelevant. Then our data can all be cloud-based, and data cleansing becomes a value-add service and someone else's problem!
Wednesday 4 February 2009
Enterprise Computing: Seagate Announces new Constellation Hard Drives
Seagate announced this week the release of their new Constellation hard drives. Compared to the Savvio range (which are high-performance, small form-factor drives), these drives are aimed at lower-tier archiving solutions and will scale to 2TB.
Wednesday 28 January 2009
Storage Management: Aperi - It's all over
It looks like the open storage management project Aperi has finally been put to rest. See this link.
- It doesn't rely on generic standards for reporting, but gets the full detail on each platform.
- It uses element managers or management console/CLIs to retrieve data.
- It doesn't need additional servers or effort to deploy or manage.
- It normalises all data to provide a simple consistent framework for capacity reporting.
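To make that last point concrete, here's a sketch of what a normalised capacity record might look like once each vendor's output has been parsed. The field names are my own invention, not any standard:

```python
# A hypothetical normalised capacity record: whatever the element manager
# or CLI reports (EMC, HDS, IBM, ...), it is reduced to the same fields
# in the same units before any reporting runs against it.
from dataclasses import dataclass

@dataclass
class ArrayCapacity:
    vendor: str          # e.g. "HDS"
    array_id: str        # serial number or frame ID
    raw_gb: float        # total installed capacity
    usable_gb: float     # after RAID overhead and spares
    allocated_gb: float  # carved into LUNs and presented to hosts
    used_gb: float       # actually consumed by hosts

def free_usable(rec: ArrayCapacity) -> float:
    """Headroom left for new allocations, in GB."""
    return rec.usable_gb - rec.allocated_gb

# One record per array, however the raw data was collected.
usp = ArrayCapacity("HDS", "USP-12345", 120_000, 90_000, 75_000, 60_000)
print(f"{usp.array_id}: {free_usable(usp):,.0f} GB free")
```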
Now reporting is good, but management is hard by comparison. Reporting on hardware doesn't necessarily break it; SRM software which changes the array could. That means the software needs to know exactly how to interact with an array, which in turn requires decent API access.
Vendors aren't going to give this out to each other, so here's a proposal:
Vendors fund a single organisation to develop a unified global SRM tool. They provide API access under licence terms which don't permit sharing of that API with competitors. As the product is licensed to end users, each vendor gets paid a fee per array and per GB managed, so they have some financial recompense for putting skin in the game.
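To put some purely illustrative numbers on that model (every figure below is made up):

```python
# Hypothetical return to one vendor from the per-array, per-GB fee model.
arrays_managed = 40
avg_gb_per_array = 50_000        # 50TB per array
fee_per_array = 500.0            # flat annual fee per array ($)
fee_per_gb = 0.02                # annual fee per GB managed ($)

annual_fee = (arrays_managed * fee_per_array
              + arrays_managed * avg_gb_per_array * fee_per_gb)
print(f"Annual licence revenue to the vendor: ${annual_fee:,.0f}")  # $60,000
```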
Anyone interested?
Monday 26 January 2009
Personal Computing: Mangatars
It seems to be all the rage to change your Twitter image to a Manga avatar or Mangatar. Well, here's mine.
Saturday 24 January 2009
Enterprise Computing: Using USP for Migrations
Thanks to Hu Yoshida for the reference to a previous post of mine which mentioned using virtualisation (USP, SVC, take your pick) for performing data migrations. As Hu rightly points out, the USP, USP-V, NSC55 and USP-VM can all be used to virtualise other arrays and migrate data into the USP as part of a new deployment. However, nothing is ever as straightforward as it seems. This post discusses the considerations in using a USP to virtualise and migrate data from external sources.
In summary, here are the points that must be considered when using USP virtualisation for migration:
- Configuring the external array to the USP requires licensing Universal Volume Manager.
- UVM is not free!
- Storage ports on the USP have to be reserved for connecting to the external storage.
- LUN sizes from the source array have to be retained.
- Externalised LUN sizes aren't guaranteed to exactly match the source, so they need verifying (see the sketch after this list).
- Once "externalised", LUNs are replicated into the USP using ShadowImage/TSM/VM.
- A host outage may be required to re-zone and present the new LUNs to the host.
- If the source array is replicated, this adds further complication.
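The LUN size issue in particular is worth checking programmatically before cutover. Here's a sketch with made-up inventory data standing in for real array output (in practice you'd parse the sizes from each array's management CLI):

```python
# Compare source-array LUN sizes against what the USP reports after
# externalisation. Inventories are hypothetical dicts of
# LUN ID -> size in 512-byte blocks.
source_luns   = {"00:01": 41943040, "00:02": 83886080, "00:03": 20971520}
external_luns = {"00:01": 41943040, "00:02": 83886080, "00:03": 20971264}

def size_mismatches(source, external):
    """Yield (lun, source_blocks, external_blocks) where sizes differ."""
    for lun, src_blocks in sorted(source.items()):
        ext_blocks = external.get(lun)
        if ext_blocks != src_blocks:
            yield lun, src_blocks, ext_blocks

for lun, src, ext in size_mismatches(source_luns, external_luns):
    print(f"LUN {lun}: source {src} blocks, external {ext} blocks - investigate before replicating")
```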
Friday 23 January 2009
Cloud Storage: Review - Dropbox
Over the last few weeks I've been using a product called Dropbox. This nifty little tool lets you sync your files from anywhere and across multiple platforms. It's a perfect example of Cloud Storage in action.
Why This is Good
Drawbacks