Friday 28 September 2007

Storage Standards - Arrays

After a recent posting by Stephen I thought it would be good to discuss standards and the sort of standards I'd look to implement when designing an infrastructure from scratch. Obviously it isn't always possible to start from a clean slate and consequently many sites have what I describe as "partial implementations of multiple standards" where different people have put their stamp on an environment but not necessarily been able or wanted to go back and change the previous configuration. Still, let's assume we have a little flexibility and have a blank canvas, so to speak.

Objectives

I think it's worth having an idea of what you are trying to achieve by laying down standards in the first place. I think it boils down to a number of issues:

  • Risk: A messy configuration poses a greater risk of data loss. If Storage Admins aren't sure whether disks are in use and by which servers, then devices can easily be re-used inadvertently, or omitted from replication and cause issues in a DR scenario. For me, reducing risk is the main reason for adhering to a set of rigorous standards (sounds like an airline, where they always tell you as you board that their main priority is your safety; I always thought it was to make money from me).

  • Efficiency: Storage is easy to overallocate and allocations today are very inefficient. We keep getting told this on the surveys that come out on a weekly basis. In almost no time at all we'll have more wasted data than atoms in the known universe or something like that. I think the figures get quoted as anywhere between 30-70% wastage. Either way, that's a lot of money being wasted on spinny stuff which doesn't need to be. With sensible standards, at least the storage can be accounted for and attributed to an owner (who can then be charged lots for the privilege), even if that owner isn't then efficient themselves. Efficiency at the host level is a whole separate argument to be discussed later.

  • Manageability: When the storage environment is easy to understand, it is easier to administer. I once worked at a site which had four EMC 8830 arrays (two local, two in a remote site) all connected to each other. The local arrays had SRDF'd gatekeepers! (Warning: that was an EMC-based joke; apologies to all the hardened HDS fans out there who don't get that one). Needless to say, locating storage which could be reused and determining which hosts did or did not have replication was a time-consuming job. Each allocation took 3-4 times longer than necessary and half my time was spent attempting to clean the environment up.

So now we know what the reasons are, perhaps we can look at some of the standards to apply. Unfortunately most standards will tend to be very generic, as local restrictions will constrain exactly how arrays are laid out.

  • Use RAID protection. This may seem a little obvious, however what I mean by this statement is that you should be reviewing your use of RAID and the RAID format used for each tier. Lower-tiered storage may be more suited to a higher RAID level but high performance data may need RAID 1/10. Either way, you need it and you should have a standard per tier.

  • Distribute data across as many physical disks as possible. This also may seem like common sense but achieving it isn't always that simple. As disks have become larger, more LUNs can be carved from each RAID group. This can have a negative impact on performance if only a small number of array groups are in use. There are also issues when upgrading arrays; a single 4-drive array group can provide nearly a TB of storage using 300GB drives (much more with the new 500GB and 750GB drives as they become the norm), so the physical restrictions of the hardware become more apparent. It can be a real challenge to balance cost against performance if your organisation insists on only adding small increments of storage as they are needed.

  • Keep LUN sizes consistent. I don't like having a large number of differing LUN sizes. In fact I prefer to have just one if I can get away with it, however it isn't always that simple. LUN sizes should be defined in blocks (as most arrays use 512-byte blocks) and be the same even across storage from different vendors. This makes any type of migration easier to achieve. One tricky problem is choosing an appropriate LUN size. Choosing an appropriate size or sizes borrows heavily from historical numbers, but you should consider the size of your array groups (or physical disks) when planning LUN sizes. The more LUNs in a RAID group, the higher the risk of contention at the physical level. Consider keeping any LUN sizes as multiples of each other; as systems grow, LUNs can then be consolidated down to larger sizes.

  • Use a dispersed method for numbering LUNs. Both EMC and HDS (can't speak for IBM) will recommend an initial configuration in which consecutively numbered LUNs are taken from each array group in turn, repeating until all LUNs are numbered (see the sketch after this list). This means whichever range of sequential LUN numbers you choose, they will automatically be balanced across array groups. I have worked at sites that have used both methods and I can say that sequential numbering adds a significant amount of extra work to the allocation process.

  • Don't dedicate array groups to specific LUN sizes. It may be tempting to use one array group to create smaller LUNs for, say, log devices. This is *not* a good idea as you will end up creating an I/O bottleneck on those volumes. If you must have differing LUN sizes, create an equal number from each array group.

  • Distribute physical disks across all back-end directors. This may seem obvious but it is possible to unevenly balance disks across back-end directors, especially if storage is purchased in increments and different drive sizes are used. Keep things consistent, distribute the disks of each size evenly across the available directors.

  • Distribute hosts across all front-end directors. There are two possible ways to distribute hosts: by capacity and by performance. You should decide which is more important for you and load balance accordingly. Continue to monitor both performance and allocations to ensure you are getting the best out of your FEPs.

  • Dedicate replication links from each front-end director card. I like to ensure I have plenty of ports for replication. I've seen one issue recently (vendor name withheld) where a lack of processor power in terms of FEPs for replication caused a performance problem, which was resolved by adding more replication links (but not necessarily more cross-site bandwidth), so I reserve at least one port/port pair on each FED card for this purpose.

  • Dedicate specific FE ports for replicated and non-replicated storage. I prefer if possible to dedicate FEPs to production/replicated and non-replicated hosts in order to ensure that the same performance is available on both the primary and DR sites. If a DR host is also used as a UAT or test box, then place those disks on a separate FEP; that's what SAN is for!
  • Dedicate array groups for replicated and non-replicated storage. This seems to contradict some of the other statements however, from a logistical point of view and if enough storage is available, it can be attractive to reserve out certain array groups for replicated and non-replicated storage, ensuring that the same arrays and LDEV/LUN numbers are used on replicated boxes.
  • Allocate host storage from as many array groups as possible. When performing an allocation, try and spread the load across as many RAID groups as possible.
  • Make use of your LDEV/LUN ranges. On HDS systems as an example, I like to reserve out CU ranges for tiers, so 00-0F for tier 1, 10-1F for tier 2 and 30-3F for command devices, external devices etc.
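To illustrate the dispersed numbering point, here's a minimal sketch in Python. The array group count and LUNs-per-group figures are made up purely for illustration; the point is that dealing consecutive LUN numbers round-robin across array groups means any contiguous range of LUN numbers is automatically spread over the physical groups.

```
# Minimal sketch: deal consecutive LUN numbers round-robin across array groups,
# so any block of sequentially numbered LUNs is spread over the physical groups.
# The group count and LUNs-per-group figures are made up for illustration.

ARRAY_GROUPS = 8          # hypothetical number of RAID/array groups
LUNS_PER_GROUP = 16       # hypothetical LUNs carved from each group

def dispersed_layout(groups=ARRAY_GROUPS, luns_per_group=LUNS_PER_GROUP):
    """Return a mapping of LUN number -> array group using round-robin numbering."""
    layout = {}
    lun = 0
    for _ in range(luns_per_group):      # one LUN from each group per pass
        for group in range(groups):
            layout[lun] = group
            lun += 1
    return layout

if __name__ == "__main__":
    layout = dispersed_layout()
    # Any run of consecutive LUN numbers now touches as many groups as possible.
    print("LUNs 0-11 map to groups:", [layout[lun] for lun in range(12)])
    # Sequential numbering (all LUNs of group 0 first, then group 1, ...) would
    # put LUNs 0-11 on the same group and concentrate the I/O load.
```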

This is by no means an exhaustive list and I'm sure I will remember more. If anyone has any suggestions on what they do, I'd be interested to hear!

Tuesday 25 September 2007

Storage Futures or is it Options?

One of the trickiest problems in the storage industry is managing demand. Internal customers seem to think that storage isn't physical and we just have tons of the virtual stuff we can pick out of the air as required. I don't think they expect to talk to the server teams and find they have dozens of servers sitting spinning just waiting for the next big project, but for some reason with storage they do.

So this lack of foresight causes demand problems, as arrays have to be managed. Whilst we could simply add a new RAID group or a bunch of disks whenever customer demand requires it, allocating all the new storage from the same RAID group means the chances are performance would "suck". Really we want to add storage in bulk and spread the workload.

Similar problems occur when arrays cannot be expanded and a new footprint has to be installed (which can have significant lead time and ongoing costs, for instance fabric ports for connectivity). I can hear the bleating of many a datacentre manager now, asking why yet more storage is needed and where it will go in the already overcrowded datacentre.

The standard charging model is to have a fixed "price guide" which lists all the storage offerings. Internal customers are then charged in arrears for their storage usage. Some companies may have an element of forward planning, but it is usually a tortuous process and anyway, someone always comes along with an allegedly unforeseen requirement.

Ideally, the answer is for all storage users to manage their own demand, estimating BAU (Business As Usual) growth and requirements for new products. Unfortunately, the penalties for lack of planning don't usually exist and poor practices perpetuate.

So how about offering futures (or options) on storage? Internal customers can purchase a right to buy (option) or an obligation to buy (future) storage for some time in the future, say 1, 3, 6 or 12 months ahead. In return they receive a discounted price. Storage hardware costs from vendors are always dropping, so the idea of charging less in the future in order to gain more insight into demand is probably not an unreasonable concept.

Futures/Options could also work well with thin provisioning. Storage is pre-allocated on a virtual basis up front, then provided on the basis of futures contracts by adding more real storage to the array.

Now the question is, to use futures or options? Well, perhaps futures best represent BAU demand as this is more likely to be constant and easily measurable. Options fit project work, where projects may or may not be instigated. Naturally futures would attract a higher discount than options would.

I think I need to make up a little spreadsheet to test this theory out...
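Since I haven't built the spreadsheet yet, here's the sort of toy model I have in mind, in Python rather than Excel. Every number in it (base price, price erosion rate, discount rates, discount cap) is plucked out of the air purely to test the shape of the idea; it isn't a real charging model.

```
# Toy model of storage futures/options pricing. All figures are illustrative:
# a base charge per GB, an assumed hardware price erosion rate, and discounts
# that grow with how far ahead the customer commits. Futures (an obligation to
# buy) earn a bigger discount than options (a right to buy), because they give
# better visibility of demand.

BASE_PRICE_PER_GB = 5.00            # today's internal charge per GB (made up)
ANNUAL_PRICE_EROSION = 0.25         # assume vendor costs fall ~25% a year (made up)
FUTURE_DISCOUNT_PER_MONTH = 0.015   # extra discount per month of firm commitment
OPTION_DISCOUNT_PER_MONTH = 0.008   # smaller discount, since it may never be exercised

def spot_price(months_ahead):
    """Expected 'spot' charge per GB at some point in the future."""
    return BASE_PRICE_PER_GB * (1 - ANNUAL_PRICE_EROSION) ** (months_ahead / 12)

def contract_price(months_ahead, kind="future"):
    """Price per GB agreed today for delivery in `months_ahead` months."""
    rate = FUTURE_DISCOUNT_PER_MONTH if kind == "future" else OPTION_DISCOUNT_PER_MONTH
    discount = min(rate * months_ahead, 0.30)   # cap the discount somewhere sensible
    return BASE_PRICE_PER_GB * (1 - discount)

if __name__ == "__main__":
    for months in (1, 3, 6, 12):
        print(f"{months:2d} months ahead: spot ~{spot_price(months):.2f}, "
              f"future {contract_price(months, 'future'):.2f}, "
              f"option {contract_price(months, 'option'):.2f} per GB")
```

Plugging in real erosion and discount figures is the next step, but even the toy version shows why futures should carry the deeper discount.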

Monday 24 September 2007

PSSST....Green Storage

HDS announced today a few amendments to the AMS/WMS range. The most interesting is the apparent ability to power down drives which are not in use, a la Copan.

According to the press release above, the drives can be powered down by the user as necessary, which presents some interesting questions. Firstly, I guess this is going to be handled through a command device (which presumably is not powered down!) which will allow a specific RAID group to be chosen. Imagine choosing to power down a RAID group someone else is using! Presumably all RAID types can be supported with the power down mode.

One of the cardinal rules about hardware I learned years ago was never to power it off unless absolutely necessary; the power down/up sequence produces power fluctuations which can kill equipment. I'm always nervous about powering down hard drives. I've seen the Copan blurb on all the additional features they have in their product which ensures the minimum risk of drive loss. I'd like to see what HDS are adding to AMS/WMS to ensure power down doesn't cause data loss.

Finally, what happens on the host when an I/O request is issued for a powered down drive? Is the I/O simply failed? It would be good to see this explained as I would like to see how consistency is handled, especially in a RAID configuration.

However, any step forward which makes equipment run cooler is always good.

The announcement also indicated that 750GB SATA drives will be supported. More capacity, less cooling....

NTFS Update

I did some more work on my NTFS issue on Friday. As previously mentioned, I was seeing NTFS filesystems with large levels of fragmentation even after drives were compressed.

The answer turns out to be quite simple; Windows doesn't consolidate the free space blocks which accumulate as files are created and deleted. So, as a test I started with a blank 10GB volume and created a large file on it. Sure enough the allocation occurs in a small number (2 or 3) of extents. I then deleted the large file, created 10,000 small (5K) files and deleted those too. I then re-created the large file, which was immediately allocated in hundreds of small fragments and needed defragmentation straight away. The large file was created using the free space blocks freed up from the small files.
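For anyone who wants to repeat the test, this is roughly what I did, sketched in Python. The drive letter, file sizes and counts are just the values from my test and would need adjusting; the script only creates and deletes the files, with fragmentation inspected afterwards using the standard defragmenter (and Diskview, mentioned below).

```
# Rough reproduction of the NTFS free-space fragmentation test.
# Assumes an otherwise empty test volume mounted as T: (adjust to suit).
# This script only creates and deletes files; check fragmentation afterwards
# with the Windows defragmenter or Sysinternals Diskview.
import os

TEST_DIR = r"T:\fragtest"           # hypothetical empty test volume
LARGE_FILE = os.path.join(TEST_DIR, "large.bin")
LARGE_SIZE = 2 * 1024 ** 3          # a 2GB "large" file
SMALL_COUNT = 10_000
SMALL_SIZE = 5 * 1024               # 5K small files

def write_file(path, size, chunk=1024 * 1024):
    """Write `size` bytes of zeros to `path` in 1MB chunks."""
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            step = min(chunk, remaining)
            f.write(b"\0" * step)
            remaining -= step

os.makedirs(TEST_DIR, exist_ok=True)

# 1. Large file on a clean volume -> allocated in a handful of extents.
write_file(LARGE_FILE, LARGE_SIZE)
os.remove(LARGE_FILE)

# 2. Create and delete thousands of small files, chopping the free space up.
for i in range(SMALL_COUNT):
    write_file(os.path.join(TEST_DIR, f"small_{i:05d}.bin"), SMALL_SIZE)
for i in range(SMALL_COUNT):
    os.remove(os.path.join(TEST_DIR, f"small_{i:05d}.bin"))

# 3. Recreate the large file -> now allocated from the fragmented free space
#    and born with hundreds of fragments, even though the volume is "empty".
write_file(LARGE_FILE, LARGE_SIZE)
```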

What's not clear from the standard fragmentation tool provided with Windows is that the free space created by the deletion of files is added to a chain of free space blocks. These free space blocks are never consolidated, even if they are contiguous (as in this instance, where I deleted all the files on the disk). This means that even if you *delete* everything on a volume, the free space is still fragmented and new files will be created with instant fragmentation. The other thing to note is that the standard Windows defragmenter doesn't attempt to consolidate those segments when a drive is defragmented; it simply ensures that files are re-allocated contiguously. Nor does it report that fact.

I'm currently downloading Diskeeper, which allegedly does consolidate free space. I'm going to trial this and see how it affects my fragmentation problem.

Incidentally, I used one of Sysinternals' free tools to look at the map of my test drive. Sysinternals were bought by Microsoft in the summer of 2006, however you can find their free tools here. I used Diskview to give me a map of the drive and understand what was happening as I created and deleted files. What I would like, however, is a tool which displays the status of free space fragments. I haven't found one of those yet.

So, now I have an answer, I just have to determine whether I think fragmentation causes any kind of performance issue on SAN-presented disks!

Friday 21 September 2007

Problems Problems

This week I've been working on two interesting (ish) problems. Well, one more interesting than the other; the other a case of a vendor needing to think more about requirements.

Firstly, Tuning Manager (my old software nemesis) strikes again. Within Tuning Manager it is possible to track performance for all LUNs in an array. The gotcha I found this week is that the list of monitored LUNs represents only those allocated to hosts and is a static list which must be refreshed each time an allocation is performed!

This is just a lack of thought on the part of the developers not to provide a "track everything" option, so it isn't necessary to keep going into the product, selecting the agent, refreshing the LUN list and tagging them all over again. No wonder allocations can take so long and be fraught with mistakes when Storage Admins have to include in their process the requirement to manually update the tuning product. I'm still waiting for confirmation that there isn't a way to automatically report on all LUNs. If there isn't, then a product enhancement will be required to meet what I want. In the meantime, I'll have to ensure things are updated manually. So if you configured Tuning Manager and the LUN list when you first installed an array, have a quick look to see if you're monitoring everything or not.
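A crude way of doing that quick look is to script the comparison. This is only a sketch and doesn't touch any Tuning Manager interface; it just assumes you can export two plain text lists (one of allocated LUNs from the array, one of the LUNs currently tagged for monitoring) and compares them. The file names are hypothetical.

```
# Crude cross-check of "what's allocated" versus "what's being monitored".
# Assumes two text exports, one LDEV/LUN identifier per line; the file names
# are hypothetical and the lists would come from your own array/reporting tools.

def load_luns(path):
    """Return the set of non-blank, upper-cased identifiers in a text file."""
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

allocated = load_luns("allocated_luns.txt")     # e.g. exported from the array config
monitored = load_luns("monitored_luns.txt")     # e.g. exported from the tuning tool

missing = sorted(allocated - monitored)
stale = sorted(monitored - allocated)

print(f"{len(missing)} allocated LUNs not being monitored:")
for lun in missing:
    print("  ", lun)
print(f"{len(stale)} monitored LUNs no longer allocated:")
for lun in stale:
    print("  ", lun)
```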

I'm sure some of you out there will point out, with good reason, why HTnM doesn't automatically scan all LUNs, but from my perspective, I'm never asked by senior management to monitor a performance issue *before* it has occurred, so I always prefer to have monitoring enabled for all devices and all subsystems if it doesn't have an adverse effect on performance.

Second was an issue with the way NTFS works. A number of filesystems on our SQL Server machines show high levels of fragmentation, despite there being plenty of freespace on the volumes in question. This fragmentation issue seems to occur even when a volume is cleared and files are reallocated from scratch.

A quick trawl around the web found me various assertions that NTFS deliberately leaves file clusters between files in order to provide an initial bit of expansion room. I'm not sure this is true as I can't find a trusted source to indicate it is standard behaviour. In addition, I wonder if it is down to the way in which some products allocate files; for instance, when a SQL backup starts to create a backup file it has no real idea how big the file will become. NTFS (I assume) will choose the largest block of free space available and allocate the file there. If another process allocates a file almost immediately, then it will get allocated just after the first file (which may only be a few clusters in size at this stage). Then the first file gets extended and "leapfrogs" the second file, and so on, producing fragmentation in both files.
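To convince myself the "leapfrog" theory at least produces the right sort of pattern, here's a very simplified simulation in Python. It isn't NTFS internals, just a largest-free-run allocator with two files being extended alternately; the cluster counts are arbitrary.

```
# Very simplified simulation of two files being extended alternately on a
# volume that allocates from the largest free run of clusters. Nothing here is
# NTFS internals; it just shows how alternating extensions produce interleaved
# (fragmented) files.

VOLUME_CLUSTERS = 1000

def largest_free_run(volume):
    """Return (start, length) of the largest run of free clusters."""
    best = (0, 0)
    start = None
    for i, owner in enumerate(volume + ["END"]):   # sentinel closes a trailing run
        if owner is None and start is None:
            start = i
        elif owner is not None and start is not None:
            if i - start > best[1]:
                best = (start, i - start)
            start = None
    return best

def extend(volume, name, clusters):
    """Give `name` another `clusters` clusters from the largest free run."""
    start, length = largest_free_run(volume)
    for i in range(start, start + min(clusters, length)):
        volume[i] = name

def fragments(volume, name):
    """Count contiguous runs of clusters belonging to `name`."""
    runs, prev = 0, None
    for owner in volume:
        if owner == name and prev != name:
            runs += 1
        prev = owner
    return runs

volume = [None] * VOLUME_CLUSTERS
for _ in range(20):                 # two backups growing at the same time
    extend(volume, "backup_A", 30)
    extend(volume, "backup_B", 10)

print("backup_A fragments:", fragments(volume, "backup_A"))
print("backup_B fragments:", fragments(volume, "backup_B"))
```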

I'm not sure if this is what is happening, but if this is the way NTFS is working then it would explain the levels of fragmentation we see (some files have 200,000+ fragments in a 24GB file). In addition, I don't know for definite that the fragmentation is having a detrimental impact on performance (these are SAN-connected LUNs). Everything is still speculation. I guess I need to do more investigation...

Saturday 15 September 2007

Pause for Thoughtput

I've just read a couple of Gary O's postings over at Thoughtput, the blog from Gear6.

In his article "Feeding the Virtual Machines", he discusses NAS and SAN deployment for a virtual environment and makes the bold claim:

"Most people tend to agree that NAS is easier and more cost effective than SANs for modern data center architectures."

I have to say that I for one don't. Anyone who's had to deploy hardware such as Netapp filers will know there's a minefield of issues around security, DNS and general configuration which, unless you know the products intimately, are likely to catch you out. I'm not saying SAN deployments are easier, simply that both SAN and NAS deployments have their pros and cons.

The second post, Shedding Tiers, questions the need to tier storage in the first place, and Gary makes the comment:

"If money were no object, people would keep buying fast drives"

Well, of course they would. I'd also be driving a Ferrari to work and living in Cannes with a bevy of supermodels on each arm, but unfortunately, like most people (and businesses), I have champagne tastes and beer money...

Tiering is only done to save money as Gary rightly points out, but putting one great honking cache in front of all the storage seems a bit pointless. After all, that cache isn't free either and what happens if those hosts who are using lower tier storage don't need the performance in the first place?

I almost feel obliged to use BarryB's blogketing keyword.... :0)

Friday 14 September 2007

SAN Virtual Appliances

LeftHand, FalconStor, Arkeia and Datacore all now offer VMware appliance versions of their products. I'm in the process of downloading them now and I'm hoping to install over the next few days and do some testing. I've previously mentioned some VM NAS products which I've installed but not reported back on. I'll try to summarise all my findings together.

It seems that the market for virtual appliances (certainly in storage) is getting bigger. I think this is a good thing but I'm not sure that the virtualisation technology today provides capabilities to allow all vendors to virtualise their products. I suspect that the iSCSI brigade will get best benefit out of this wave of technology but fibre channel will not, as (from my experience) VM products don't directly pass through fibre channel hardware to the VM guests (I'm aware of how RDM works in a VMware environment but I don't think pass-through of target devices is sufficient).

Will IBM produce an SVC Virtual Appliance? I doubt it, but products such as Invista should be perfect candidates for virtualising as they don't sit in the data path and the controller parts aren't critical to performance. So EMC, show us your commitment to Invista and make 3.0 the virtual version!

Wednesday 12 September 2007

Green Poll

Here are the results from the green poll (13 votes only :-( )

Q: Is the discussion of green storage just hype?

  • 54% - Yes it is hype, vendors are riding the bandwagon
  • 15% - No, it is an important issue and vendors are solving it
  • 15% - I'm not sure still deciding
  • 15% - No, it is an important issue and vendors are not solving it

Highly unscientific due to the poll size, but I have to agree that a lot of the discussion is smoke and mirrors.

While we're on the subject, I had a look at Western Digital's "green" hard drives. They are claiming that, with a little bit of clever code, they can reduce the power demands of their higher-end SATA range. Here's a clip of the specific new features taken from their literature:

IntelliPower™ — A fine-tuned balance of spin speed, transfer rate and cache size designed to deliver both significant power savings and solid performance.
IntelliSeek™ — Calculates optimum seek speeds to lower power consumption, noise, and vibration.
IntelliPark™ — Delivers lower power consumption by automatically unloading the heads during idle to reduce aerodynamic drag.

The use of these techniques is claimed to reduce idle power to 4.0W and average read/write power to 7.5W per drive. I've had a look at other manufacturers and this is a saving of about 4W per drive. WD make plenty of statements as to how much this represents in cost, and no doubt it is a good thing that manufacturers are thinking in this way, however it does make me think we should be examining exactly what data we're storing on disk if we are happy with just a large saving in idle power. If the data is not inactive then obviously the power savings are less; there's no free lunch here and if data is active then a drive is going to use power. SATA drives may be able to compromise on performance but I can't imagine anyone purchasing nice fast 15K drives will want to compromise in any way. (While I think of it, developing a tiered storage strategy should include evaluating the "cost" of accessing the data in power terms.)
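As a back-of-an-envelope check on what a 4W per drive saving is actually worth, something like the following will do. The drive count and electricity price are assumptions, and I've ignored the cooling overhead, which in a datacentre would add a good chunk on top.

```
# Back-of-an-envelope saving from ~4W less per idle drive. The drive count and
# electricity price are assumptions; cooling overhead would increase the figure.

DRIVES = 240                 # hypothetical SATA drive count across an array
SAVING_W_PER_DRIVE = 4.0     # quoted idle saving versus comparable drives
PRICE_PER_KWH = 0.10         # assumed electricity price (currency units per kWh)
HOURS_PER_YEAR = 24 * 365

kwh_saved = DRIVES * SAVING_W_PER_DRIVE * HOURS_PER_YEAR / 1000
print(f"~{kwh_saved:,.0f} kWh per year, ~{kwh_saved * PRICE_PER_KWH:,.0f} per year "
      f"at {PRICE_PER_KWH}/kWh (before cooling overhead)")
```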

Tuesday 11 September 2007

USP-VM

Hitachi has announced (10th September) the availability of a new storage array, the USP-VM. At first glance this appears to be the USP-V equivalent of the NSC55, as it has very similar characteristics in terms of cache cards, FEPs etc. Unfortunately HDS have provided links to specification pages, not all of which include USP-VM references. A bit sloppy, that.

I've previously deployed a number of NSC55s and within 6 months wondered whether they were the right decision. They weren't as scalable as I needed and there were a few features (such as BED ports and FED ports sharing the same interface card) which were a bit of a concern (imagine losing a FEP and having to take half of all your BE paths offline to fix the problem). I'm always reminded of the DMX1000/2000/3000 range when I think of the NSC model, as these EMC arrays weren't expandable and of course a DMX1000 quickly filled up....

Hu describes the USP-VM as "Enterprise Modular" in his blog entry. This may be a bit generous as (a) I doubt the USP-VM will be priced as low as modular storage and (b) I don't think it will support the whole range of disks available in a modular array. I say "think" as the link to the capacity page for the USP products doesn't yet include the USP-VM.....

Friday 7 September 2007

Virtualisation Update

Thanks to everyone who commented on the previous post relating to using virtualisation for DR. I'm looking forward to Barry's more contemporaneous explanation of the way SVC works.

I guess I should have said I understand Invista is stateless - but I didn't - so thanks to 'zilla for pointing it out.

So here's another issue. If SVC and USP cache the data (which I knew they did), then what happens if the virtualisation appliance fails? I'm not just thinking about a total failure, but a partial failure or some other issue which compromises the data in cache.

One thing that always worried me about a USP virtualising solution was understanding what would happen if a failure occurred in the data path. Where is the data? What is the consistency position? A datacentre power-down could be a good example. What is the data status as the equipment is powered back up?

Using Virtualisation for DR

It's good to see virtualisation and the various products being discussed again at length. Here's an idea I had some time ago for implementing remote replication by using virtualisation. I'd be interested to know whether it is possible (so far no-one from HDS can answer the question on whether USP/UVM can do this, but read on).

The virtualisation products make a virtue out of allowing heterogeneous environments to be presented as a unified storage infrastructure. This can even mean carving LUNs/LDEVs presented from an array into constituent parts to make logical virtual volumes at the virtualisation level. Whilst this can be done, it isn't a requirement, and in fact HDS sell USP virtualisation on the basis that you can virtualise an existing array through the USP without destroying the data, then use the USP to move the data to another physical LUN. Presumably the 1:1 mapping can be achieved on Invista and SVC (I see no reason why this wouldn't be the case). Now, as the virtualisation layer simply acts as a host (in the USP's case a Windows one - not sure what the others emulate), it is possible (but not usually desirable) to present storage which is being virtualised to both the virtual device and a local host, by using multiple paths from the external array.

If the virtualisation engine is placed in one site and the external storage in another, then the external storage could be configured to be accessed in the remote site by a DR server. See example 1.

Obviously this doesn't gain much over a standard solution using TC/SRDF, other than perhaps the ability to write asynchronously to the remote storage, making use of the cache in the virtualisation engine to provide good response times. So, the second picture shows, using a USP as an example, a three-datacentre configuration where there are two local USPs providing replication between each other, but the secondary LUNs in the "local DR site" are actually located on external storage in a remote datacentre. This configuration gives failover between the local site pair and also access to a third copy of data in a remote site (although technically, the third copy doesn't actually exist).

Why do this? Well, if you have two closely sited locations with existing USPs, want to retain synchronous replication between them and don't want to pay for a third full copy of data, then you get a poor man's 3DC solution without the cost of that third data copy.

Clearly there are some drawbacks; you are dependent on comms links to access the secondary copy of data and in a DR scenario performance may be poor. In addition, as the DR USP has to cache writes, it may not be able to destage them to the external storage in a timely fashion to prevent cache overrun due to the latency on writing to the remote external copy.

I think there's one technical question which determines whether this solution is technically feasible, and that is: how do virtualisation devices destage cached I/O to their external disks? I see two options: either they destage using an algorithm which minimises the amount of disk activity, or they destage in an order which ensures the integrity of data on the external disk should the virtualisation hardware itself fail. I would hope the answer is the latter rather than the former, as if the virtualisation engine suffered some kind of hardware failure, I would want the data on disk to still have write-order integrity. If this is the case, then the designs presented here should mean that the remote copy of data would still be valid after the loss of both local sites, albeit as an async copy slightly out of date.
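To make the two destage options concrete, here's a toy illustration in Python. It reflects nothing about how any vendor actually destages cache; it just shows why the choice matters. The same set of cached writes is destaged either in an order chosen to minimise disk activity, or strictly in write order, and only the latter leaves a consistent (if older) image on the external disk if the engine dies part-way through.

```
# Toy illustration of the two destage strategies. Cached writes are modelled as
# (sequence number, LBA) pairs; nothing here reflects any vendor's algorithm.

cached_writes = [(1, 500), (2, 10), (3, 510), (4, 20), (5, 505)]

# Option 1: destage sorted by LBA to minimise head movement / disk activity.
# If the engine dies after the first three, the disk holds writes 2, 4 and 1
# but not 3 - a state the application never produced.
efficiency_order = sorted(cached_writes, key=lambda w: w[1])

# Option 2: destage strictly in write order. An interruption at any point
# leaves the disk as it looked at some earlier moment in time - out of date,
# but with write-order integrity preserved.
integrity_order = sorted(cached_writes, key=lambda w: w[0])

print("Destage for efficiency:", efficiency_order)
print("Destage for integrity :", integrity_order)
```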

Can IBM/HDS/EMC answer the question of integrity?

Wednesday 5 September 2007

Invista

There have been a few references to Invista over the last couple of weeks, notably from Barry discussing the "stealth announcement".

I commented on Barry's blog that I felt Invista had been a failure, judging by the number of sales. I'm not quite sure why this is so, as I think that virtualisation in the fabric is ultimately the right place for the technology. Virtualisation can be implemented at each point in the I/O path - the host, fabric and array (I'll exclude application virtualisation as most storage managers don't manage the application stack). We already see this today: hosts use LVMs to virtualise the LUNs they are presented with; Invista virtualises in the fabric; SVC from IBM sits in the middle ground between the fabric and the array; and HDS and others enable virtualisation at the array level.

But why do I think fabric is best? Well, host-based virtualisation is dependent on the O/S and LVM version. Issues of support will exist as the HBAs and host software will have supported levels to match the connected arrays. It becomes complex to match multiple O/S, vendor, driver, firmware and fabric levels across many hosts and even more complex when multiple arrays are presented to the same host. For this reason and for issues of manageability, host-based virtualisation is not a scalable option. As an example, migration from an existing to a new array would require work to be completed on every server to add, lay out and migrate data.

Array-based virtualisation provides a convenient stop-gap in the marketplace today. Using HDS's USP as an example, storage can be virtualised through the USP, appearing just as internal storage within the array would. This provides a number of benefits. Firstly, driver levels for the external storage are now irrelevant (only USP support is required, regardless of the connected host); the USP can be used to improve/smooth performance to the external storage; the USP can be used for migration tasks from older hardware; and external storage can be used to store lower tiers of data, such as backups or PIT copies.

Array-based virtualisation does have drawbacks; all externalised storage becomes dependent on the virtualising array. This makes replacement potentially complex. To date, HDS have not provided tools to seamlessly migrate away from one USP to another (as far as I am aware). In addition, there's the problem of "all your eggs in one basket"; any issue with the array (e.g. physical intervention such as fire, loss of power, a microcode bug etc.) could result in loss of access to all of your data. Consider the upgrade scenario of moving to a higher level of code; if all data was virtualised through one array, you would want to be darn sure that both the upgrade process and the new code are going to work seamlessly...

The final option is to use fabric-based virtualisation, and at the moment this means Invista and SVC. SVC is an interesting one as it isn't an array and it isn't a fabric switch, but it does effectively provide switching capabilities. Although I think SVC is a good product, there are inevitably going to be some drawbacks, most notably issues similar to those with array-based virtualisation (Barry/Tony, feel free to correct me if SVC has a non-disruptive replacement path).

Invista uses a "split path" architecture to implement virtualisation. This means SCSI read/write requests are handled directly by the fabric switch, which performs the necessary changes to the fibre channel headers in order to redirect I/O to the underlying physical target device. This is achieved by the switch creating virtual initiators (for the storage to connect to) and virtual targets (for the host to be mapped to). Because the virtual devices are implemented within the fabric, it should be possible to make them accessible from any other fabric-connected switch. This raises the possibility of placing the virtualised storage anywhere within a storage environment and then using the fabric to replicate data (presumably removing the need for SRDF/TrueCopy).

Other SCSI commands which inquire on the status of LUNs are handled by the Invista controller "out of band" over an IP connection from the switch to the controller. Obviously this is a slower access path, but it is not as important in performance terms as the actual read/write activity.
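My mental model of the split-path approach, reduced to a sketch in Python. This is purely conceptual and not how Invista or any switch firmware is actually written; the virtual-to-physical mapping, names and command set are just placeholders.

```
# Conceptual sketch of split-path virtualisation: the switch remaps READ/WRITE
# commands to the backing physical device in the fast path, while control-type
# SCSI commands (INQUIRY, REPORT LUNS, etc.) are handed to the out-of-band
# controller over IP. Purely illustrative - not any vendor's implementation.

# Hypothetical mapping of (virtual target, virtual LUN) -> (physical array, LUN)
VIRTUAL_TO_PHYSICAL = {
    ("vt0", 0): ("array_A", 0x12),
    ("vt0", 1): ("array_B", 0x07),
}

FAST_PATH_COMMANDS = {"READ", "WRITE"}

def handle_command(command, virtual_target, virtual_lun):
    if command in FAST_PATH_COMMANDS:
        # Data path: rewrite the frame headers and forward to the physical LUN.
        phys_array, phys_lun = VIRTUAL_TO_PHYSICAL[(virtual_target, virtual_lun)]
        return f"forwarded {command} to {phys_array} LUN {phys_lun:#04x}"
    # Control path: slower, handled out-of-band by the virtualisation controller.
    return f"sent {command} to controller over IP for ({virtual_target}, {virtual_lun})"

print(handle_command("WRITE", "vt0", 0))
print(handle_command("INQUIRY", "vt0", 1))
```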

I found a copy of the release notes for Invista 2.0 on Powerlink. Probably about the most significant improvement was that of clustered controllers. Other than that, the 1.0->2.0 upgrade was disappointing.

So why isn't Invista selling? Well, I've never seen any EMC salespeople mention the product, never mind push it. Perhaps customers just don't get the benefits, or expect the technology to be too complex, causing issues of support and making DR an absolute minefield.

If EMC are serious about the product you'd have expected them to be shoving it at us all the time. Maybe Barry could do for Invista what he's been doing in his recent posts for DMX-4?

Monday 3 September 2007

PDAs

Slightly off topic, I know, but I mentioned in the last post (I think) that I'd killed my PDA. I had the iPAQ hx4700 with the VGA screen which could be rotated to work in landscape rather than portrait mode. It was a big machine and slightly cumbersome but the screen made up for everything else as I could watch full motion videos on the train to work!

But it died as it didn't survive a fall onto a concrete floor. Not the first time this had happened but previously I'd been lucky.

So I needed a new PDA and fast. I was hoping HP had improved upon the 4700 model with something more swish in a smaller design and with that same great screen.

It wasn't to be. All the models are either hybrid phone devices or use the dull QVGA format which I had on a PDA over 5 years ago! I don't want a phone hybrid as I think people look like t****rs when they use them as a phone.

I decided to look further afield and see what other machines were out there. I had previously been a fan of Psion, who made the best PDA/organisers ever (Psion 5mx in case you're asking), but their demise effectively forced me down the iPAQ route. There are other PDA vendors out there but I'd never heard of most of them and I didn't see any good reviews.

In the end I went for the rather dull hx2700 which I received today. It is dull, dull, dull, dull.

Is it me, or have HP put all their innovation into phone/PDA hybrids? Is this what people expect of a PDA these days? Maybe that's why HP have ignored the non-phone models; the lack of decent competition doesn't help either.

Thank goodness for competition in the storage market or my job would be as boring as my time management!