Friday, 31 October 2008

Pillar Crumbles


I picked this up last night on Mike Workman's blog over at Pillar. Looks like they're suffering in the downturn. Storagezilla thinks this could be 30% of the workforce. I'm sure this will be one of many bad-news stories we hear from the storage industry over the next few months.


I've never understood the point of Pillar's offering. Differentiating performance tiers based on where data sits on a disk platter seems a dead-end idea. Firstly, disks might appear to be random-access devices, but if the heads are servicing one cylinder on a drive they can't be servicing another at the same time. You need some pretty clever code to ensure that lower-tier I/O requests don't overwhelm tier 1 requests in this kind of shared environment. In addition, everyone says SSDs are the future. Once these devices are mainstream, Pillar's tiering model is defunct (unless SSDs have some performance variation across different parts of the silicon!) as there's no performance differential across an SSD device.
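To illustrate what I mean by "clever code", here's a minimal sketch of priority-based scheduling for I/O requests that share a spindle, with simple ageing so lower tiers don't starve. It's purely illustrative: the tier numbers, ageing factor and class names are my own assumptions, not anything from Pillar's actual QoS implementation.

```python
import heapq
import itertools

class TieredIOScheduler:
    """Toy scheduler for I/O requests from different tiers sharing one spindle."""

    def __init__(self, aging=0.1):
        self._queue = []               # heap of (effective_priority, seq, request)
        self._seq = itertools.count()  # tie-breaker keeps FIFO order within a tier
        self._aging = aging            # priority boost per cycle to avoid starvation

    def submit(self, tier, lba, length):
        # tier 1 = highest priority (outer cylinders), tier 3 = lowest
        request = {"tier": tier, "lba": lba, "length": length, "age": 0}
        heapq.heappush(self._queue, (tier, next(self._seq), request))

    def dispatch(self):
        # Age queued requests so a burst of tier 1 I/O can't starve tier 3 forever
        aged = []
        for _, seq, request in self._queue:
            request["age"] += 1
            aged.append((request["tier"] - self._aging * request["age"], seq, request))
        heapq.heapify(aged)
        self._queue = aged
        return heapq.heappop(self._queue)[2] if self._queue else None

scheduler = TieredIOScheduler()
scheduler.submit(tier=3, lba=900_000, length=64)  # archive read on inner cylinders
scheduler.submit(tier=1, lba=1_000, length=8)     # OLTP read on outer cylinders
print(scheduler.dispatch()["tier"])               # tier 1 request is serviced first
```

Even in this toy version you can see the problem: the lower tiers only make progress when the scheduler deliberately gives ground, and both tiers are still fighting over the same heads.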


For me, a Compellent-type architecture still seems best - granular access to each storage tier (including SSD) with dynamic relocation of data between them.
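As a rough illustration of what I mean by dynamic relocation, the sketch below moves hot extents up a tier and cold extents down a tier based on access counts. It's my own simplified take on the general idea, not Compellent's actual code; the tier names, thresholds and extent granularity are assumptions.

```python
TIERS = ["SSD", "FC", "SATA"]       # fastest to slowest

class Extent:
    def __init__(self, extent_id, tier="SATA"):
        self.extent_id = extent_id
        self.tier = tier
        self.access_count = 0       # reset every relocation cycle

def relocate(extents, promote_threshold=100, demote_threshold=10):
    """Move hot extents up a tier and cold extents down a tier."""
    for extent in extents:
        index = TIERS.index(extent.tier)
        if extent.access_count >= promote_threshold and index > 0:
            extent.tier = TIERS[index - 1]       # promote towards SSD
        elif extent.access_count <= demote_threshold and index < len(TIERS) - 1:
            extent.tier = TIERS[index + 1]       # demote towards SATA
        extent.access_count = 0                  # start the next sampling window

# Example: a busy extent drifts up to SSD, an idle one stays on SATA
extents = [Extent(0, "FC"), Extent(1, "SATA")]
extents[0].access_count = 250
extents[1].access_count = 2
relocate(extents)
print([(e.extent_id, e.tier) for e in extents])  # [(0, 'SSD'), (1, 'SATA')]
```

The key point is that the relocation works at extent level rather than whole-LUN level, so only the genuinely hot blocks need to occupy the expensive tier.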


** disclaimer - I have no affiliation with Pillar or Compellent **

2 comments:

Jim McKinstry said...

Pillar’s QoS involves MUCH more than “tiering data on the disk”. While it’s true that Pillar will place data on certain areas of a platter to optimize performance, the true performance gains come from the caching and queuing algorithms implemented in the Slammers. Pillar believes that not all applications are created equal, so their I/Os should not all be treated equally. By using QoS, combined with the Axiom’s distributed RAID technology, Pillar is able to guarantee that applications with higher priorities will have the performance they need. As capacity is added and new applications are brought online, Pillar can accurately predict the impact of those new I/Os and capacity on the overall performance of the system. QoS is far more than tiering data on the disk.

As for Compellent, sure, SSD is fast, but what happens when random parts of your data are scattered across SSD, FC and SATA drives? Performance becomes a crap shoot. Imagine the impact on performance when just part of your data is on SSD and part on SATA: the applications will, essentially, be slowed down to SATA speed. When your data is haphazardly scattered across tiers as far apart as SSD and SATA, you will suffer. The chain is only as strong as its weakest link, and the performance of a LUN is only as fast as its slowest disk.

Chris, I'd be happy to set up a briefing for you to review Pillar's solutions and architecture, and to compare and contrast the Axiom with the competition's offerings.

Jim McKinstry
Director, Marketing
Pillar Data Systems
jmckinstry@pillardata.com

Chris M Evans said...

Jim, I guess the SSD/SATA performance issue is a subjective one - it depends where and how the data is distributed. Yes, it could be a problem, but on the other hand it might not be, if Compellent's algorithms are efficient enough. I'm just speculating here, as I don't have any in-depth knowledge of the product (which is always dangerous).

Happy to take you up on your offer of a briefing - it would be good to add Pillar to the details I'm putting together on different array technologies.