Thursday, 30 November 2006

A rival to iSCSI?

I've said before, some things just pass you by, and so it has been with me and Coraid. This company uses ATA-over-Ethernet (AoE), a lightweight storage protocol which runs directly over Ethernet, to connect hosts to their storage products.

It took me a while to realise how good this protocol could be. iSCSI encapsulates SCSI commands within TCP/IP packets; this creates an overhead but does allow the protocol to be routed over any distance. AoE doesn't use TCP/IP at all; it is its own Ethernet-level protocol, so it doesn't suffer the TCP overhead but it can't be routed. That isn't much of an issue, as plenty of storage networks are only deployed locally anyway.
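To see just how lightweight it is, here's a sketch in Python of what an AoE frame looks like on the wire. The EtherType 0x88A2 and the ten-byte header layout are from my reading of the AoE specification; the function and field names are my own, purely for illustration:

    import struct

    AOE_ETHERTYPE = 0x88A2  # the registered EtherType for ATA-over-Ethernet

    def aoe_frame(dst_mac, src_mac, shelf, slot, tag, payload=b""):
        """Build a minimal AoE frame: Ethernet header plus AoE header.

        There's no IP and no TCP; the target is addressed by MAC and by
        a shelf.slot (major.minor) pair, which is exactly why AoE can't
        be routed beyond the local Ethernet segment.
        """
        eth = dst_mac + src_mac + struct.pack("!H", AOE_ETHERTYPE)
        aoe = struct.pack(
            "!BBHBBI",
            0x10,   # version 1, flags clear
            0,      # error code
            shelf,  # major address (the shelf)
            slot,   # minor address (the slot)
            0,      # command 0: issue ATA command
            tag,    # tag, echoed back so replies can be matched to requests
        )
        return eth + aoe + payload

Ten bytes of header on top of Ethernet, against the full TCP/IP stack an iSCSI PDU has to carry; that's where the efficiency comes from.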

Ethernet hardware is cheap. I found a 24-port GigE switch for less than £1000 (about £40 a port), and NICs can be had for less than £15. Ethernet switches can be stacked, providing plenty of redundancy options. This kills Fibre Channel on cost. iSCSI can use the same hardware, of course; AoE just uses it more efficiently.

AoE is already bundled with Linux, and drivers are available for other platforms. Does this mean the end of iSCSI? I don't think so, mainly because AoE is so different and, more importantly, because it isn't routable. What AoE does offer is another alternative built on cheap, standard, readily available kit. From what I've read, AoE is simple to configure too, certainly less effort than iSCSI. I hope it has a chance.

Wednesday, 22 November 2006

Brocade on the up

Brocade shares were up 10.79% today. I know they posted good results, but that's a big leap for one day. I wish I hadn't sold my shares now! To be fair, I bought at $4 and sold at $8, so I did OK - and it was some time ago.

So does this bode well for the McDATA merger? I hope so. I've been working on Cisco fabrics and the use of VSANs, and I'm struggling to see what real benefit I can get out of them. Example: I could use VSANs to segregate by Line of Business, by Storage Tier and by Host Type (Prod/UAT/DEV). Doing this immediately gives me tens of combinations. The issue is that a storage port can only be assigned to a single VSAN, so I have to decide: do I want to segment my resources to the extent that I can't use an almost unallocated UAT storage port when I'm desperate for production connectivity? At the moment I see VSANs as likely to create more fragmentation than anything else. There's still more thinking to do.
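To put numbers on that, here's a back-of-an-envelope sketch in Python; the lines of business and the port count are invented purely for illustration:

    from itertools import product

    # Hypothetical figures for a mid-sized shop.
    lines_of_business = ["Retail", "Wholesale", "Markets", "Ops"]
    storage_tiers     = ["Tier1", "Tier2", "Tier3"]
    host_types        = ["PROD", "UAT", "DEV"]

    vsans = list(product(lines_of_business, storage_tiers, host_types))
    print(len(vsans), "VSANs")          # 4 x 3 x 3 = 36

    # A storage port lives in exactly one VSAN, so 64 ports across
    # 36 VSANs averages out at fewer than two ports per VSAN - and a
    # starved production VSAN can't borrow the idle UAT port next door.
    storage_ports = 64
    print(storage_ports / len(vsans), "ports per VSAN on average")

Thirty-six VSANs from three modest dimensions, and every port pinned to exactly one of them; that's the fragmentation I'm worried about.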

I'd like to hear from anyone who has practical standards for VSANs. It would be good to see what best practice is out there.

Tuesday, 14 November 2006

Backup Trawling

I had a thought. As we back up more and more data (which we must be, as the amount of storage being deployed increases at 50-100% per year, depending on whose figures you believe), there must be a growing problem in finding the data which actually needs to be restored. Are we going to need better tools to trawl the backup index to find what we want?
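As a trivial example of the kind of query I mean, here's a toy sketch in Python; the catalogue format is invented, as real backup products keep this sort of index in their own databases:

    from datetime import datetime

    # Invented catalogue entries: (path, time taken, media id)
    index = [
        ("/home/fred/report.doc", datetime(2006, 11, 1), "TAPE0042"),
        ("/home/fred/report.doc", datetime(2006, 11, 8), "TAPE0051"),
        ("/home/jane/budget.xls", datetime(2006, 11, 8), "TAPE0051"),
    ]

    def restore_candidates(index, fragment, before):
        """Find the newest copy of each matching file taken before a date."""
        newest = {}
        for path, taken, media in index:
            if fragment in path and taken <= before:
                if path not in newest or taken > newest[path][0]:
                    newest[path] = (taken, media)
        return newest

    print(restore_candidates(index, "report", datetime(2006, 11, 10)))

Simple enough at three entries; at a few hundred million, the index itself starts to look like a database problem.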

Virtually There

VMware looks like it is going to be a challenge; I'm starting to migrate standard storage tools into a VMware infrastructure. First of all, there's a standard VMware build for Windows (2003). Where should the swap file go? How should we configure the VMFS it sits on? What about the performance issue of aligning the Windows view of the (EMC) storage with the physical device so we get the best performance? What about Gatekeeper support? And so on.
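On the alignment point, the worry is that the Windows partition starts at an offset which doesn't line up with the array's stripe element, so every I/O straddles two back-end elements. The arithmetic is easy to check; the 64KB element size below is an assumption, so substitute whatever your array actually uses:

    SECTOR = 512            # bytes per sector
    ELEMENT = 64 * 1024     # assumed array stripe element size

    def is_aligned(start_sector):
        """True if a partition starting at this sector sits on an element boundary."""
        return (start_sector * SECTOR) % ELEMENT == 0

    print(is_aligned(63))    # the classic Windows default start: False, misaligned
    print(is_aligned(128))   # starts on a 64KB boundary: True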

It certainly will be a challenge. The first thing I need to solve: if I have a LUN presented as a raw device to my VM guest and that LUN ID is over 128, the ESX server can see it, but when ESX exports it directly to the Windows guest, it can't be seen. I have a theory that it could be the generic device driver that VMware uses on Windows 2003, but I can't prove it (yet), and as yet no-one on the VMware forum has answered the question. Either they don't like me (lots of other questions posted after mine have been answered) or people just don't know...

Monday, 13 November 2006

Continuous Protection

I've been looking at Continuous Data Protection (CDP) products today, specifically those which don't need an agent deployed on the host and which can use Cisco's SANTap technology. EMC have released RecoverPoint (a rebadged product from Kashya) and Unisys have SafeGuard.

SANTap works by creating a virtual initiator which duplicates all of the writes to a standard LUN (target) within a Cisco fabric. This duplicate data stream is directed at the CDP appliance, which stores and forwards it to another appliance at a remote site, where it is applied to a copy of the original data.
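My mental model of the write path, sketched in Python; this is an illustration of the data flow, not Cisco's or EMC's implementation:

    import itertools

    _seq = itertools.count()

    class Journal(list):
        def record(self, lun, offset, data):
            # The sequence number preserves write ordering for later replay.
            self.append((next(_seq), lun, offset, data))

    def handle_write(primary, journal, lun, offset, data):
        """A SANTap-style write split: the write goes to the real target
        as normal, and a duplicate is streamed to the CDP appliance."""
        primary[(lun, offset)] = data      # the real I/O (a dict as a toy LUN)
        journal.record(lun, offset, data)  # the duplicate for the appliance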

Because the appliances track changed blocks and apply them in order, they allow recovery to any point in time. This is potentially a great feature: imagine being able to back out a single I/O operation, or to replay a stream of I/O operations onto a vanilla copy of the data.
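Continuing the sketch above, recovery then becomes a replay of the ordered journal up to whatever point you choose:

    def restore_to(journal, baseline, point):
        """Apply journalled writes, in order, up to a chosen sequence number.

        'baseline' is the vanilla copy of the data; stopping one entry
        earlier is exactly how you'd back out a single I/O operation.
        """
        image = dict(baseline)
        for seq, lun, offset, data in sorted(journal):
            if seq > point:
                break
            image[(lun, offset)] = data
        return image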

Whilst this CDP technology is good, it raises a number of obvious questions about performance, availability and so on, but for me the question is more about where the technology is being pitched. EMC already has replication in the array in the form of SRDF, which comes in many flavours and forms. Will RecoverPoint complement these, or will RecoverPoint in time be integrated into SRDF as just another feature?

Who can say? At this stage I see another layer of complication and another decision about where to place recovery features. Perhaps CDP will simply be one more tool in the armoury of the Storage Manager.