Friday 16 February 2007

iSCSI Part 3 (or is it Part 4?)

I've done the last part of my iSCSI evaluation. Previously I looked at the protocol itself and at the ways to secure data in transit: either IPsec on a shared network, or a dedicated network infrastructure. The last piece of the jigsaw I checked out was how to validate the client. Basic iSCSI authentication uses nothing more than the name of the iSCSI initiator, which is a plain-text field (a Windows host, for example, typically identifies itself with an IQN such as iqn.1991-05.com.microsoft:hostname) and so is trivially easy to spoof.

The standard iSCSI method is CHAP. In the mutual form I tested, both the initiator and the target provide a username and secret to authenticate to each other at login. I tried it out in my Netapp Simulator/Windows environment and of course it works. What I'm not sure about is how effective it is as a security method: the secrets have to be held on both the client and the target, and recorded somewhere else besides, because there's no third-party authentication authority. Perhaps I'm being a little paranoid.
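
For the curious, the calculation CHAP performs is simple enough to sketch. Here's a rough Python illustration of the RFC 1994 challenge/response that iSCSI reuses at login (my own sketch, with made-up names and secrets; not code from either the Netapp or Windows implementation):

    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # CHAP_R = MD5(identifier || shared secret || challenge), per RFC 1994.
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # Target: issue a random challenge and an identifier to the initiator.
    chap_i, chap_c = 1, os.urandom(16)

    # Initiator: prove knowledge of the secret without ever sending it.
    secret = b"example-chap-secret"  # hypothetical value; both ends must store it
    chap_r = chap_response(chap_i, secret, chap_c)

    # Target: recompute with its own copy of the secret and compare. The secret
    # never crosses the wire, but it does live in two places -- which is exactly
    # the management niggle mentioned above.
    assert chap_r == chap_response(chap_i, secret, chap_c)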

So there it is. I now know iSCSI. I have to say I like it. It's simple. It works. It's easy to implement. Security could be better, and could certainly be made easier to manage, but perhaps that's down to the two implementations I've used (i.e. Netapp and Windows).

So what are the iSCSI best practices? I'd say:

  1. Implement CHAP for client/target security.
  2. Implement IPsec to encrypt your iSCSI traffic.
  3. Place your devices on a dedicated network.
  4. Use dedicated network cards where possible.

I hope to do a real implementation of iSCSI soon. I can really see it as a practical alternative to Fibre Channel.

Thursday 15 February 2007

Is ASNP another SNIA?

I received this email this week about the very first ASNP UK Chapter meeting (ASNP is a US-based user association for storage people):

<------ START ------>
Dear ASNP Chapter Members of the UK,

Due to low registration numbers, we have decided to cancel the February 23rd ASNP Chapter meeting which was to be sponsored by QLogic.

My apologies for any inconvenience this may cause.

Meanwhile, we are working on a quick survey that we hope you will answer as we’re hoping to find out what type of meetings you’d like to be a part of going forward.

Please keep your eyes open for that survey and let us know!


<------ END ------>

Until November I was the UK Chair. I found it difficult to work out what ASNP was actually for. I couldn't see it collectively lobbying storage software and hardware vendors over issues or for new features; all I could see was an organisation which occasionally offered vendor-sponsored "training" or information sessions. I even attended the very first ASNP Summit in Long Beach in June 2004; despite registering, no-one could find my registration details when I got there, and everything was extremely US-focused.

If an organisation exists to further the interests of its members, then there should be a clear statement of what those aims are; otherwise, why bother to exist at all?

Will ASNP continue? Probably. I notice, though, that they still don't charge for membership, and unless they start having more to offer, I can't see how they ever could.

Tuesday 13 February 2007

Long term data retention

I spent some time earlier this week talking to COPAN. They produce high-density storage systems, but not the sort of arrays you'd use for traditional data storage; their product is pitched at the long-term persistent storage market.

I'm sure you can read the website if you're interested; however, I hadn't really thought through what this kind of technology could deliver. There are some fundamental issues with the storage of "persistent" data that need to be sorted out. For instance: how do you validate that the data on your disk will still be there when you come to read it 12 months later? (Disk aerobics is the answer, apparently: regular validation of disk content.)
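
As I understand it, disk aerobics amounts to a background scrubbing loop: checksum the data at write time, then periodically re-read and verify it so latent errors surface while there's still redundancy to repair from. A rough Python sketch of the principle (my own illustration, with files standing in for disks; nothing to do with COPAN's actual implementation):

    import hashlib
    from pathlib import Path

    def checksum(path: Path) -> str:
        # SHA-256 of the file contents, recorded at write time.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scrub(catalogue: dict) -> list:
        # Re-read everything and report anything that no longer matches.
        return [p for p, digest in catalogue.items() if checksum(p) != digest]

    # Build the catalogue when data is written (directory name is hypothetical).
    catalogue = {p: checksum(p) for p in Path("archive").glob("*.dat")}

    # ...then run this periodically from a scheduler...
    for damaged in scrub(catalogue):
        print(f"latent error in {damaged}: restore from a redundant copy")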

So, the target market for COPAN is long-term data archive. They want you to keep your data on disk. Personally, I think that if the price is right, then backup data on disk is a sensible proposition. Today's network connectivity and encryption technology mean that data no longer needs to be physically moved. In fact, I'd suggest that removing the need to physically move data is the way forward: disk-based data is inherently more reliable and accessible, and COPAN (and others) have plenty of features that can make disk-based backup work.

Don't move the media. Just move the data. Sounds like a good strapline.

Thursday 8 February 2007

Snow!

Totally off-subject, but it snowed today in the UK. Most people hate it; I love it. We had this snowman built before breakfast...

Wednesday 7 February 2007

Write Acceleration

Reading Sangod's recent post on write acceleration, I couldn't help writing a response, as I've been looking at this whole subject recently.

First of all, I don't disagree with the concept of synchronous replication. Yes, the I/O must be confirmed (key word there) at both the remote and the local site before the host is given acknowledgement that the I/O is complete. Typically, enterprise arrays will cache a user I/O, issue a write to the remote array (where it is also cached), acknowledge the I/O to the host, and destage to disk at some later stage.
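
The ordering is the crux, so here's a crude sketch of that sequence (my own simplification in Python; real arrays obviously do this in firmware with rather more going on):

    # Crude model of a synchronous remote-copy write. The key point is that
    # the host acknowledgement is only returned once BOTH caches hold the
    # data; destaging to physical disk happens later, off the latency path.
    def synchronous_write(data: bytes, local_cache: list, remote_cache: list) -> str:
        local_cache.append(data)    # 1. local array caches the host I/O
        remote_cache.append(data)   # 2. write sent to the remote array's cache
        #    ...remote array confirms receipt of the write...
        return "ACK"                # 3. only now is the host's I/O complete
                                    # 4. both arrays destage to disk later

    local, remote = [], []
    print(synchronous_write(b"block 42", local, remote))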

Sangod mentioned two techniques: cranking up buffer credits, and acknowledgement spoofing. Buffer credits are the standard way in which the Fibre Channel protocol manages flow control. As one FC device sends data to another (an HBA to a switch, for example), each frame consumes a credit and each R_RDY signal returned by the receiver replenishes one, so the sender can keep transmitting for as long as it has credits in hand. Buffer credits matter because FC frames take a finite amount of time to travel down a fibre optic cable: the longer the distance between devices, the more frames must be "on the line" in order to fully utilise the link. The rule of thumb is 1 buffer credit for each km of distance on a 2Gb/s connection. If you don't have enough buffer credits, you don't make best use of the bandwidth you have available. Having lots of data in transit does not compromise integrity, as nothing has been confirmed to the originating device.
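
That rule of thumb falls straight out of the arithmetic. A back-of-the-envelope sketch in Python (my own round numbers, purely illustrative):

    import math

    def credits_needed(distance_km: float, link_gbps: float = 2.0,
                       frame_bytes: int = 2148) -> int:
        # Light in fibre travels at roughly 5 us per km, and a credit is only
        # replenished when R_RDY makes the round trip back to the sender.
        round_trip_us = 2 * distance_km * 5.0
        # Time to serialise one full FC frame (~2148 bytes) onto the wire.
        serialise_us = frame_bytes * 8 / (link_gbps * 1000)
        return math.ceil(round_trip_us / serialise_us)

    print(credits_needed(50))      # ~59 for 50 km at 2 Gb/s: roughly 1 per km
    print(credits_needed(50, 4.0)) # double the link speed, double the credits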

Moving on, there are devices which can perform write acceleration, reducing the SCSI overhead by acknowledgement spoofing. I've taken the liberty of borrowing a couple of graphics from Cisco to explain how a SCSI write transaction works.

When a SCSI initiator (the source device) starts a write command, it issues an FCP_CMND_WRT to the target device. The target confirms this with an FCP_XFER_RDY. The initiator then issues data transfers (FCP_DATA) repeatedly until all the data has been sent, and the target confirms successful receipt of all of it with an FCP_RSP "Status Good". The preamble at the start of the write can be eliminated by the switch connected to the initiator issuing an immediate FCP_XFER_RDY itself, allowing the initiator to start sending data straight away; the data transfer and the FCP_CMND_WRT then travel in parallel, saving the round trip this part of the exchange would otherwise cost. No integrity is risked, as nothing is confirmed by the target until all the data has been received.

I see no issue with this kind of spoofing, as the initiator is not being told an I/O is complete before it actually is. What I do see as a concern is that the target may turn out to be unable to accept the data; if the write fails, the source needs to be able to roll back.
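
To put a rough number on the saving, here's a toy latency model (my own figures and function names, not Cisco's):

    # Toy latency model for write acceleration over a long-distance link.
    # One write normally costs two round trips (CMND -> XFER_RDY, then
    # DATA -> RSP); with the local switch spoofing XFER_RDY, the first round
    # trip is hidden and only the data/status round trip remains.
    def write_latency_ms(rtt_ms: float, spoofed_xfer_rdy: bool) -> float:
        setup = 0.0 if spoofed_xfer_rdy else rtt_ms  # CMND out, XFER_RDY back
        transfer = rtt_ms                            # DATA out, FCP_RSP back
        return setup + transfer

    rtt = 1.0  # ~100 km of fibre, out and back, at ~5 us per km each way
    print(f"without WA: {write_latency_ms(rtt, False):.1f} ms per write")
    print(f"with WA:    {write_latency_ms(rtt, True):.1f} ms per write")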

In terms of getting the best performance, multiple replication groups (e.g. multiple RA groups) would make the best use of any WA technology, and Cisco publish stats which show exactly that. So WA could be both effective and safe.