There have been a few references to Invista over the last couple of weeks, notably from Barry discussing the "stealth announcement".
I commented on Barry's blog that I felt Invista had been a failure, judging by the number of sales. I'm not quite sure why this is so, as I think that virtualisation in the fabric is ultimately the right place for the technology. Virtualisation can be implemented at each point in the I/O path - the host, the fabric and the array (I'll exclude application virtualisation as most storage managers don't manage the application stack). We already see this today: hosts use LVMs to virtualise the LUNs they are presented with; Invista virtualises in the fabric; SVC from IBM sits in the middle ground between the fabric and the array; and HDS and others enable virtualisation at the array level.
But why do I think fabric is best? Well, host-based virtualisation is dependent on the O/S and LVM version. Support issues will exist, as the HBAs and host software must be at supported levels to match the connected arrays. It becomes complex to match multiple O/S, vendor, driver, firmware and fabric levels across many hosts, and even more complex when multiple arrays are presented to the same host. For this reason, and for reasons of manageability, host-based virtualisation is not a scalable option. As an example, migration from an existing array to a new one would require work to be completed on every server to add, lay out and migrate data.
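To make that concrete, here's a toy sketch (the host names and steps are invented, not a real toolset) of why the per-server work doesn't scale with host-based virtualisation:

```python
# A toy illustration: with host-based virtualisation, swapping an array means
# repeating the same checks and LVM rework on every single attached server.

hosts = ["hosta01", "hostb02", "hostc03"]          # every attached server
steps = [
    "check O/S, LVM, HBA driver and firmware support for the new array",
    "present and zone the new array's LUNs to the host",
    "add the new LUNs to the host volume group",
    "mirror or move extents off the old LUNs under the LVM",
    "remove the old array's LUNs from the host",
]

for host in hosts:                                 # the effort scales with host count,
    for step in steps:                             # not with the number of arrays
        print(f"{host}: {step}")
```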
Array-based virtualisation provides a convenient stop-gap in the marketplace today. Using HDS's USP as an example, storage can be virtualised through the USP, appearing just as internal storage within the array would. This provides a number of benefits. Firstly, driver levels for the external storage become irrelevant (only USP support is required, regardless of the connected host); secondly, the USP can be used to improve and smooth performance to the external storage; thirdly, the USP can be used for migration tasks from older hardware; and finally, external storage can be used to hold lower tiers of data, such as backups or PIT copies.
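As a rough illustration (the device names and mapping below are made up, not HDS syntax), array-based virtualisation essentially comes down to a mapping table owned by the virtualising array, behind which the external storage hides:

```python
# A minimal toy model of array-based virtualisation: external LUNs are mapped
# through the virtualising array and presented to hosts as if they were
# internal devices, with a tier tag for lower-value data.

from dataclasses import dataclass

@dataclass
class ExternalLun:
    array: str        # the externally attached (older / cheaper) array
    lun_id: int
    tier: str         # e.g. "tier3-backup", "tier3-pit-copy"

# the virtualising array (the USP in the example above) owns the mapping table
virtual_map = {
    "VDEV-0001": ExternalLun("legacy-array-A", 0x12, "tier3-backup"),
    "VDEV-0002": ExternalLun("legacy-array-A", 0x13, "tier3-pit-copy"),
}

def host_io(vdev: str) -> str:
    """Hosts only ever see the virtual device; driver support is needed for the
    virtualising array alone, not for each external array behind it."""
    ext = virtual_map[vdev]
    return f"I/O to {vdev} forwarded to {ext.array} LUN {ext.lun_id:#04x} ({ext.tier})"

print(host_io("VDEV-0001"))
```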
Array-based virtualisation does have drawbacks: all externalised storage becomes dependent on the virtualising array, which makes replacement potentially complex. To date, HDS have not provided tools to seamlessly migrate away from one USP to another (as far as I am aware). In addition, there's the problem of "all your eggs in one basket"; any issue with the array (e.g. physical intervention such as fire, loss of power, a microcode bug, etc.) could result in loss of access to all of your data. Consider the upgrade scenario of moving to a higher level of code: if all data were virtualised through one array, you would want to be darn sure that both the upgrade process and the new code are going to work seamlessly...
The final option is fabric-based virtualisation, which at the moment means Invista and SVC. SVC is an interesting one, as it isn't an array and it isn't a fabric switch, but it does effectively provide switching capabilities. Although I think SVC is a good product, there are inevitably going to be some drawbacks, most notably issues similar to those of array-based virtualisation (Barry/Tony, feel free to correct me if SVC has a non-disruptive replacement path).
Invista uses a "split path" architecture to implement virtualisation. This means SCSI read/write requests are handled directly by the fabric switch, which performs the necessary changes to the Fibre Channel headers in order to redirect I/O to the underlying physical target device. This is achieved by the switch creating virtual initiators (for the storage to connect to) and virtual targets (for the host to be mapped to). Because the virtual devices are implemented within the fabric, it should be possible to make them accessible from any other fabric-connected switch. This opens up the possibility of placing the virtualised storage anywhere within a storage environment and then using the fabric to replicate data (presumably removing the need for SRDF/TrueCopy).
Other SCSI commands, such as those that inquire on the status of LUNs, are handled by the Invista controller "out of band" over an IP connection from the switch to the controller. This is obviously a slower access path, but it is far less performance-critical than the actual read/write activity.
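Putting the two paths together, here's a very simplified sketch (all names invented, not EMC's actual implementation) of the split-path idea: read/write frames are rewritten in-line on the switch against a virtual-target map, while everything else takes the slower, out-of-band hop to the controller:

```python
# A hedged, much-simplified model of split-path virtualisation: the switch
# redirects READ/WRITE frames in-line, and forwards everything else to the
# Invista controller over IP.

FAST_PATH_OPS = {"READ", "WRITE"}

# virtual target/LUN presented to the host -> physical array port/LUN
virtual_to_physical = {
    ("VT-01", 0): ("ARRAY-PORT-5", 7),
}

def controller_out_of_band(frame: dict) -> str:
    """Stand-in for the IP hop to the controller (INQUIRY, mode sense, etc.)."""
    return f"controller handled {frame['op']} for {frame['target']}:{frame['lun']}"

def switch_handle(frame: dict) -> str:
    if frame["op"] in FAST_PATH_OPS:
        # in-line on the switch: rewrite the frame header so the I/O is
        # redirected to the real target, with no controller round trip
        port, lun = virtual_to_physical[(frame["target"], frame["lun"])]
        return f"{frame['op']} redirected to {port} LUN {lun}"
    return controller_out_of_band(frame)    # slower, but not performance-critical

print(switch_handle({"op": "WRITE", "target": "VT-01", "lun": 0}))
print(switch_handle({"op": "INQUIRY", "target": "VT-01", "lun": 0}))
```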
I found a copy of the release notes for Invista 2.0 on Powerlink. Probably the most significant improvement was the addition of clustered controllers. Other than that, the 1.0->2.0 upgrade was disappointing.
So why isn't Invista selling? Well, I've never seen EMC salespeople so much as mention the product, never mind push it. Perhaps customers just don't get the benefits, or expect the technology to be too complex, causing support issues and making DR an absolute minefield.
If EMC are serious about the product, you'd expect them to be shoving it at us all the time. Maybe Barry could do for Invista what he's been doing in his recent posts for the DMX-4?
Wednesday, 5 September 2007
Invista
Posted by Chris M Evans at 2:20 pm
Tags: barry burke, barry whyte, DMX-4, Invista, storage anarchist, usp, virtualisation
7 comments:
>(Barry/Tony, feel free to correct me if SVC has a non-disruptive replacement path).
SVC does indeed have a non-disruptive upgrade path. As we support any hardware model in a cluster / IO group, you simply stop one node in an IO group. The cache on the remaining node in the IO group goes write-through. Power it off, remove it, replace it with the new hardware, use the front panel to change the WWN back to the old node's WWN (so no zoning changes are needed) and power it up. The node then 'appears' to the cluster as the old node, re-joins the cluster, and the cache returns to normal on the IO group. Repeat for all nodes in the cluster.
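The sequence described above is essentially a per-node loop; a rough sketch (paraphrased steps and invented node names, not the actual SVC CLI) might look like this:

```python
# A rough sketch of the node-by-node replacement sequence: one node at a time,
# with the partner node's cache dropping to write-through so host I/O continues.

io_groups = {
    "iogrp0": ["node1", "node2"],
    "iogrp1": ["node3", "node4"],
}

def replace_node(io_group: str, old_node: str) -> None:
    print(f"{io_group}: stop {old_node}; partner node cache goes write-through")
    print(f"{io_group}: power off and remove {old_node}")
    print(f"{io_group}: install new hardware, set its WWN to {old_node}'s WWN "
          "(so no zoning changes are needed)")
    print(f"{io_group}: power on; node rejoins the cluster as {old_node}, "
          "cache returns to normal")

for io_group, nodes in io_groups.items():
    for node in nodes:              # repeat for every node in the cluster
        replace_node(io_group, node)
```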
I think you hit the nail on the head here. My view is that EMC 'have' to have Invista in their portfolio; however, they don't want to impact the sales and licensing revenue from their core products. It's there, and if anyone asks about it they will tell, but it's definitely a customer-pull rather than sales-push product.
Hi guys ...
I don't want to speak for EMC here, but -- as a very close bystander, I think there's another view.
First, the notion that we're somehow "protecting" our core products is misguided at best. If that were the case, we wouldn't be doing RecoverPoint (replication in the fabric), partnering with Cisco and Brocade on switch-level encryption (also in the fabric), and many more examples. Why would EMC spend many millions of dollars on R+D for switch-based storage technology unless we were serious?
Second, I am of the personal opinion that "storage virtualization" has been way oversold by the industry. It has its use cases, but it does not solve global warming (as had been suggested), nor does it eliminate the need for storage management (as has also been suggested).
It's just another tool in the toolbag. And -- thankfully -- we're being successful at helping the EMC sales force to pull out the right tool for the right job, rather than blindly shoving square pegs into round holes.
I would agree with Chris that -- ultimately -- many forms of storage functionality belong in the network, virtualization (or volume mgmt) being one of many.
I would not agree that the product has not been successful. We have many happy customers, and -- for them -- the product does exactly what we (and they) want it to do.
I would argue that happy customers might be more important than bragging rights on unit counts. We all know how to play that game, don't we?
Thanks for the discussion, guys!
Chuck, thanks for the comments. One thing I found unclear with RecoverPoint (on which I had a presentation not long after EMC bought Kashya) was where the product sat with respect to SRDF and Invista. It wasn't clear whether RP would complement or compete with SRDF; however, on closer analysis it was clear that RP had nowhere near the throughput and bandwidth of SRDF, and other features (such as replication via IP rather than FC) would limit its desirability for certain customers. Perhaps I didn't make it clear, but one of the things I think can confuse is understanding where in the stack a vendor sees each of its products and what market niche each is attacking. I think EMC are reasonably good at this; however, with RP, SRDF and Invista it isn't too clear what the strategic direction is (if there is one). Maybe you or Barry could expand on the thinking behind EMC's intentions to harmonise the three products and the way they're envisaged as fitting into the stack.
Barry, thanks for the correction; I did wonder as I wrote whether SVC had that functionality.
Chris, I think I covered a bit of the Invista/RP thinking over here:
http://storagezilla.typepad.com/storagezilla/2007/09/the-future-is-v.html
Here's the quote:
"EMC aquired Kashya to get it's hands on the CDP and Remote Replication functionality and what they got was a product whose support of intelligent switches made it a perfect fit for offering Fabric based services. With Invista you don't need to use array based replication but you can if you wish, if you've made an investment why throw it out? But if you haven't or want CDP and Continuous Remote Replication running with Invista in your Fabric RecoverPoint allows customers to replicate or protect whatever they want regardless of if it's virtualized storage or physical storage.
I see these (Volume mobility, storage virtualization, CDP, CRR, and so on) as heterogeneous services you can add to your SAN and make available as you see fit, when you see fit. That's my take on it, though the people paid to think deep thoughts about this stuff might see it differently."
---
I see a logical fit with Invista, as RP offers long-distance replication, but I don't see RP clashing with SRDF. SRDF is SRDF. It does what it does in the market at which it's aimed.
While RP is not aimed at the same market, if you want bi-directional replication between Symm & CX, or between EMC and non-EMC arrays (customers do use it to replicate between non-EMC arrays), RecoverPoint would be EMC's solution there.
I'll regret asking this, but should I put together a post on RecoverPoint technology, or is it just the positioning that's the sticking point?
After re-reading your comment, it's clear it's not a technology thing you're looking for. Apologies! ;)
Hi Chris -- so, the question is basically, where does all this stuff fit?
Rather than answer this at a product-by-product level, I'd say the answer is architectural.
EMC is trying to put more storage functionality in the storage network, rather than the end points (arrays and servers).
Visible examples today include Invista (storage virtualization, aka volume mgmt), RecoverPoint (local and remote replication), and the wire-speed encryption technology we announced with Cisco last May, which is not yet available.
On the NAS front, you've seen us move functionality from Celerra (NAS endpoint) to RainFinity (network-based file virtualization).
Understandably, this is a long journey from a product perspective. To the best of our knowledge, no one is taking this quite as seriously as we are.
So, given this long-term direction, any comparison / positioning etc. has to be very dependent on the use case we're talking about, and also measured at a particular point in time.
How I'd compare, for example, RecoverPoint and SRDF in 2007 will be very different from how I'd compare them in 2009 or 2010.
You'd have to agree that putting advanced storage functionality in intelligent networks is a bit ambitious.
As an example, I was looking at your various network topologies, and I saw ways to do this with a classical approach, as well as perhaps with newer approaches.
Each would have its pros and cons. Neither would be perfect.
Let me know if you want to go deeper -- thanks!