Erik Ableson 3 minute read
May 18, 2016

Why the reticence to try scale-out storage?

I’ve been involved in a few projects recently with companies that are in the process of doing a storage refresh, and for some reason I’m seeing fairly strong pushback against considering some of the newer scale-out storage solutions.

I find this rather interesting, as the advantages offered by a scale-out solution, particularly the potentially longer lifespan of the software layer once you hit the end of the hardware cycle, are so much more compelling than continuing with a traditional storage architecture and its attendant forklift upgrades.

In some environments there is a significant sunk-cost issue with an existing mature Fibre Channel environment that has to be taken into account. But this can be mitigated by playing to the scale-out architecture’s advantages: start small and grow over time, assuming you’re not going to get killed on maintenance charges for your existing storage systems. The other piece that comes into play on some systems is physical servers that are only FC-attached and not natively compatible with a scale-out system, in which case you need some kind of gateway into the storage cluster.

Moving to scale-out means moving to Ethernet, and for storage systems this generally means 10 Gigabit Ethernet, so there is a non-trivial cost in switch investments. But this also opens the door to many other potential optimizations: your servers become simply dual-attached 10GbE hosts, with the networks separated via VLAN, which reduces and simplifies the long-term datacenter architecture requirements.
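As a rough sketch of what that separation looks like on a Linux host with two 10GbE ports (the interface names, VLAN IDs and addresses here are invented for illustration, not taken from any particular deployment), the split is just VLAN sub-interfaces over the same physical links:

```shell
# Hypothetical example: one VLAN for general data traffic, one for
# storage traffic (iSCSI/NFS), on a dual-attached 10GbE server.
# Interface names (eth0/eth1), VLAN IDs and subnets are assumptions.

# VLAN 100 for the general data network on the first 10GbE port
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.168.100.10/24 dev eth0.100
ip link set eth0.100 up

# VLAN 200 for the storage network on the second 10GbE port
ip link add link eth1 name eth1.200 type vlan id 200
ip addr add 192.168.200.10/24 dev eth1.200
ip link set eth1.200 up
```

The same tagging has to be configured on the switch ports, but the point is that the physical cabling stays identical everywhere; only the VLAN assignments change per role.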

For those already using iSCSI or NFS as their primary storage protocols, the Ethernet storage network is already in place and well segmented so there shouldn’t be any serious issues on that front.

At the end of the day, in the worst case, if you’re really not happy with the system you’ll replace it in five years, just like every other storage system you’ve ever bought. The next replacement may also be scale-out storage using a different software stack, in which case you can reuse the commodity servers that are supplying the storage, as long as they’re still maintainable. Or you can move back to iSCSI, NFS or SMB.

From this perspective I can only see upside in looking at scale-out solutions.

Try it, you might like it!


The one major pushback point that I find pertinent is the question of which vendors amongst the startups will actually be around in 5-10 years. This is definitely a tough question. I really like a lot of the innovative solutions out there ([Hedvig], [Coho Data], [Kaminario] etc.), but we don’t yet know whether they will be able to survive in the cutthroat storage market over the long term. The usual exit for this kind of technology, being bought by one of the bigger players, is looking less and less likely given that they have pretty much all made a choice in this arena, with the exception of HP, which currently only has its aging LeftHand scale-out solution.

But this still leaves us with the choices from the historic players if you prefer to stick to an existing brand with solutions like [ScaleIO] (EMC) and [SolidFire] (NetApp).

And of course there are the more tightly coupled solutions like [VSAN], and the hyperconverged players like [SimpliVity], [Nutanix] and [Scale Computing].

So there’s something for everyone in this market if you look around a bit.