
Violin Memory have cool technology, but do they have a future? I hope so!

Since visiting Violin Memory at Storage Field Day 8, it has taken me a while to get around to writing this post, and I guess the reason is that I am both frustrated and a little bit sad.

They were the first guys who (for me at least) truly tamed the beast that is flash storage, packed it up into a blisteringly fast product with insanely low latencies, and released it into the big wide world.

An organisation I worked at took that product and threw thousands of desktop users at it, a number of very busy SQL TempDBs, and some other frankly evil workloads. It didn't even blink! With Violin storage on the back end, I have seen RDS desktop customers stress test the platform with up to 100 users in a single VM!

The thing that makes me sad is that to date, Violin have not yet turned a profit.

This is not some upstart company out of the boondocks! This is a mature company, founded in 2005, who provide their product to some of the biggest enterprises in the world. Yet so far they have not managed to make a penny!

Anyway, there are other guys in the industry who are far better at financial analysis than me (for example Justin Warren did a post on this very subject just last week). So I will leave it to them to try to work out why this is the case, because I like to talk tech!


Looking at the latest incarnation, once again Violin have “evolutionized” [sic] something which is technically very impressive.

  • The original Violin OS has been given the boot and has been replaced by something they call Concerto OS, which blends elements of the original software with some updated features. Support for multi-controller configurations beyond the current dual is also in development, though that is obviously not GA yet.
  • The new dual controller Flash Storage Platform supports FC, iSCSI and Infiniband, as well as RDMA and ROCE.
  • It is all packaged in a 3U appliance with up to 64 redundant flash modules.
  • Thanks to their own custom backplane design, it is capable of 10-12GB/sec of throughput!
  • At 100% sustained writes they have measured 400,000 IOPS at RAID 5, which is more than many of their competitors can achieve with 100% reads! It should be noted that this was on the performance model, which does not support dedupe.

All in all the new solution just screams FAST! In fact, I’m surprised they didn’t paint a red stripe down the side of the chassis!


So why on earth are Violin not ruling the AFA world right now? As a frickin cool technology with hyper speed storage, they deserve to be up there at the very least!

If I had to hazard a guess, it’s because for most companies, good is good enough.

With the smorgasbord of All Flash Arrays available today, if you don’t need latency measured in microseconds and massive IOPS/bandwidth, then you have a huge array of choices (pardon the pun). At that point features, price and support become more important than straight line drag racer performance.

If Violin want to compete with the general market whilst servicing their high-speed clients, then they need to concentrate on continuing to develop a wider range of data services, and on providing entry-level options for consuming their products. The last thing you want is to lose business just because you were missing a check box in an RFP…

If Violin can stop burning cash and break even, then perhaps they have a future. I, for one, hope so!

If you want to catch Violin’s presentation from SFD8, check them out here:
http://techfieldday.com/appearance/violin-memory-presents-at-storage-field-day-8/

They will also be presenting again at SFD9 this week, and I’m looking forward to finding out what they plan to do next!

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 8 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.

Software Defined Storage Virtualisation – How useful is that then?

Ignoring the buzzword bingo post title, storage virtualisation is not a new thing (and for my American cousins, yes, it should be spelt with an s! 🙂 ).

NetApp, for example, have been doing a V-Series controller for many years which could virtualise pretty much any storage you stick in the back of it. It would then present that storage as NFS and layer on all of the standard ONTAP features.

The big advantage then was that you could use the features which might otherwise be missing from your primary or secondary storage tiers, as well as being able to mix and match different tiers of storage from the same platform.

In a previous role, we had an annual process to take a full backup of a 65TB Oracle database and restore it at another site, over a rather slow link, using an ageing VTL that could just about cope with incrementals and not much more on a day-to-day basis. End to end, this process took a month!
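To put that month in perspective, a quick back-of-envelope sketch (the link speed here is my own inference from the figures above, not a number from the original setup) shows the effective throughput implied by moving 65 TB in roughly 30 days:

```python
# Back-of-envelope: effective throughput implied by moving a given
# amount of data in a given window (decimal TB, as in "65TB" above).
def effective_mbps(terabytes: float, days: float) -> float:
    bits = terabytes * 1e12 * 8      # payload in bits
    seconds = days * 86400           # transfer window in seconds
    return bits / seconds / 1e6      # megabits per second

print(round(effective_mbps(65, 30)))  # prints 201, i.e. ~200 Mbit/s end to end
```

In other words, the whole pipeline (backup, transfer and restore combined) was delivering roughly 200 Mbit/s of effective throughput.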

Then one year we came up with a plan to use virtualised NFS storage to take compressed RMAN backups, replicate the data using SnapMirror, and restore on the other side. It took us 3 days: an order of magnitude improvement!

That was 4 years ago, when the quantity of data globally was about 4x less than it is now; the problem of data inertia is only going to get worse as the world's storage consumption doubles roughly every two years!
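That growth rate is easy to sanity-check: a doubling every two years means data from four years ago should be about 2² = 4x smaller, which matches the figure above. A trivial sketch:

```python
# Growth under a fixed doubling period: data grows by 2^(years / period).
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    return 2 ** (years / doubling_period_years)

print(growth_factor(4))  # prints 4.0 -- two doublings in four years
```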

What businesses need is the flexibility to use a heterogeneous pool of storage, across different tiers, vendors and locations, to move data around as required to meet the current IT strategy, without having to change paths to data or take downtime (especially for non-virtualised workloads which don't have the benefit of Storage vMotion etc.). These tiers need to provide the consistent performance defined by individual application requirements.

It’s for this reason that I was really interested in the presentation from Primary Data at Storage Field Day 8. They were founded just two years ago, came out of stealth at VMworld 2015, and plan to go GA with their first product in less than a month’s time. They also have some big technical guns in the form of their Chief Scientist, the inimitable Steve Wozniak!

One of the limitations of the system I used in the past was that it was ultimately a physical appliance, with all the usual drawbacks thereof. Primary Data are providing the power to abstract data services based on software only, presented in the most appropriate format for the workload at hand (e.g. for vSphere, Windows, Linux etc), so issues with data gravity and inertia are effectively mitigated. I immediately see three big benefits:

  • Not only can we decouple the physical location of the data from its logical representation and therefore move that data at will, we can also very quickly take advantage of emerging storage technologies such as VVOLs.
    Some companies who shall remain nameless (and happen to have just been bought by a four letter competitor) won’t have support for VVOLs for up to another 12 months on some of their products, but with the “shim” layer of storage virtualisation from Primary Data, we could do it today on virtually any storage platform whether it is VVOL compliant or not. Now that is cool!
  • By virtualising the data plane and effectively using the underlying storage as object storage / chains of blocks, they enable additional data services which may either not be included with the current storage, or may be an expensive add-on license. A perfect example of this is sync and async replication between heterogeneous devices.
    Perhaps then you could spend the bulk of your budget on fast and expensive storage in your primary DC from vendor A, then replicate to your DR site asynchronously onto cheaper storage from vendor B, or even a hyper-converged storage environment using all local server media. The possibilities are broad to say the least!
  • The inclusion of policy based Quality of Service from day one. In Primary Data parlance, they call them SLOs – Service Level Objectives for applications with specific IOPS, latency etc.
    QoS does not even exist as a concept on many recent storage devices, much to the chagrin of many service providers for example, so being able to retrofit it would protect the ROI on existing spend whilst keeping the platform services up to date.
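A policy of this kind is easy to picture as data. Here is a minimal, purely hypothetical sketch of matching an application SLO against what each storage tier can deliver; the class names, fields and numbers are all my own invention for illustration, not Primary Data's actual API or terminology:

```python
from dataclasses import dataclass

@dataclass
class SLO:
    """Hypothetical Service Level Objective for an application."""
    min_iops: int          # IOPS floor the application needs
    max_latency_ms: float  # latency ceiling the application tolerates

@dataclass
class Tier:
    """Hypothetical storage tier as advertised by a back end."""
    name: str
    iops: int
    latency_ms: float

def placements(slo: SLO, tiers: list[Tier]) -> list[str]:
    """Return the names of the tiers that can satisfy the SLO."""
    return [t.name for t in tiers
            if t.iops >= slo.min_iops and t.latency_ms <= slo.max_latency_ms]

tiers = [Tier("all-flash", 400_000, 0.5),
         Tier("hybrid", 50_000, 5.0),
         Tier("archive", 2_000, 20.0)]

# A latency-sensitive app only fits on the fastest tier.
print(placements(SLO(min_iops=100_000, max_latency_ms=1.0), tiers))  # prints ['all-flash']
```

The point of a control plane with SLOs is that this matching (and any subsequent data movement) happens continuously and automatically, rather than as a one-off placement decision by an administrator.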

There are however still a few elements which to me are not yet perfect. Access to SMB requires a filter driver in Windows in front of the SMB client, so the client thinks it’s talking to an SMB server but it’s actually going via the control plane to route the data to the physical block chains. A bit of a pain to retrofit to any large legacy environment.

vSphere appears to be a first-class citizen in the Primary Data solution, with VASA and NFS-VAAI supported out of the "virtual" box. However, it would be nice to have Primary Data act as a VASA client too, so it could read all of the capabilities of the underlying storage and surface them straight through to the vSphere hosts.

You will still have to do some basic administration on your storage back end to present it through to Primary Data before you can start carving it up in their "Single Pane of Glass". If they were to create array plugins allowing you to remotely manage many common arrays, it would really make that SPoG shine! (Yes, I have a feverish, unwavering objection to saying that acronym!)

I will certainly be keeping an eye on Primary Data as they come to market. Their initial offering would have solved a number of issues for me in previous roles if it had been available a few years earlier, and I can definitely see opportunities where it would work well in my current infrastructure. I guess it now becomes up to the market to decide whether they see the benefits too!

Further Reading
Some of the other SFD8 delegates have their own takes on the presentation we saw. Check them out here:

Ray Lucchesi – Primary data’s path to better data storage presented at SFD8

Dan Frith – Primary Data – Because we all want our storage to do well

