Tag Archive for ARM

Scale-Out. Distributed. Whatever the Name, it’s the Future of Computing


We are currently living through what is probably the fastest period of innovation the technology space has ever seen. New companies spring up every week with new ideas: some good, some bad, some just plain awesome and unexpected!

One of the most common trends behind this, however, was described in a book I read recently, “The Second Machine Age” by Erik Brynjolfsson and Andrew McAfee: the majority of new ideas are, more often than not, unique recombinations of old ones.

Take the iPhone, for example. It was not the first smartphone. It was not the first mobile phone, the first touchscreen, or the first device to run installable apps. However, Apple recombined an existing set of technologies into a very compelling product.

We also reached a point a while back where CPU clock speeds stopped increasing, and CPUs themselves began scaling horizontally with ever more cores. Workloads are therefore typically being designed to scale horizontally rather than vertically, taking advantage of the increased compute resources available whilst avoiding being bound by stagnant clock speeds.
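To illustrate the horizontal-scaling idea in miniature, here is a toy Python sketch (entirely my own, purely for illustration) that splits a job into shards and fans them out across worker processes, rather than relying on one fast core:

```python
from multiprocessing import Pool

def shard_sum(shard):
    # each worker processes one shard independently -- the essence of scale-out
    return sum(shard)

if __name__ == "__main__":
    data = range(1_000_000)
    # split the work four ways; each shard takes every 4th element
    shards = [range(i, 1_000_000, 4) for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(shard_sum, shards)  # shards run in parallel
    # same answer as the single-threaded version, but spread across cores
    assert sum(partials) == sum(data)
```

Adding more shards and workers (or machines) scales the throughput out; making one worker faster would be scaling up.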

Finally, another trend we have seen in the industry of late is inexpensive, low-power CPUs from ARM being used in all sorts of weird and wonderful places, often providing solutions to problems we didn’t even know we had. Until now, their place has generally been confined to outside the data centre. I am, however, aware of a number of companies now working on bringing them to the enterprise in a big way!

So, in this context of recombination, imagine then if you could provide a scale-out storage architecture where every single spindle had its own compute directly attached. Then combine many of these “nano-servers” together in a scale-out JBOD form factor on subscription pricing, all managed from a Meraki-style cloud portal… well that’s exactly what Igneous Systems have designed!

Igneous Systems Nano-Servers

One of the coolest things about scaling out like this is that instead of a small number of large fault domains based around controllers, you end up with many tiny fault domains. The loss of any one controller or drive is basically negligible within the system, and replacements can be handled at the administrators’ convenience rather than in a panic to swap components ASAP. Igneous claim that you can also scale fairly linearly, avoiding the traditional bottlenecks of a dual-controller (or similar) system. It will be interesting to see some performance benchmarks as they become available!

It’s still early days, so they are shipping code at a fairly rapid cadence of roughly every two weeks, and to be honest I think there is a bit of work to be done around the clarity of their SLAs. In general, though, it looks like a very interesting platform, particularly when pricing is claimed to be as low as half the price of Amazon S3.

Now, as you might expect from a massively distributed solution, the entry point is not small, typically procured in 212TiB chunks, so don’t expect to use it for your SMB home drives! If, however, you have petabyte-scale data volumes and are looking for an on-prem(ises!) S3-compatible datastore, then it’s certainly worth looking at Igneous.

The future in the scale-out space is certainly bright, now if only I could get people to refactor their single-threaded applications!… 🙂

Further Info

You can catch the full Igneous session at the link below – it certainly was unexpected and interesting!

Igneous Systems Presents at Tech Field Day 12

Further Reading

Some of the other TFD delegates had their own takes on the presentation we saw. Check them out here:

Disclaimer: My flights, accommodation, meals, etc at Tech Field Day 12 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services.

Tech Startup Spotlight – Hedvig


After posting this comment last week, I thought it might be worth following up with a quick post. I’ll be honest and say that until Friday I hadn’t actually heard of Hedvig, but I was invited along by the folks at Tech Field Day to attend a WebEx with this up-and-coming distributed storage company, who have recently raised $18 million in their Series B funding round, having only come out of stealth in March 2015.

Hedvig are a “Software Defined Storage” company, but in their own words they are not YASS (Yet Another Storage Solution). Their solution has been in development for a number of years by founder and CEO Avinash Lakshman, the guy who invented Cassandra at Facebook and worked on Amazon Dynamo; a chap who knows about designing distributed systems! It’s based around a software-only distributed storage architecture which supports both hyper-converged and traditional infrastructure models.

It’s still pretty early days, but the solution has apparently been tested at up to 1,000 nodes in a single cluster with about 20 petabytes, so it certainly appears to be reasonably scalable! 🙂 It’s also elastic: it is designed to shrink by evacuating nodes as well as to grow by adding more. At that kind of scale, power can become a major part of your cost to serve, so it’s interesting to note that both x86 and ARM hardware are supported in the initial release, though none of their customers are actually using the latter as yet.

In terms of features and functionality, so far it appears to have all the usual gubbins such as thin provisioning, compression, global deduplication, multi-site replication with up to 6 copies, etc; all included within the standard price. There is no specific HCL from a hardware support perspective, which in some ways could be good as it’s flexible, but in others it risks being a thorn in their side for future support. They will provide recommendations during the sales cycle though (e.g. 20 cores / 64GB RAM, 2 SSDs for journalling and metadata per node), but ultimately it’s the customer’s choice on what they run. Multiple hypervisors are supported, though I saw no mention of VAAI support just yet.

The software supports auto-tiering via two methods: hot blocks are moved on demand, and a 24/7 background housekeeping process reshuffles storage at non-busy times. All of this is fully automated with no need for admin input (something which many admins will love, and others will probably freak out about!). This is driven by their philosophy of requiring as little human intervention as possible; a noteworthy goal in light of the modern IT trend of individuals often being responsible for concurrently managing significantly more infrastructure than our technical forefathers! (See Cats vs Chickens).
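To make the two-path idea concrete, here is a toy Python sketch of my own (not Hedvig’s implementation, and the threshold is an invented parameter): a hot path that promotes blocks to SSD as soon as they are read often enough, plus a background housekeeping pass that demotes blocks which have cooled down:

```python
from collections import Counter

class TieringEngine:
    """Toy two-path auto-tiering sketch: on-demand promotion of hot blocks,
    plus a periodic background reshuffle. Illustrative only."""

    def __init__(self, promote_threshold=3):
        self.access_counts = Counter()   # reads seen since last housekeeping
        self.tier = {}                   # block_id -> "ssd" or "hdd"
        self.promote_threshold = promote_threshold

    def record_read(self, block_id):
        # hot path: promote immediately once a block is read often enough
        self.access_counts[block_id] += 1
        if self.access_counts[block_id] >= self.promote_threshold:
            self.tier[block_id] = "ssd"
        else:
            self.tier.setdefault(block_id, "hdd")

    def housekeeping(self):
        # background pass (run at non-busy times): demote cooled blocks,
        # then reset the counters for the next interval
        for block_id, count in self.access_counts.items():
            if count < self.promote_threshold:
                self.tier[block_id] = "hdd"
        self.access_counts.clear()
```

A real system would of course track access patterns per time window and move actual data between media; this just shows why no admin input is needed, since both paths are driven purely by observed I/O.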

Where things start to get interesting, though, is the file system itself. The software can present block, file and object storage, but the underlying file system is actually based on key-value pairs. (Looks like Jeff Layton wasn’t too far off with this article from 2014.) They didn’t go into a great deal of detail on the subject, but their architecture overview says:

“The Hedvig Storage Service operates as an optimized key value store and is responsible for writing data directly to the storage media. It captures all random writes into the system, sequentially ordering them into a log structured format that flushes sequential writes to disk.”
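As a rough illustration of that quote (entirely my own sketch of the general log-structured technique, not Hedvig’s code), incoming random writes can be turned into sequential appends to a log, with an in-memory index mapping each key to the offset of its latest value:

```python
class LogStructuredStore:
    """Minimal log-structured key-value store sketch: every write is appended
    sequentially to a log, and an index maps keys to their latest log offset."""

    def __init__(self):
        self.log = []      # append-only list standing in for sequential disk I/O
        self.index = {}    # key -> offset of the most recent value in the log

    def put(self, key, value):
        # a random write becomes a sequential append; the index is updated
        # so reads always find the newest version
        self.index[key] = len(self.log)
        self.log.append((key, value))

    def get(self, key):
        # follow the index to the latest copy of the key in the log
        offset = self.index[key]
        return self.log[offset][1]
```

Old versions remain in the log until compaction reclaims them, which is the usual trade-off of this design: writes are fast and sequential, at the cost of background garbage collection.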

Supported Access Protocols
Block – iSCSI and Cinder
File – NFS (SMB coming in future release)
Object – S3 or Swift APIs

Working for a service provider, my first thought is generally some version of “Can I multi-tenant it securely, whilst ensuring consistent performance for all tenants?”. Neither multi-tenancy of the file access protocols (e.g. attaching the array to multiple domains for different security domains per volume) nor storage performance QoS is possible as yet, though I understand that Hedvig have both on their roadmap.

So, a few thoughts to close… They definitely seem to be a really interesting storage company, and I’m fascinated to find out more about how their key-value filesystem works in detail. I’d suggest they’re not quite there yet from a service provider perspective, but for private clouds in the enterprise market, mixed hypervisor environments, and big data analytics, they definitely have something interesting to bring to the table. I’ll certainly be keeping my eye on them in the future.

For those wanting to find out a bit more, they have an architectural white paper and datasheet on their website.
