Tag Archive for storage

Now that’s what I call… Tech Predictions 2017


At this time of year, it is customary to look back at the past 12 months and make some random or not-so-random guesses as to what will happen over the coming 12. As such, what could be more fitting for my final post of 2016?!

Here’s a few of my personal best, worst, and easy guess candidates for 2017…

Tekhead Predictable Tech Predictions 2017

Easy Guesses

Come on Alex, even Penfold could have predicted these!

  • AWS will continue to dominate the cloud market, though the rate at which they deploy new features will start to slow (over 1000 a year is pretty unsustainable!). Their revenues will continue to grow at gangbuster rates; however, their market share will be slightly eroded as people experiment more with their competitors too.
  • Microsoft Azure will grow massively (not quite 100%, but not far off it). Their main growth will probably be in hosting enterprises and typical line-of-business applications as people move their legacy junk into the cloud. The recent announcement of the Single Instance VM SLA of 99.9% will definitely accelerate this, as customers will feel less inclined to refactor their applications for cloud.
  • Distributed everything!
  • Docker will start to become more mainstream in production, and less Dev/Test only.
  • Google will kill off at least one popular service with multiple millions of users.
  • The homelab market will shrink as people do more and more of their studying in the cloud.
  • Podcasting will become the new blogging (if it hasn’t already!)
  • DellEMC will continue to hack off bits of its anatomy to pay back that cheeky little $67Bn debt.
  • I continue to use memes as a crutch to make my otherwise lifeless articles marginally more interesting!
Best Guesses

It's on the cards… maybe?

  • Google will continue to be ignored by most enterprises for Cloud IaaS. They will gain some reasonable growth in the web application space after another mass marketing activity to developers, ISVs and hosters.
  • Oracle grows Cloud revenues 50% or more but market share remains small. Their growth is mainly driven by IaaS revenue as customers begin to move their workloads to be closer to their data in the Oracle PaaS and SaaS services.
  • There will be no major storage company IPO (i.e. one raising over $200m) in 2017.
  • Many storage startups will run out of funding and die on the vine (depressing, I know!). Their IP will be snapped up by the old guard storage companies in the ensuing fire sales…
  • 3D XPoint will begin to creep into storage arrays by the end of the year, fuelling another storage VC funding bubble for at least another 12 months for any company who claims to have an innovative way to use it.
  • A major cloud provider suffers a global outage.
Worst Guesses

These probably won’t happen, but if any of them do, I’ll claim smugly that I knew they were always going to!

  • Pure Storage will make an acquisition of a storage startup to create their third product line, perhaps a secondary storage company (i.e. not just all flash) along the lines of Cohesity.
  • Cisco will buy a storage company. They will be more successful at integrating it than they were with Whiptail! (Which wouldn’t be difficult… 😮 )
  • Spanning a single application over multiple clouds becomes a real possibility, as one or more startups come out of stealth to provide innovative ways to span clouds. Nobody buys into it, except maybe for DR.
  • Tekhead.it becomes the most read blog in the world in 2017
  • Cats take over the planet and dogs are forced to form a rebel alliance, which is ultimately victorious when a chihuahua takes out the entire cat leadership in one go with a stolen Reaper drone.
  • Jonah Hill wins Strictly Come Dancing, narrowly defeating Frankie Boyle and Charlie Brooker in the final.
And finally…

Here’s wishing you all an awesome, fun and prosperous 2017!

Scale-Out. Distributed. Whatever the Name, it’s the Future of Computing


We are currently living in what is probably the fastest period of innovation the technology space has ever seen. New companies spring up every week with new ideas; some good, some bad, some just plain awesome and unexpected!

One of the most common trends I have seen in this however was described in a book I read recently, “The Second Machine Age” by Erik Brynjolfsson & Andrew McAfee. This trend is that the majority of new ideas are (more often than not) unique recombinations of old ones.

Take for example the iPhone. It was not the first smartphone. It was not the first mobile phone, the first touchscreen, or the first device to run installable apps. However, Apple recombined an existing set of technologies into a very compelling product.

We also reached a point a while back where CPU clock speeds stopped increasing, with CPUs themselves instead scaling horizontally via more cores. Workloads are therefore typically being designed to scale horizontally instead of vertically, taking advantage of the increased compute resources available whilst avoiding being tied to stagnant clock speeds.
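
To make that shift concrete, here's a minimal Python sketch (the workload and chunk sizes are invented for the example): rather than waiting for faster cores, a CPU-bound job is cut into chunks and spread across however many cores are available:

```python
# A minimal horizontal-scaling sketch: split CPU-bound work into chunks
# and farm them out across all available cores.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Deliberately naive CPU-bound work over a range (illustrative only)."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor() as pool:  # defaults to one worker per core
        print(sum(pool.map(count_primes, chunks)))
```

The same pattern scales out across machines too; swap the process pool for a distributed task queue and the code barely changes.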

Finally, another trend we have seen in the industry of late is inexpensive, low-power CPUs from ARM being used in all sorts of weird and wonderful places, often providing solutions to problems we didn't even know we had. Up until now, they have generally been confined to places outside the data centre. I am, however, aware of a number of companies now working on bringing them to the enterprise in a big way!

So, in this context of recombination, imagine then if you could provide a scale-out storage architecture where every single spindle had its own compute directly attached. Then combine many of these “nano-servers” together in a scale-out JBOD form factor on subscription pricing, all managed from a Meraki-style cloud portal… well that’s exactly what Igneous Systems have designed!

Igneous Systems Nano-Servers

One of the coolest things about scaling out like this is that instead of a small number of large fault domains based around controllers, you actually end up with many tiny fault domains. The loss of any one controller or drive is basically negligible within the system, and replacements can be dealt with at the administrators' convenience rather than in an asap panic. Igneous claim that you can also scale fairly linearly, avoiding the traditional bottlenecks of a dual-controller (or similar) system. It will be interesting to see some performance benchmarks as they become available!

It's still early days, so they are doing code deployments at a pretty high rate, around every 2 weeks, and to be honest I think there is a bit of work to be done around the clarity of their SLAs, but in general it looks like a very interesting platform, particularly when pricing is claimed to be as low as half the price of Amazon S3.

Now, as you might expect from a massively distributed solution, the entry point is not small, typically procured in 212TiB chunks, so don't expect to use it for your SMB home drives! If however you have petabyte-scale data volumes and are looking for an on-prem(ises!) S3-compatible datastore, then it's certainly worth looking at Igneous.
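
A big part of the appeal of an S3-compatible API is that existing tooling should just work when pointed at a different endpoint. Here's a hedged sketch using boto3; the endpoint URL, credentials and bucket/key names are all made up for illustration:

```python
# Hypothetical sketch: using standard S3 tooling (boto3) against an
# on-premises S3-compatible store. Endpoint and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # hypothetical on-prem endpoint
    aws_access_key_id="EXAMPLE_KEY",             # placeholder credentials
    aws_secret_access_key="EXAMPLE_SECRET",
)

s3.create_bucket(Bucket="archive")
s3.put_object(Bucket="archive", Key="backups/db01.dump", Body=b"...")
print(s3.list_objects_v2(Bucket="archive")["KeyCount"])
```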

The future in the scale-out space is certainly bright, now if only I could get people to refactor their single-threaded applications!… 🙂

Further Info

You can catch the full Igneous session at the link below – it was certainly unexpected and interesting!

Igneous Systems Presents at Tech Field Day 12

Further Reading

Some of the other TFD delegates had their own takes on the presentation we saw. Check them out here:

Disclaimer: My flights, accommodation, meals, etc at Tech Field Day 12 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services.

VulcanCast Follow Up – A few thoughts on 60TB SSDs

So last week I was kindly invited to share a ride in Marc Farley‘s car (not as dodgy as it sounds, I promise!).

The premise was to discuss the recent announcements around Seagate’s 60TB SSD, Samsung’s 30TB SSD, their potential use cases, and how on earth we can protect the quantities of data which will end up on these monster drives?!

Performance

As we dug into a little in the VulcanCast, many use cases will present themselves for drives of this type, but the biggest challenge is that the IOPS density of the drives is not actually very high. On a 60TB drive with 150,000 read IOPS (and my guess, though not confirmed, is ~100,000 or fewer write IOPS), the average IOPS per GB is actually only a little higher than that of 15K SAS drives. When you start adding deduplication and compression into the mix, if you are able to achieve around 90-150TB of effective capacity per drive, you could easily be looking at IOPS/GB performance approaching smaller 10K SAS devices!
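
If you want to sanity check those numbers, the back-of-envelope maths looks something like this (using the figures quoted above, including my unconfirmed write IOPS guess):

```python
# Back-of-envelope IOPS density for the 60TB SSD discussed above.
# 150k read IOPS is the quoted figure; ~100k write IOPS is my guess.
capacity_gb = 60_000
read_iops = 150_000

print(read_iops / capacity_gb)  # 2.5 read IOPS per raw GB

# Dedupe/compression stretches effective capacity, diluting IOPS density:
for effective_tb in (90, 150):
    print(effective_tb, "TB ->", round(read_iops / (effective_tb * 1_000), 2), "IOPS/GB")
```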

The biggest benefit of course is that you achieve this performance in a minuscule footprint by comparison to any current spindle type. Power draw is orders of magnitude lower than 10/15K drives, at least 4x lower (by my estimates) than NL-SAS / SATA at peak, and way more at idle. As such, a chunk of the additional cost of using flash for secondary-tier workloads could be soaked up by your space and power savings, especially in high-density environments.

In addition, the consistency of the latency will open up some interesting additional options…

SAS bus speeds could also end up being a challenge. Modern storage arrays often utilise 12Gb SAS to interconnect the shelves and disks, which gives you multiple SAS channels over which to transfer data. With over half a PB of usable storage in just a dozen drives, which could be 1PB with compression and dedupe, that's a lot of storage to stick on a single channel! In the long term, faster connectivity methods such as NVMe will help, but in the short term we may even see some interesting scenarios with one controller (and channel) for every few drives, just to ensure we don't saturate bandwidth too easily.
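
For a feel of the numbers (ignoring protocol overheads, so real life would be worse):

```python
# Time to move one 60TB drive's worth of data over a single 12Gb/s SAS channel.
bits = 60e12 * 8               # 60TB expressed in bits
link_bps = 12e9                # 12Gb/s SAS
print(bits / link_bps / 3600)  # ~11.1 hours, and that's just one drive
```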

Use Cases

For me, the biggest use cases for this type of drive are going to be secondary storage workloads which require low(ish) latency, a reasonable number of predominantly Read IOPS, and consistent performance even when a little bit bursty. For example:

  • Unstructured data stores, such as file / NAS services where you may access data infrequently, possibly tiered with some faster flash for cache and big write bursts.
  • Media storage for photo and video sites (e.g. Facebook, but there are plenty of smaller ones such as Flickr, Photobox, Funky Pigeon, Snapfish, etc.) Indeed, these are the same types of organisations we discussed at the Storage Field Day roundtable session on high-performance object storage. The one big disadvantage here would be the inability to dedupe / compress very much, as you typically can't expect high ratios for media content, which then has the effect of pushing up the cost per usable GB.
  • Edge cache nodes for large media streaming services such as Netflix, where maximising capacity and performance in a small footprint to go in other providers' data centres is pretty important, whilst being able to provide consistent performance for many random read requests.

For very large storage use cases, I could easily see these drives replacing 10K drives and, if the price can be brought down sufficiently, starting to edge into competing with NL-SAS / SATA for highly dedupable (is that a word?) data types in a few years.

Data Protection

Here's where things start to get a little tricky… we are now talking about protecting data in such massive quantities that the failure of just two drives within a short period has the potential to cause the loss of many hundreds of terabytes of data. At the same time, adding additional drives for protection (at tens of thousands of dollars each) comes with a pretty hefty price tag!

Unless you are buying a significant number of drives, the cost of your "N+1", RAID, erasure coding, etc. is going to be so exorbitant that you may as well buy a larger number of small drives so you don't waste all of that extra capacity. As such, I can't see many people using these drives in quantities of less than 12-24 per device (or perhaps per RAIN set in a hyper-converged platform), which means even with a conservatively guesstimated cost of $30k per drive, you're looking at the best part of $350-$700k for your disks alone!

Let's imagine, then, the scenario where you have a single failed drive, and 60TB of your data is now hanging in the balance. Would you want to replace that drive in a RAID set and, based on the write rates suggested so far, wait 18-24 hours for it to resync? I would be pretty nervous doing that myself…

In addition, we need to consider the rate of change of the data. Let's say our datastore consists of 12x 60TB drives; we probably have about 550TB or more of usable capacity. Even with a rate of change of just 5%, we need to be capable of backing up 27TB from that single datastore per night just to keep up with the incrementals! Using a traditional backup solution against something like this, achieving that in a typical 10-hour backup window would require a consistent 6Gbps of throughput, never mind any full backups!
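
Here's that arithmetic, in case you want to check my working:

```python
# Incremental backup maths for a 12x 60TB datastore.
usable_tb = 550
daily_change = 0.05
window_hours = 10

tb_per_night = usable_tb * daily_change                         # 27.5 TB
gbps = tb_per_night * 1e12 * 8 / (window_hours * 3600) / 1e9    # sustained rate
print(round(tb_per_night, 1), "TB ->", round(gbps, 1), "Gbps")  # ~6.1 Gbps
```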

Ok, let's say we can achieve these kinds of backup rates comfortably. Fine. Now, what happens if we have a failure of a shelf, parity group or pool of disks? We've probably just lost 250+TB of data (excluding compression or dedupe) which we now need to restore from backup. Unless you are comfortable with an RTO measured in days to weeks, you might find that the restore time for this, even over a 10Gbps network, is not going to meet your business requirements!!!

This leaves us with a conundrum: how do we increase the durability of the data against disk failures, and how do we minimise the rebuild time in the event of media failure, whilst still keeping costs reasonably low?

Today, the best option seems to me to be the use of erasure coding. In the event of the loss of a drive, the data is automatically rebuilt and redistributed across many or all of the remaining drives within the storage device. Even with, say, 12-24 drives in a "small" system, this would mean data being rebuilt back up to full protection in 30-60 minutes, instead of 18-24 hours! That said, this assumes the connectivity on the array bus / backplane is capable of handling the kind of bandwidth generated by the rebuilds, and that this doesn't have a massive adverse impact on the array processors!
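
To make the rebuild idea concrete, here's a toy sketch of the simplest possible parity scheme (single XOR parity, RAID-5 style). Real erasure codes such as Reed-Solomon tolerate multiple simultaneous failures and spread rebuild I/O across many drives, which is where the much faster rebuild times come from:

```python
# Toy parity rebuild: XOR parity across a stripe (RAID-5 style), purely to
# illustrate how lost data is reconstructed from the surviving members.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_drives = [b"AAAA", b"BBBB", b"CCCC"]  # contents of three data drives
parity = xor_blocks(data_drives)           # written to a fourth drive

# Drive 1 dies; rebuild its contents from the survivors plus parity.
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
assert rebuilt == data_drives[1]
print(rebuilt)  # b'BBBB'
```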

The use of "instant restore" technologies, where you can mount data directly from the backup media to get up and running asap, then move the data transparently in the background, also seems to me to be a reasonable mitigation. In order to maintain a decent level of performance, this will likely also drive the use of flash in the data protection storage tiers as well as production.

The Tekhead Take

Whatever happens, the massive quantities of data we are beginning to see, and the drives we plan to store them on, are going to drive us towards new (as yet uninvented) forms of data protection. We simply can't keep up with the rates of growth without them!

VulcanCast

Catch the video here:

The video and full transcript are also available here:
Huge SSDs will force changes to data protection strategies – with @alexgalbraith

StorageOS – An array based on containers? It's like storage for millennials!

Last week I managed to catch up with the guys from StorageOS, a new container-based storage company, headquartered in London. I found out about them at a London Storage Beers event a few weeks ago, and my first question was, what the hell is container-based storage, and how does it work?!

They started from the premise (yes that’s actually the correct use of the word premise!), that if you want to build a storage system FOR containers, what better way to do it than to build it FROM containers. StorageOS therefore offer what they describe as “full enterprise storage array functionality, delivered by software, on a pay-as-you-go basis”. They also plan to offer a free-forever Developer tier, which includes everything except HA functionality which you would obviously need for production usage!

StorageOS Announcement

So the good news is, today (Monday 20th June 2016) StorageOS are announcing the release of their Beta at DockerCon, so you can now download and test out their new storage platform.

The StorageOS Stack

You can deploy this StorageOS software anywhere from bare metal to containers:

It's software, so it runs anywhere!

Appliances for some of the larger clouds are in the works, but will not be available on day zero.

They can then consume any back-end storage, from SSDs, HDDs and virtual drives, to EBS volumes, object stores, etc. You then pool all of the capacity from all devices into a single capacity pool, which is deduped, encrypted, and available across all nodes, and carve out volumes to present to systems like Docker through their own native Docker driver, or (slightly oddly) iSCSI / FC!!! They even have VAAI support in development!
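
From the container side, consuming a volume from a third-party driver looks much like any other Docker volume. Here's a hedged sketch using the Docker SDK for Python; the driver name and options are my assumptions for illustration, not taken from the StorageOS docs:

```python
# Hedged sketch: creating and mounting a volume via a named Docker volume
# driver. The "storageos" driver name is assumed for illustration.
import docker

client = docker.from_env()
vol = client.volumes.create(name="appdata", driver="storageos")

# Attach it to a container exactly like any other Docker volume.
client.containers.run(
    "postgres:latest",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={vol.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```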

Overall, I think it's a pretty interesting product. At first look it feels a bit like a traditional array in a container package, much like if you containerised an enterprise app and then just used it as before; a traditional array with some container plugins, rather than something very targeted and container-specific. StorageOS do have an OS driver to let you mount their volumes directly from containers, but there are other things out there today which do that anyway (e.g. Flocker).

I would say their messaging is a little inconsistent at the moment, and adding things like FC integration early on feels a bit odd if they're positioning themselves as a container play. They do, however, state clearly that they're targeting enterprises and want to make the on-boarding process as simple and frictionless as possible. I do worry that this "all things to all people" approach could be a wee bit risky at this early stage, and that being more laser-focused in the short to medium term would allow them to differentiate more.

StorageOS Cloud

The founders were very specific when they stated that they are building a clustered array with synchronous remote replicas, not a distributed storage array. Async replication is coming, which will be critical to maintaining performance in a hybrid-cloud or multi-cloud setup. I really like the fact that you can stretch the same hybrid storage environment between your on-premises and cloud infrastructure using a single storage solution. The same solution can actually be used to span multiple public clouds as well, providing a resilient storage solution between, say, AWS and Azure, all of which is deduped and encrypted of course! This could be very interesting indeed, as customers look to protect their workloads from large public cloud outages!

Finally, the StorageOS software is built (as you would expect these days) with APIs at the heart of everything. Even the modern GUI is really just based on API calls to the back end.

The Tekhead Take

Anyway, enough gabbing… It’s still early days, but the storage experience of the founders is certainly solid! Who better than ex-storage admins to provide a product that works well for storage admins?! I’d say there’s a good chance of this becoming a pretty cool product in the future, so definitely one to watch!

You can find a link to their website and beta sign up here:
http://storageos.com/index.php/product/

