Archive for Architecture

Scale-Out Doesn’t Just Mean Applications


A couple of months ago I wrote a post entitled “Scale-Out. Distributed. Whatever the Name, it’s the Future of Computing”.

Taking the concept a step further, I recently started thinking about other elements in IT which are moving in that direction; not just applications and storage, but underlying infrastructure and management elements too.

Then it dawned on me that this really is not a new thing… we’ve been taking this approach for years! Technologies like VMware vSphere have enabled us to become trusting, almost presumptuous, that we can add resources as we need them; increasing the shared pool transparently and enabling us to continue to service requirements whilst eliminating downtime. (You can even use them to scale up on-the-fly if you really have to!)

The current breed of infrastructure engineers and startups has grown up in this era, and the great thing is that this approach has now become part of their DNA! Solutions are typically no longer designed from scratch to be scale-up in nature, hitting some artificial capacity limit or forcing you to scale specific elements of a solution to avoid nasty bottlenecks.

Instead, infrastructure is being designed to scale out natively: distributed architectures, balancing workloads and metadata evenly across the platform. This has the added benefit, of course, of making them more resilient to the failure of individual components.

Backup isn’t Sexy, but it’s Necessary

One great example of this new architecture paradigm (drink!) is Rubrik, a startup in the backup space whom we met at Tech Field Day 12. Their home-grown distributed file system, distributed metadata, built-in off-site replication, and global namespace provide a massively scalable and resilient backup system.

All of the roles from a traditional backup solution (such as backup proxies, media servers, metadata servers, etc.) are now rolled into a single, scale-out platform. As I seem to find myself saying more and more often these days, KISS personified!

With shrinking IT teams, I commonly find that companies are willing to trade budget for time savings. A simple, policy-driven management interface, with off-site replication done over the wire, saves a lot of operational time!

As an added bonus, it can even replicate out to S3, Blob and NFS targets, to give even more options for off-site replication. Of course, a big fat pipe to the internet will cost you more each month; though you’re probably investing in that anyway, to meet your employees’ peak lunchtime demand for Facebook and YouTube! 🙂

Much like any complex machine, Rubrik is pretty impressive under the hood: a masterless cluster management solution, multi-tier flash and disk for performance, and a clever redirect-on-write snapshot chain algorithm, which minimises capacity utilisation whilst providing very granular restores.
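For anyone unfamiliar with the concept, the toy Python sketch below illustrates in general terms why a redirect-on-write snapshot chain is so capacity-efficient (this is not Rubrik’s actual implementation, just the general idea): each snapshot is only a map of pointers, new writes land in fresh blocks, and unchanged data is never duplicated.

    # Toy redirect-on-write snapshot chain -- a generic illustration, not Rubrik's implementation.
    blocks = {}       # physical block store: block_id -> data
    live = {}         # current logical view: offset -> block_id
    snapshots = []    # each snapshot is just a frozen copy of the pointer map

    def write(offset, data):
        block_id = len(blocks)        # new writes are redirected to a fresh block...
        blocks[block_id] = data
        live[offset] = block_id       # ...and the logical offset now points at it

    def take_snapshot():
        snapshots.append(dict(live))  # a snapshot copies pointers, not data

    write(0, "block-A")
    write(1, "block-B")
    take_snapshot()                   # snapshot 1 references both original blocks
    write(1, "block-B-modified")      # only one new physical block is written
    take_snapshot()                   # snapshot 2 still shares block-A with snapshot 1

    print(len(blocks), "physical blocks stored for", len(snapshots), "snapshots")  # 3 blocks, 2 snapshots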

The key thing here, though, is that we don’t really care; we are a consumer society that just wants things to work, as we have more exciting things than backup to worry about!


TLDR;

We have enough complexity in IT these days without having to worry about backup. I would say that the simple-to-manage, scale-out solution from Rubrik is certainly worth considering as part of any PoC or RFP! 🙂

Further Info

You can catch the full Rubrik session at the link below:
Rubrik Presents at Tech Field Day 12

Further Reading

Some of the other TFD delegates had their own takes on the presentation we saw. Check them out here:

Disclaimer: My flights, accommodation, meals, etc. at Tech Field Day 12 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors’ products or services.

What’s your definition of Cloud DR, and how far down do the turtles go?


WARNING – Opinion piece! No Cloud Holy Wars please!

DR in IT can mean many different things to different people. To a number of people I have spoken to in the past, it’s simply HA protection against the failure of a physical host (yikes!)! To [most] others, it’s typically protection against failure of a data centre. As we discovered this week, to AWS customers, a DR plan can mean needing to protect yourself against a failure impacting an entire cloud region!

But how much is your business willing to pay for peace of mind?

When I say pay, I don’t just mean monetarily, I also mean in terms of technical flexibility and agility as well.

What are you protecting against?

What if you need to ensure that in a full region outage you will still have service? In the case of AWS, a great many customers are comfortable that the Availability Zone concept provides sufficient protection for their businesses without the need for inter-region replication, and this is perfectly valid in many cases. If you can live with the potential for a few hours’ downtime in the unlikely event of a full region outage, then the cost and complexity of extending beyond one region may be too much.

That said, as we saw from the failure of some AWS capabilities this week, if we take DR in the cloud to its most extreme, some organisations may wish to protect their business against not only a DC or region outage, but even a global impacting incident at a cloud provider!

This isn’t just technical protection either (for example against a software bug which hits multiple regions); what if a cloud provider goes under due to a financial issue? Even big businesses can disappear overnight (just ask anyone who used to work for Barings Bank, Enron, Lehman Brothers, or even 2e2!).

Ok, it’s true that the likelihood of your cloud provider going under is pretty teeny tiny, but just how paranoid are your board or investors?


Ultimate Cloud DR or Ultimate Paranoia?

For the ultimate in paranoia, some companies consider protecting themselves against a total outage by replicating between multiple clouds. In doing so, however, they must stick to using the lowest common denominator between clouds to avoid incompatibility, or indeed any potential for the dreaded “lock-in”.

At that point, they have then lost the ability to take advantage of one of the key benefits of going to cloud; getting rid of the “undifferentiated heavy lifting” as Simon Elisha always calls it. They then end up less agile, less flexible, and potentially spend their time on things which fail to add value to the business.

What is best for YOUR business?

These are all the kinds of considerations which the person responsible for an organisation’s IT DR strategy needs to consider, and it is up to each business to individually decide where they draw the line in terms of comfort level vs budget vs “lock-in” and features.

I don’t think anyone has the right answer to this problem today, but perhaps one possible solution is this:

No cloud is going to be 100% perfect for every single workload, so why not use this fact to our advantage? Within reason, it is possible to spread workloads across two or more public clouds based on whichever is best suited to those individual workloads. Adopting a multi-cloud strategy which meets business objectives and technical dependencies, without going crazy on the complexity front, is a definite possibility in this day and age!

(Ok, perhaps even replicating a few data sources between them, for the uber critical stuff, as a plan of last resort!).

The result is potentially a collection of smaller fault domains (aka blast radii!), making the business more resilient to significant outages from major cloud players, as only some parts of their infrastructure and a subset of applications are then impacted, whilst still being able to take full advantage of the differentiating features of each of the key cloud platforms.

Of course, this is not going to work for everyone, and plenty of organisations struggle to find talent to build out capability internally on one cloud, never mind maintaining the broad range of skills required to utilise many clouds, but that’s where service providers can help, both in terms of expertise and support.

They simply take that level of management and consulting a little further up the stack, whilst enabling the business to get on with the more exciting and value-added elements on top. Then it becomes the service provider’s issue to make sure they are fully staffed and certified on your clouds of choice.

*** Full Disclosure *** I work for a global service provider who does manage multiple public clouds, and I’m lucky enough to have a role where I get to design solutions across many types of infrastructure, so I am obviously a bit biased in this regard. That doesn’t make the approach any less valid! 🙂

The Tekhead Take

Whatever your thoughts on the approach above are, it’s key to understand what the requirements are for an individual organisation, and where their comfort levels lie.

An all-singing, all-dancing, multi-cloud, hybrid globule of agnostic cloudy goodness is probably a step too far for most organisations, but perhaps a failover physical host in another office isn’t quite enough either…

I would love to hear your thoughts! Don’t forget to comment below!

VulcanCast Follow Up – A few thoughts on 60TB SSDs

So last week I was kindly invited to share a ride in Marc Farley‘s car (not as dodgy as it sounds, I promise!).

The premise was to discuss the recent announcements around Seagate’s 60TB SSD, Samsung’s 30TB SSD, their potential use cases, and how on earth we can protect the quantities of data which will end up on these monster drives?!

Performance

As we dug into a little in the VulcanCast, many use cases will present themselves for drives of this type, but the biggest challenge is that the IOPS density of the drives is not actually very high. On a 60TB drive with 150,000 read IOPS (my guess, though not confirmed, is ~100,000 or fewer write IOPS), the average IOPS per GB is actually only a little higher than that of 15K SAS drives. When you start adding deduplication and compression into the mix, if you are able to achieve around 90-150TB of effective capacity per drive, you could easily be looking at IOPS/GB performance approaching smaller 10K SAS devices!
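To put some rough numbers on that, here is a back-of-envelope sketch using the figures above (the effective-capacity figure is my own optimistic assumption):

    # Back-of-envelope IOPS density for the 60TB drive discussed above.
    # The effective capacity assumes high dedupe/compression ratios; all numbers are rough.
    raw_gb = 60_000              # 60TB raw capacity
    effective_gb = 150_000       # ~150TB effective, at the optimistic end
    read_iops = 150_000          # quoted read IOPS

    print(f"Raw IOPS/GB:       {read_iops / raw_gb:.2f}")        # ~2.5
    print(f"Effective IOPS/GB: {read_iops / effective_gb:.2f}")  # ~1.0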

The biggest benefit, of course, is that you achieve this performance in a minuscule footprint by comparison to any current spindle type. Power draw is orders of magnitude lower than 10/15K, and (by my estimates) at least 4x lower than NL-SAS / SATA at peak, and way more at idle. As such, a chunk of the additional cost of using flash for secondary-tier workloads could be soaked up by your space and power savings, especially in high-density environments.

In addition, the consistency of the latency will open up some interesting additional options…

SAS bus speeds could also end up being a challenge. Modern storage arrays often utilise 12Gb SAS to interconnect the shelves and disks, which gives you multiple SAS channels over which to transfer data. With over half a PB of usable storage in just a dozen drives (potentially 1PB with compression and dedupe), that’s a lot of storage to stick on a single channel! In the long term, faster connectivity methods such as NVMe will help, but in the short term we may even have to see some interesting scenarios with one controller (and channel) for every few drives, just to ensure we don’t saturate bandwidth too easily.
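As a rough illustration (assuming ~4.8GB/s usable for a 4-lane 12Gb SAS wide port, which is my own ballpark figure), simply reading a dozen of these drives end to end over a single channel would take the best part of two days:

    # Rough illustration: time to read a dozen 60TB drives over one SAS wide port.
    # Assumes ~4.8GB/s usable for a 4-lane 12Gb SAS port; real-world figures will vary.
    drives = 12
    capacity_tb = 60
    port_gb_per_sec = 4.8

    total_bytes = drives * capacity_tb * 1e12
    hours = total_bytes / (port_gb_per_sec * 1e9) / 3600
    print(f"~{hours:.0f} hours just to read every drive once over a single channel")  # ~42 hours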

Use Cases

For me, the biggest use cases for this type of drive are going to be secondary storage workloads which require low(ish) latency, a reasonable number of predominantly read IOPS, and consistent performance even when a little bit bursty. For example:

  • Unstructured data stores, such as file / NAS services where you may access data infrequently, possibly tiered with some faster flash for cache and big write bursts.
  • Media storage for photo and video sites (e.g. Facebook, but there are plenty of smaller ones such as Flickr, Photobox, Funky Pigeon, Snapfish, etc.); indeed, the same types of organisations we discussed at the Storage Field Day roundtable session on high-performance object storage. One big disadvantage here would be the inability to dedupe / compress very much, as you typically can’t expect high ratios for media content, which then has the effect of pushing up the cost per usable GB.
  • Edge cache nodes for large media streaming services such as Netflix, where maximising capacity and performance in a small footprint to go in other providers’ data centres is pretty important, whilst being able to provide consistent performance for many random read requests.

For very large storage use cases, I could easily see these drives replacing 10K SAS and, if the price can be brought down sufficiently, starting to edge into competing with NL-SAS / SATA for highly dedupable (is that a word?) data types in a few years.

Data Protection

Here’s where things start to get a little tricky… we are now talking about protecting data in such massive quantities that the failure of just two drives within a short period has the potential to cause the loss of many hundreds of terabytes of data. At the same time, adding additional drives for protection (at tens of thousands of dollars each) comes with a pretty hefty price tag!

Unless you are buying a significant number of drives, the cost of your “N+1”, RAID, erasure coding, etc. is going to be so exorbitant that you may as well buy a larger number of small drives so you don’t waste all of that extra capacity. As such, I can’t see many people using these drives in quantities of fewer than 12-24 per device (or perhaps per RAIN set in a hyper-converged platform), which means even with a conservatively guesstimated cost of $30k per drive, you’re looking at the best part of $350-$700k for your disks alone!

Let’s imagine, then, the scenario where you have a single failed drive, and 60TB of your data is now hanging in the balance. Would you want to replace that drive in a RAID set and, based on the write rates suggested so far, wait 18-24 hours for it to resync? I would be pretty nervous to do that myself…

In addition, we need to consider the rate of change of the data. Let’s say our datastore consists of 12x60TB drives; we probably have about 550TB or more of usable capacity. Even with a rate of change of just 5%, we need to be capable of backing up 27TB from that single datastore per night just to keep up with the incrementals! If we were to use a traditional backup solution against something like this, achieving that in a typical 10-hour backup window would require a consistent 6Gbps, never mind any full backups!
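For anyone who wants to check my maths, here is roughly where that 6Gbps figure comes from, using the assumptions above:

    # Rough backup-window maths for the 12 x 60TB example above.
    usable_tb = 550          # approximate usable capacity of the datastore
    daily_change = 0.05      # 5% daily rate of change
    window_hours = 10        # typical nightly backup window

    changed_bytes = usable_tb * 1e12 * daily_change
    gbps = changed_bytes * 8 / (window_hours * 3600) / 1e9
    print(f"~{changed_bytes / 1e12:.1f}TB of incrementals, needing ~{gbps:.1f}Gbps sustained")  # ~27.5TB, ~6.1Gbps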

Ok, let’s say we can achieve these kinds of backup rates comfortably. Fine. Now, what happens if we have a failure of a shelf, parity group or pool of disks? We’ve probably just lost 250+TB of data (excluding compression or dedupe) which we now need to restore from backup. Unless you are comfortable with an RTO measured in days to weeks, you might find that the restore time for this, even over a 10Gbps network, is not going to meet your business requirements!!!
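Again, a quick sketch of the restore maths, generously assuming you could sustain full line rate on a 10Gbps link:

    # Rough restore-time maths for losing a ~250TB shelf or parity group.
    lost_tb = 250
    link_gbps = 10

    hours = lost_tb * 1e12 * 8 / (link_gbps * 1e9) / 3600
    print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) at sustained line rate")  # ~56 hours, ~2.3 days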

This leaves us with a conundrum: how do we increase the durability of the data against disk failures, and how do we minimise the rebuild time in the event of a media failure, whilst still keeping costs reasonably low?

Today, the best option seems to me to be the use of erasure coding. In the event of the loss of a drive, the data is automatically rebuilt and redistributed across many or all of the remaining drives within the storage device. Even with, say, 12-24 drives in a “small” system, this would mean data being rebuilt back up to full protection in 30-60 minutes, instead of 18-24 hours! That said, this assumes the connectivity on the array bus / backplane is capable of handling the kinds of bandwidth generated by the rebuilds, and that this doesn’t have a massive adverse impact on the array processors!
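To illustrate why a distributed rebuild is so much faster, here is a rough sketch; the ~900MB/s sustained write figure is my own guess, and it assumes ideal parallelism, which you will never quite achieve in practice:

    # Rough comparison: rebuilding a failed 60TB drive to a single spare
    # versus redistributing it across the surviving drives with erasure coding.
    # Assumes ~900MB/s sustained write per drive and ideal parallelism -- both optimistic.
    drive_tb = 60
    write_mb_per_sec = 900
    surviving_drives = 23          # e.g. a 24-drive system with one failure

    single_target_hours = drive_tb * 1e12 / (write_mb_per_sec * 1e6) / 3600
    distributed_minutes = single_target_hours * 60 / surviving_drives

    print(f"Rebuild to one spare drive: ~{single_target_hours:.0f} hours")                      # ~19 hours
    print(f"Distributed across {surviving_drives} drives: ~{distributed_minutes:.0f} minutes")  # ~48 minutes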

The use of “instant restore” technologies, where you can mount data directly from the backup media to get up and running ASAP, then move the data transparently in the background, also seems to me to be a reasonable mitigation. In order to maintain a decent level of performance, this will likely drive more use of flash in the data protection storage tiers as well as in production.

The Tekhead Take

Whatever happens, the massive quantities of data we are beginning to see, and the drives we plan to store them on, are going to lead us to new (as yet, not even invented) forms of data protection. We simply can’t keep up with the rates of growth without them!

VulcanCast

Catch the video here:

The video and full transcript are also available here:
Huge SSDs will force changes to data protection strategies – with @alexgalbraith

Does a Serverless Brexit mean goodbye to infrastructure management problems?

Last week I was able to get myself along to the London CloudCamp event at the Crypt on the Green, for an evening with the theme of “We’ve done cloud, what’s next?”. For those of you unfamiliar with the event, CloudCamp is an “unconference” where early adopters of Cloud Computing technologies exchange ideas. As you can probably guess from the theme title, many of the discussions were around the concept of “serverless” computing.

So, other than being something which seems to freak out my spell check function, what is “serverless” then?

I think Paul Johnston of movivo summed it up well, as “scaling a single function / object in your code instead of an entire app”, which effectively means a microservices architecture. In practical terms, it’s really just another form of PaaS, where you upload your code to a provider (such as AWS Lambda), and they take care of managing all of the underlying infrastructure including compute, load balancing, scaling, etc, on your behalf.

The instances then simply act upon events (i.e. they are event-driven), which could be anything from an item hitting a queue to a user requesting a web page, and when not required, they are not running. AWS currently supports a limited subset of languages, specifically Node.js, Java, and Python.
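To make that concrete, the sketch below shows roughly what an event-driven function looks like on AWS Lambda in Python. The shape of the event dictionary depends entirely on the trigger (a queue message, an API Gateway request, etc.); this example assumes an API Gateway style event with an optional query string parameter, and the function only runs (and is only billed) while handling the event:

    import json

    def lambda_handler(event, context):
        # 'event' carries the trigger payload; 'context' carries runtime metadata.
        # Assumes an API Gateway style event with an optional 'name' query parameter.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }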


There are of course other vendors who provide similar platforms, including Google Cloud Functions, IBM Bluemix OpenWhisk, etc. They tend to support a similarly small pool of languages; however, some are more agnostic and will even allow you to upload Docker containers as well. Iron.io also allows you to do serverless using your own servers, which seems a bit of an oxymoron! 🙂

Anyway, the cool thing about serverless is that you can therefore “vote to leave” your managed or IaaS infrastructure (yes, I know, seriously tenuous connection!), and just concentrate on writing your applications. This is superb for developers who don’t necessarily have the skills or the time to manage an IaaS platform once it has been deployed.


The Case for Remain

Much like the Brexit vote, however, it does come with some considerations and challenges, and you may not get exactly what you expected when you went to the polling booth! For example:

  • You may believe you are now running alone, but you are ultimately still dependent on actual servers! However, you no longer have access to those servers, so basic things like logging and performance monitoring suddenly become a lot trickier.
  • Taking this a step further, testing and troubleshooting becomes more challenging. When a fault occurs, how can you trace exactly where it occurred? This is further exacerbated if you are integrating with other SaaS and PaaS platforms, such as Auth0 (IAM), Firebase (DB), etc. This is already a very common architectural pattern for serverless designs.
    You therefore need to start introducing centralised logging and error trapping systems which will allow you to see what’s actually going on, which of course sounds a lot like infrastructure management again!
  • It’s still early days for serverless, so things like documentation and support are a lot more scarce. If you plan to be an early serverless adopter, you had better know your technical onions!
  • As with any microservices architecture, with great flexibility comes great complexity! Instead of managing just a handful of interacting services, you could now be managing many hundreds of individual functions. You can understand each piece easily, but looking at the big picture is not so simple!
  • Another level of complexity is in billing of course. Serverless services such as AWS Lambda charge you per 100ms of compute time, and per 1 million requests. If you are paying for a server and some storage, even in a cloud computing model, it’s reasonably easy to understand how much your bill will be at the end of the month.
    Paying for transactions and processing time, however, could potentially provide a few nasty surprises, especially if you come under heavy load or even a DoS attack. (I’ve included a rough costing sketch just after this list.)
  • Finally, the biggest and most obvious concern about serverless is vendor lock-in. Indeed this is potentially the ultimate lock-in as once you pick a vendor and write your application specific to their cloud, moving that bad boy is going to mean some major refactoring and re-writes!
    As long as that vendor’s pricing is competitive, this shouldn’t matter too much (after all, every single vendor involves lock-in to some varying degree), but if that vendor manages to take the lion’s share of the market, they could easily change that pricing and you are almost powerless to react (at least not without significant additional investment).
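As promised above, here is a rough costing sketch. The prices are indicative assumptions only (I believe the list prices are roughly $0.20 per million requests and ~$0.0000167 per GB-second, billed in 100ms increments, ignoring the free tier), so treat this as an illustration of the billing model rather than a quote:

    # Rough, illustrative Lambda cost model -- the prices are assumptions, so check current pricing.
    requests_per_month = 5_000_000
    avg_duration_s = 0.2           # average billed duration per invocation (100ms increments)
    memory_gb = 0.5                # a 512MB function

    price_per_million_requests = 0.20
    price_per_gb_second = 0.0000167

    gb_seconds = requests_per_month * avg_duration_s * memory_gb
    request_cost = requests_per_month / 1_000_000 * price_per_million_requests
    compute_cost = gb_seconds * price_per_gb_second

    print(f"Requests: ${request_cost:.2f}, compute: ${compute_cost:.2f}, total: ~${request_cost + compute_cost:.2f}/month")
    # Roughly $1.00 + $8.35 = ~$9.35/month for 5M invocations at 200ms / 512MB
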
The Case for Leave

If you understand and mitigate (or ignore!) the above, however, serverless can be quite a compelling use case. For example:

  • From an environmental perspective, you will probably never find a more efficient or greener computing paradigm. It minimises the number of extraneous operating systems and virtual or physical machines required, as this is truly multi-tenant computing. Every serverless host could undoubtedly be run at 70-90% utilisation, rather than the 10-50% you typically see in most enterprise DCs today! If you could take every workload in the world and switch it to serverless overnight, based on those efficiency levels, how many data centres, how much power and how many thousands of tonnes of metals could you save? Greenpeace should be refactoring their website as we speak!
  • Although you do have to introduce a number of tools to help you track what is actually going on with your environment, you can move away from doing a whole load of the mundane management tasks such as patching, OS management, etc., and move up the stack to spend your resources on more productive and creative activities; actually adding business value (crazy idea! I thought in IT we just liked patching for a living?)!
  • The VM sprawl we have today would be reduced as workloads are rationalised. That said, you just end up replacing this with container or function sprawl, which is even harder to manage! 🙂
  • You gain potentially massive scalability for your applications. Instead of scaling entire applications, you just scale the bottleneck functions, which means your application becomes more efficient overall. Definitely time to read The Goal by Goldratt and understand the Theory of Constraints before you go down this route!
  • Finally, you can potentially see significant cost savings. If there are no requests, then there is no charge! If you were running some form of event-driven application or trigger, instead of paying tens or hundreds of pounds per month for a server, you might only be paying pennies! Apply this to dev/test platforms which might only be needed to run workloads for a few hours a day, or production platforms which only need to process transactions when customers are actually online, and it really starts to add up; even more so than with auto-scaling IaaS platforms.
    Taking that a step further, if you are running a startup, why pay hundreds or thousands a month for compute you “might” need but which often sits idle, rather than throwing your functions into a scalable platform which will only charge you for actual use? I know where I would be putting my money if I were a VC…


Closing Thoughts

Serverless is a really interesting technology move for the industry which (as always) comes with its own unique set of benefits and challenges. I can’t see it ever being the de facto standard for everything (for the same reasons we still use mainframes and physical servers today); however, there are plenty of brilliant use cases for it. If devs and startups are comfortable with the vendor lock-in and other risks, why wouldn’t they consider using it?
