Tag Archive for Azure

What’s your definition of Cloud DR, and how far down do the turtles go?


WARNING – Opinion piece! No Cloud Holy Wars please!

DR in IT can mean many different things to different people. To a number of people I have spoken to in the past, it’s simply HA protection against the failure of a physical host (yikes!). To [most] others, it’s typically protection against the failure of a data centre. And as we discovered this week, to AWS customers, a DR plan can mean needing to protect yourself against a failure impacting an entire cloud region!

But how much is your business willing to pay for peace of mind?

When I say pay, I don’t just mean monetarily; I also mean in terms of technical flexibility and agility.

What are you protecting against?

What if you need to ensure that in a full region outage you will still have service? In the case of AWS, a great many customers are comfortable that the Availability Zone concept provides sufficient protection for their businesses without the need for inter-region replication, and this is perfectly valid in many cases. If you can live with a potential for a few hours downtime in the unlikely event of a full region outage, then the cost and complexity of extending beyond one region may be too much.

That said, as we saw from the failure of some AWS capabilities this week, if we take DR in the cloud to its most extreme, some organisations may wish to protect their business against not only a DC or region outage, but even a globally impacting incident at a cloud provider!

This isn’t just technical protection either (for example against a software bug which hits multiple regions); what if a cloud provider goes under due to a financial issue? Even big businesses can disappear overnight (just ask anyone who used to work for Barings Bank, Enron, Lehman Brothers, or even 2e2!).

Ok, it’s true that the likelihood of your cloud provider going under is pretty teeny tiny, but just how paranoid are your board or investors?


Ultimate Cloud DR or Ultimate Paranoia?

For the ultimate in paranoia, some companies consider protecting themselves against the ultimate outage, by replicating between multiple clouds. In doing so, however, they must stick to using the lowest common denominator between clouds to avoid incompatibility, or indeed any potential for the dreaded “lock-in”.

At that point, they have then lost the ability to take advantage of one of the key benefits of going to cloud; getting rid of the “undifferentiated heavy lifting” as Simon Elisha always calls it. They then end up less agile, less flexible, and potentially spend their time on things which fail to add value to the business.

What is best for YOUR business?

These are all the kinds of considerations which the person responsible for an organisation’s IT DR strategy needs to weigh up, and it is up to each business to decide individually where they draw the line in terms of comfort level vs budget vs “lock-in” and features.

I don’t think anyone has the right answer to this problem today, but perhaps one possible solution is this:

No cloud is going to be 100% perfect for every single workload, so why not use this fact to our advantage? Within reason, it is possible to spread workloads across two or more public clouds based on whichever is best suited to those individual workloads. Adopting a multi-cloud strategy which meets business objectives and technical dependencies, without going crazy on the complexity front, is a definite possibility in this day and age!

(Ok, perhaps even replicating a few data sources between them, for the uber critical stuff, as a plan of last resort!).

The result is potentially a collection of smaller fault domains (aka blast radii!), making the business more resilient to significant outages from the major cloud players, as only some parts of their infrastructure and a subset of applications are then impacted, whilst still being able to take full advantage of the differentiating features of each of the key cloud platforms.

Of course, this is not going to work for everyone, and plenty of organisations struggle to find the talent to build out capability internally on one cloud, never mind maintaining the broad range of skills required to utilise many clouds, but that’s where service providers can help, both in terms of expertise and support.

They simply take that level of management and consulting a little further up the stack, whilst enabling the business to get on with the more exciting and value-added elements on top. Then it becomes the service provider’s issue to make sure they are fully staffed and certified on your clouds of choice.

*** Full Disclosure *** I work for a global service provider who does manage multiple public clouds, and I’m lucky enough to have a role where I get to design solutions across many types of infrastructure, so I am obviously a bit biased in this regard. That doesn’t make the approach any less valid! 🙂

The Tekhead Take

Whatever your thoughts on the approach above, it’s key to understand what the requirements are for an individual organisation, and where their comfort levels lie.

An all-singing, all-dancing, multi-cloud, hybrid globule of agnostic cloudy goodness is probably a step too far for most organisations, but perhaps a failover physical host in another office isn’t quite enough either…

I would love to hear your thoughts! Don’t forget to comment below!

Now that’s what I call… Tech Predictions 2017


At this time of year, it is customary to look back at the past 12 months and make some random or not-so-random guesses as to what will happen over the coming 12. As such, what could be more fitting for my final post of 2016?!

Here are a few of my personal best, worst, and easy-guess candidates for 2017…

Tekhead Predictable Tech Predictions 2017

Easy Guesses

Come on Alex, even Penfold could have predicted these!

  • AWS will continue to dominate the cloud market, though the rate at which they deploy new features will start to slow (over 1000 a year is pretty unsustainable!). Their revenues will continue to grow at gangbuster rates; however, their market share will be slightly eroded as people experiment more with their competitors.
  • Microsoft Azure will grow massively (not quite 100%, but not far off it). Their main growth will probably be in hosting enterprises and typical line-of-business applications as people move their legacy junk into the cloud. The recent announcement of the Single Instance VM SLA of 99.9% will definitely accelerate this, as customers will feel less inclined to refactor their applications for cloud.
  • Distributed everything!
  • Docker will start to become more mainstream in production, and less of a Dev/Test-only tool.
  • Google will kill off at least one popular service with multiple millions of users.
  • The homelab market will shrink as people do more and more of their studying in the cloud.
  • Podcasting will become the new blogging (if it hasn’t already!)
  • DellEMC will continue to hack off bits of its anatomy to pay back that cheeky little $67Bn debt.
  • I will continue to use memes as a crutch to make my otherwise lifeless articles marginally more interesting!
Best Guesses

It’s on the cards… maybe?

  • Google will continue to be ignored by most enterprises for Cloud IaaS. They will gain some reasonable growth in the web application space after another mass marketing activity to developers, ISVs and hosters.
  • Oracle grows Cloud revenues 50% or more but market share remains small. Their growth is mainly driven by IaaS revenue as customers begin to move their workloads to be closer to their data in the Oracle PaaS and SaaS services.
  • There will be no major storage company IPO in 2017, i.e. over $200m.
  • Many storage startups will run out of funding and die on the vine (depressing, I know!). Their IP will be snapped up by the old guard storage companies in the ensuing fire sales…
  • 3D XPoint will begin to creep into storage arrays by the end of the year, fuelling another storage VC funding bubble for at least another 12 months for any company that claims to have an innovative way to use it.
  • A major cloud provider suffers a global outage.
Worst Guesses

These probably won’t happen, but if any of them do, I’ll claim smugly that I knew they were always going to!

  • Pure Storage will acquire a storage startup to create their third product line, perhaps a secondary storage company (i.e. not just all-flash) along the lines of Cohesity.
  • Cisco will buy a storage company. They will be more successful at integrating it than they were with Whiptail! (Which wouldn’t be difficult… 😮 )
  • Spanning a single application over multiple clouds becomes a real possibility, as one or more startups come out of stealth to provide innovative ways to span clouds. Nobody buys into it, except maybe for DR.
  • Tekhead.it becomes the most read blog in the world in 2017
  • Cats take over the planet, and dogs are forced to form a rebel alliance which is ultimately victorious when a chihuahua takes out the entire cat leadership in one go with a stolen Reaper drone.
  • Jonah Hill wins Strictly Come Dancing, narrowly defeating Frankie Boyle and Charlie Brooker in the final.
And finally…

Here’s wishing you all an awesome, fun and prosperous 2017!

Cohesity Announces Cloud Integration Services

With the release of v2.0 of their OASIS platform, as presented at Storage Field Day 9 recently, Cohesity’s development team have continued to churn out new features and data services at a significant rate. It seems that they are now accelerating towards the cloud (or should that be The Cloud?) with a raft of cloud integration features announced today!

There are three key new features included as part of this, called CloudArchive, CloudTier and CloudReplicate respectively, all of which pretty much do exactly what it says on the tin!

CloudArchive is a feature which allows you to archive datasets to the cloud (duh!), specifically onto Google Nearline, Azure, and Amazon S3. This would be most useful for things like long-term retention of backups without taking up space on your primary platform.


CloudTier extends on-premises storage, allowing you to use cloud storage as a cold tier, moving your least-used blocks out. If you are like me, you like to understand how these things work deep down in the guts! Mohit Aron, Founder & CEO of Cohesity, kindly provided Tekhead.it with this easy-to-understand explanation of their file and tiering system:

NFS/SMB files are mapped to objects in our system – which we call blobs. Each blob consists though of small pieces – which we call chunks. Chunks are variable sized – approximately ranging from 8K-16K. The variable size is due to deduplication – we do variable length deduplication.

The storage of the chunks [is] done by a completely different component. We group chunks together into what we call a chunkfile – which is approximately 8MB in size. When we store a chunkfile on-prem, it is a file on Linux. But when we put it in the cloud, it becomes an S3 object.

Chunkfiles are the units of tiering – we’ll move around chunkfiles based on their hotness.

So there you have it folks; chunkfile hotness is the key to Cohesity’s very cool new tiering technology! I love it!


With chunkfiles set at approximately 8MB, this seems like a sensible size for moving large quantities of data back and forth to the cloud with minimal overhead. With a reasonable internet connection in place, it should still be possible to recall a “cool” chunk without too much additional latency, even if your application does require it in a hurry.
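For the terminally curious, here is a rough sketch of how such a structure might hang together in code. To be completely clear, this is purely my own illustration based on Mohit’s description above; every name, type, and threshold is invented, and none of it reflects Cohesity’s actual implementation.

```python
# Hypothetical sketch of the blob/chunk/chunkfile layout described above.
# All names and thresholds are invented; this is NOT Cohesity's real code.
from dataclasses import dataclass, field
from typing import List

CHUNKFILE_TARGET = 8 * 1024 * 1024  # ~8MB chunkfile, per the description

@dataclass
class Chunk:
    fingerprint: str  # dedup identity, e.g. a content hash
    size: int         # variable length, roughly 8K-16K after dedup

@dataclass
class Chunkfile:
    chunks: List[Chunk] = field(default_factory=list)
    heat: float = 0.0          # "hotness" score - the unit of tiering
    location: str = "on-prem"  # a Linux file on-prem, an S3 object in the cloud

    def size(self) -> int:
        return sum(c.size for c in self.chunks)

def tier_chunkfiles(chunkfiles: List[Chunkfile], cold_threshold: float = 0.2) -> None:
    """Demote cold chunkfiles to the cloud tier and recall hot ones on-prem."""
    for cf in chunkfiles:
        if cf.heat < cold_threshold and cf.location == "on-prem":
            cf.location = "cloud"    # would be uploaded as an S3 object
        elif cf.heat >= cold_threshold and cf.location == "cloud":
            cf.location = "on-prem"  # would be recalled to a local file
```

The appealing property of tiering at the ~8MB chunkfile level, rather than per 8K-16K chunk, is that each cloud operation moves a meaningful amount of data, keeping per-object overheads and request counts down.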

You can find out more information about these two services in a new video they have just published to their YouTube channel.

The final feature, and the one of most interest to me, is CloudReplicate, though this is not yet ready for release and I am keen to find out more as information becomes available. With CloudReplicate, Cohesity has made the bold decision to allow customers to run a software-only edition of their solution in your cloud of choice, with native replication from their on-premises appliances, paving the way to true hybrid cloud, or even simply providing a very clean DR strategy.

This solution is based on their native on-premises replication technology, and as such will support multiple replication topologies, e.g. 1-to-many, many-to-1, many-to-many, etc, providing numerous simple or complex DR and replication strategies to meet multiple use cases.
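To picture what that flexibility offers, here is a trivial, hypothetical illustration (my own invention, and emphatically not Cohesity’s configuration format or API) of those topologies modelled as a simple directed graph of sites:

```python
# Illustrative only: replication topologies as a directed graph of sites.
# Site names are invented; this does not reflect Cohesity's actual API.
from typing import Dict, List, Tuple

def replication_pairs(topology: Dict[str, List[str]]) -> List[Tuple[str, str]]:
    """Flatten a topology into discrete (source, target) replication pairs."""
    return [(src, dst) for src, targets in topology.items() for dst in targets]

one_to_many = {"london-dc": ["aws-eu-west", "azure-uk-south"]}    # fan-out
many_to_one = {"branch-a": ["hq"], "branch-b": ["hq"]}            # fan-in
many_to_many = {"hq": ["dr-a", "dr-b"], "dr-a": ["hq", "dr-b"]}   # mesh

print(replication_pairs(one_to_many))
# [('london-dc', 'aws-eu-west'), ('london-dc', 'azure-uk-south')]
```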


It could be argued that the new solution potentially provides their customers with an easy onramp to the cloud, and away from Cohesity’s appliances, in a few years’ time. I would say that anyone making an investment in Cohesity today is likely to continue to use their products for some time, and between now and then Cohesity will have the time to significantly grow their customer base and market share, even if it means enabling a few customers to move away from on-prem down the line.

I have to say that once again Cohesity have impressed with their vision and speedy development efforts. If they can back this up with increased sales to match, their future certainly looks rosy!

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Storage Field Day 9 were provided by Tech Field Day, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.
