Tag Archive for architecture

Docker – State of the Nation (aka Observations of a Brit)

It may surprise you to learn that Docker is actually quite old now (at least in Startup terms!), having released the first version of their very cool software in March 2013!

Throughout that time, Docker (the company) have moved at a fairly rapid pace in terms of feature and bug releases, with an average of a point release about every quarter and minor releases every month (or more)!

Whilst sitting here awaiting my flight to VMworld Europe 2017, where there are MANY MANY MANY (MANY!) sessions on Docker, Photon, Kubernetes, etc. on the schedule, I am prompted to consider Docker’s rise to popularity, and finish off a post I began a few months back after Tech Field Day 12!

Well come on Galbraith… get on with it then!

My experience in the UK IT industry over the last (nearly) 15 years has taught me a few things, one of which is that whenever a new technology begins serious adoption in the US, it usually becomes popular in the UK within 2-3 years. That said, this gap has been squeezed down a little in the past few years as companies move towards more agile development and deployment methods. Fail fast is becoming the mantra of many more organisations, though some people I speak to still wake up with night sweats at even the thought!

The first time a customer asked me about Docker in the UK was over 3 years ago, yet in all that time, people I talk to outside of the social media bubble many of us live in have been virtually silent about it; that is, until now. Docker is becoming a weekly conversation topic with a lot of the organisations I talk to, with many people wanting to jump on board the bandwagon. The switch from an operating system-centric view of the world to a more application- and service-oriented (or should that be microservice-oriented?) view of the world is becoming far more prevalent in my experience.

Drivers to Docker Adoption

So what is it about this Docker stuff which seems to be catching the attention of people I talk to? A few common themes I hear are:

Automation of code deployment pipelines (CI/CD) to increase business agility
I think this is probably the number one driver to Docker adoption for people I talk to. Automation of CI/CD pipelines has become so common now that it is almost the norm. Yes, it is tricky to do this with more traditional applications, but it certainly isn’t impossible. Using containers as the delivery mechanism for your application provides very consistent and repeatable outcomes. I mean, you can even get Oracle DB in a container now?!?!

That said, once you dockerise your applications, there are many further challenges you will run into, including something as simple as how to apply your current security tooling, policies and procedures to these new environments.
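
To illustrate the repeatability point, here is a minimal sketch using the Docker SDK for Python; the image name, tag and port mapping are just placeholders for illustration, not anything specific to a particular pipeline.

    import docker

    # Connect to the local Docker daemon using the standard environment settings
    client = docker.from_env()

    # Build an image from the Dockerfile in the current directory.
    # The same Dockerfile produces the same image wherever it is built,
    # which is where the consistency and repeatability come from.
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Run the freshly built image, mapping container port 8080 to the host
    container = client.containers.run(
        "myapp:1.0",
        detach=True,
        ports={"8080/tcp": 8080},
        name="myapp-test",
    )

    print(container.status)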

Maturity of the platform
The Docker code base and third party ecosystem have finally reached a point of maturity where many of the networking and storage issues of the past are beginning to reduce to within acceptable risk boundaries.

Improved cross-industry support
Off the back of this maturity, a swathe of vendors have put their names behind the Docker ecosystem; from VMware to OpenStack, AWS to Azure, Google to Cloud Foundry, everyone is getting on board! You no longer have to buy support direct from Docker (the company), but can instead get it from your cloud vendor, along with a managed orchestration tier too, such as Docker Swarm, Kubernetes or Mesos!

Because Cloud
Yes, you can Dockerise your existing applications for use on premises, but many organisations I speak to are using Docker as a method to allow their developers to write code on premises, test in their dev environments on-prem or in the cloud, then deploy in a consistent fashion to their brand spanking new production cloud platforms. PaaS solutions such as Azure WebApps and AWS Elastic Beanstalk are becoming a good option for customers who just want to write code, but for those who want that little bit more control, Docker gives them flexibility and consistency.

CIO/CTO CV Padding
I hate to play the cynic, but I think there is definitely a significant percentage of CIOs/CTOs who are doing “digital transformations through containerisation and cloud” specifically to pad out their CVs and help them get a better gig.

This is otherwise known as a “Resume-driven IT Strategy”!

I am aware of one CIO who deliberately went to a cloud platform, even though it was significantly more expensive than a traditional managed hosting solution of a similar spec, when their business case and steady workload drew few, if any, discernible benefits from the use of cloud.

When I hear people refer to technologies such as VMware vSphere as “Legacy”, it really drives home to me the shift we are all going through, yet again, in the industry. This is another reason CIOs/CTOs/Heads of IT give me for wanting cloud and containers. That said, I still struggle to find a single person who doesn’t have at least one physical server in their infrastructure, so just like the mainframe before it, I don’t think the hypervisor is going away any time soon!

The Tekhead Take

As expected, the lag of a couple of years between the US and the UK in adoption of containers was apparent, but now is most definitely the time! Whether for positive or negative reasons, Docker has become part of the information technology zeitgeist in the UK…

Want to Know More?

I was fortunate enough to meet with the product team from Docker at Tech Field Day 12 towards the end of last year. It was a really interesting session which covered many of the enterprise networking and security features recently introduced to the platform, along with Docker’s new support offerings. I highly recommend checking it out!

Docker Presents at Tech Field Day 12

Some of the other TFD12 delegates had their own thoughts on the session and Docker as a whole. You can find them here:

Disclaimer/Disclosure: My flights, accommodation, meals, etc, at Tech Field Day 12 were provided by Tech Field Day / Gestalt IT, but there was no expectation or request for me to write about any of the vendors products or services and I was not compensated in any way for my time at the event.

Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS. This week, we talk about the latest feature of AWS, EFS (aka Elastic File System).

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

20. Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)

A big challenge when designing highly available web infrastructures has historically been how to provide a centralised content store for static content without wasting resources.

A classic model for this is a pair of web / file servers with either rsync or Gluster to replicate the content between them. In the Windows world, this would be something like either a WSFC (failover cluster) or perhaps something evil like a DFS replicated share. This means that not only are you wasting money on multiple virtual machines / instances just to serve file content, but you are also adding significant risk and complexity in the replication and failover between these machines.

Enter AWS EFS! At a simple level, EFS is basically an NFS (v4.1) share within the AWS cloud, which is replicated across all AZs in any one region. No need for managing and replicating between instances, or indeed paying for EC2 instances just to create file shares! Great!
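
If you want to stand one up programmatically rather than through the console, here is a rough boto3 sketch; the subnet and security group IDs are obviously placeholders, and you would create one mount target per AZ you want to mount from.

    import boto3

    efs = boto3.client("efs", region_name="eu-west-1")

    # Create the file system itself (the CreationToken just makes the call idempotent)
    fs = efs.create_file_system(
        CreationToken="my-static-content",
        PerformanceMode="generalPurpose",
    )

    # Create a mount target in one subnet; repeat for a subnet in each AZ
    # you want your instances to mount from.
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-12345678",          # placeholder
        SecurityGroups=["sg-12345678"],      # placeholder - must allow NFS (TCP 2049)
    )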

As this is still a relatively immature product, there are still a few “features” to be aware of:

  1. There is no native EFS backup solution (yet!). I’m sure this will come very soon. As we have re:Invent coming up, it wouldn’t surprise me if something came out then. In the meantime, your main methods would be either to use Data Pipeline to back up to another EFS store, or to mount EFS and back it up through an EC2 instance using your own tools or scripts. I would be concerned about backing up EFS to EFS (if in the same region), as this is putting all your eggs in one basket. Hopefully, AWS will provide other target options in the future.
  2. There is no native encryption of EFS data as yet. If you need this right now, you could achieve it by simply pre-encrypting the data in your application first, before it is written to EFS. Alternatively, just hold your breath as AWS have already stated that:
    “Amazon EFS does not currently provide the option to encrypt data at rest, but we will offer this option soon”.
  3. If you have less than about 100GB, then due to the way the performance burst credits work, you may not get the performance you need. The more you buy, the more performance you get, so don’t short-change your app for the sake of a few dollars!

    “Amazon EFS uses a credit system to determine when file systems can burst. Each file system earns credits over time at a baseline rate that is determined by the size of the file system, and uses credits whenever it reads or writes data”.

    In early testing, it has been seen that very small filesystems can lead to IO starvation and performance issues. I would recommend you start with 100GB as a minimum (subject to your workload requirements of course!). This is still pretty cheap at only about $30-33 a month; a lot less than even a pair of EC2 instances, never mind the complexity reduction benefits. KISS!

    Of course, the more caching you can do on that content, e.g. using CloudFront as a CDN, the lower the IO requirements on your EFS store.

    For more info on performance see here:
    Amazon EFS Performance

  4. And finally… being NFS based, this is obviously primarily aimed at Linux solutions. It would be nice to think that AWS will release an SMB version in the future… we can but hope!

Thanks to my learned colleague Tom Ellis for the tip! As he says, “The size needs to be determined by the throughput needs, and not the storage capacity needs.”
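
To put some rough numbers on that: at the time of writing, EFS earns burst credits at a baseline of around 50 MiB/s per TiB of data stored (so roughly 5 MiB/s for a 100GB file system). The little sketch below estimates the minimum file system size you would want for a given sustained throughput requirement; the figures are my reading of the current EFS performance documentation, so do check them against the link above.

    # Rough EFS sizing helper - based on the published baseline rate of
    # ~50 MiB/s of throughput per TiB stored (0.05 MiB/s per GiB).
    BASELINE_MIBS_PER_GIB = 0.05

    def min_efs_size_gib(required_mib_per_sec, floor_gib=100):
        """Smallest file system size (GiB) whose baseline throughput
        covers the sustained requirement, with a sensible floor."""
        size_for_throughput = required_mib_per_sec / BASELINE_MIBS_PER_GIB
        return max(size_for_throughput, floor_gib)

    # Example: an app that needs a sustained 10 MiB/s wants ~200 GiB of data
    # on the file system, not just the 20 GiB of "real" content it holds.
    print(min_efs_size_gib(10))   # 200.0
    print(min_efs_size_gib(2))    # 100 (the floor kicks in)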

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 9 – Scale-Up Patching

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below is another tip for designing and implementing solutions on Amazon AWS. In this case, Scale-Up Patching of Auto-Scaling Groups (ASGs) and a couple of wee bonuses about Dark Launch techniques.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

19. AWS Tips and Gotchas – Part 9 – Scale-Up Patching in ASGs

Very quick tip on Auto Scaling Groups this week, courtesy of an awesome session I attended at the AWS User Group UK (London) last week on DevOps, presented by Chris Turvil from The Trainline.

Assuming you just need to do a code release to an existing farm of servers running in an ASG, and you aren’t planning anything complex such as a DB schema update, you can use a technique called “Scale-Up Patching”. I hadn’t heard the term before; it’s actually incredibly simple, but very effective! There are a couple of methods you might use, depending on how you deliver your code, but the technique is the same; make your new code or image live, double the minimum size of your ASG, then halve it! Job done!

So how does this work?

If you have looked into the detail of ASGs, assuming you have your instances spread roughly evenly over multiple AZs, then when an ASG shrinks / scales down, the oldest EC2 instances are killed first. For more detail on the exact rules, see here.

If you double the size of your current number of instances, all of the new instances will be deployed with your new code version. This leaves you with a farm of 50% vOld and 50% vNew. When you then tell the ASG to scale to the original size, it will obviously kill off all of the vOld instances, leaving your entire farm upgraded. If you found an issue and had to roll back, you simply rinse and repeat the same exercise! How brilliant is that?!

This process will work exactly the same regardless of whether you deploy your code via updated AMIs each time, or simply post-boot using a user-data script which pulls your source from a bucket, repo, or similar. Either way, the result is the same and infinitely repeatable!
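
As a rough illustration of the mechanics, here is a minimal boto3 sketch of the double-then-halve step; the ASG name is a placeholder, and in reality you would wait for the new instances to pass health checks (and soak for a while) before scaling back down.

    import boto3

    asg_name = "my-web-asg"   # placeholder
    autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

    # Look up the current sizing of the ASG
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name]
    )["AutoScalingGroups"][0]
    original_min = group["MinSize"]
    original_desired = group["DesiredCapacity"]

    # Step 1: double up - the new instances launch with the new code/AMI
    # (assumes MaxSize already has headroom for the doubled capacity)
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=original_min * 2,
        DesiredCapacity=original_desired * 2,
    )

    # ... wait here for the new instances to come into service and pass
    # your health checks before carrying on ...

    # Step 2: scale back down - the oldest (vOld) instances are terminated first
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=original_min,
        DesiredCapacity=original_desired,
    )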

The one counter to this, which a colleague of mine brought up, is that you are explicitly depending on a specific feature of AWS always functioning in the same way and not changing in the future. An alternative might be to deploy in a blue-green setup with independent ELBs and instances. You then simply fail over using Route53, either all in one go or using weighted routing for a canary release process. Funnily enough, AWS released a white paper on exactly that subject a couple of months ago:
Blue/Green Deployments on AWS Whitepaper

They also cover the scale-up patching method in detail from page 17 of the whitepaper.
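
For what it’s worth, the Route53 side of that blue/green switch is only a handful of API calls. Below is a hedged boto3 sketch of a weighted record pair starting as a 90/10 canary split; the hosted zone ID, domain and ELB DNS names are all placeholders.

    import boto3

    route53 = boto3.client("route53")

    def set_weights(blue_weight, green_weight):
        """Point www.example.com at the blue and green ELBs with the given weights."""
        changes = []
        for label, target, weight in [
            ("blue", "blue-elb-123.eu-west-1.elb.amazonaws.com", blue_weight),
            ("green", "green-elb-456.eu-west-1.elb.amazonaws.com", green_weight),
        ]:
            changes.append({
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": label,      # distinguishes the weighted records
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target}],
                },
            })
        route53.change_resource_record_sets(
            HostedZoneId="Z1234567890",          # placeholder
            ChangeBatch={"Changes": changes},
        )

    # Canary: send ~10% of traffic to the new (green) stack
    set_weights(blue_weight=90, green_weight=10)
    # Once happy, cut over fully:
    # set_weights(blue_weight=0, green_weight=100)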

Brucie Bonus One – Deployment Dictionary

Incidentally, you can actually deploy said code without it going live immediately, by using methods called “Dark Launch Techniques”. As the name suggests, this separates code deployment from feature launches. You pre-release your code into production, but you simply don’t toggle it on for anyone (or everyone) at first. You can then either toggle it on for everyone or, even better, for smaller canary groups. Web-scale companies such as Netflix, Facebook and Google have been doing this for many years!

This process then completely avoids the panic-inducing impact of deploying a large new code release whilst simultaneously having that code go live and ramping up utilisation at the same time!
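
If you have never seen one, a feature toggle for a dark launch can be as dumb as a deterministic hash of the user ID compared against a rollout percentage, so the same user consistently sees (or doesn’t see) the feature. A hypothetical sketch, with made-up feature names:

    import hashlib

    ROLLOUT_PERCENTAGES = {
        "new-checkout-flow": 0,    # deployed but dark - nobody sees it yet
        "new-search": 10,          # canary - roughly 10% of users see it
    }

    def is_enabled(feature, user_id):
        """Deterministically bucket a user into 0-99 and compare to the rollout %."""
        percentage = ROLLOUT_PERCENTAGES.get(feature, 0)
        bucket = int(hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest(), 16) % 100
        return bucket < percentage

    # The code path ships with the release, but stays dormant until the
    # percentage is turned up - no redeploy needed to launch the feature.
    if is_enabled("new-checkout-flow", user_id="12345"):
        print("render new checkout")
    else:
        print("render old checkout")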

Combining dark launch methods with scale-up patching or blue/green deployments should lead to a few less grey hairs in the long run, that’s for sure!

For more info, see the following overview:
What is a dark launch in terms of continuous delivery of software?

Brucie Bonus Two – Environment Manager

Lastly, a bit of interesting news which also came from The Trainline is that they have open-sourced their own internal deployment tool, which they call Environment Manager.

With an AngularJS front end and a Node.js back end, it’s a home-grown continuous deployment tool which includes a self-service portal, REST APIs, and a number of operational governance features. The governance elements include a feature which prevents rogue developers from deploying anything which hasn’t already been defined in the central service catalogue.

The Trainline Environment Manager Architecture

You can check out Environment Manager on GitHub:
https://trainline.github.io/environment-manager

Want More AWS Tips and Gotchas?

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 10 – EFS (Elastic File System)

Amazon AWS Tips and Gotchas – Part 8 – AWS EC2 Reserved Instances

Continuing in this series of blog posts taking a bit of a “warts and all” view of a few Amazon AWS features, below are a handful more tips and gotchas when designing and implementing solutions on Amazon AWS, including AWS EC2 Reserved Instances.

For the first post in this series with a bit of background on where it all originated from, see here:
Amazon #AWS Tips and Gotchas – Part 1

For more posts in this series, see here:
Index of AWS Tips and Gotchas

AWS Tips and Gotchas – Part 8

Reserved Instances are a great way to save yourself some money for instances you know you will require for a significant period of time (from 12-36 months). One really cool fact which AWS don’t announce enough, in my opinion, is that reserved instances can actually be shared across consolidated billing accounts!

If you wanted to, you could purchase all of your reserved instances from your primary consolidated billing account; however, doing this has some potentially unexpected results:

  1. Reserved instances don’t just provide you with a better price, they also provide you with the guaranteed ability to spin up an instance of your chosen type, regardless of how busy the AZ in question actually is.
    If there is an AZ outage, other AWS customers will scramble to spin up additional instances in other AZs in the same region, either manually or via ASGs, and this has the potential to starve the compute resources for one or more instance types!
    Yes, that’s right, even AWS does not have infinite compute resources! By using reserved instances, you are still guaranteed to be able to run yours regardless of available capacity for on-demand instances. They are truly reserved.
    If, however, you centralise your reserved instances into your CB account, you will get the reservation pricing benefits at the top of the account tree, but you won’t get the capacity reservations, as these are account-specific.
  2. Reserved instances are specific to individual Availability Zones, so ensure you spread these evenly across your AZs to avoid wasting them (you are of course designing your apps to be resilient across AZs, right?) and to give you maximum reserved coverage in the unlikely event of a full AZ outage. There is a quick sketch after this list showing one way to check your current spread.
  3. And finally… Reserved instances are a commercial tool applied after-the-fact, not against a specific instance. When using consolidated billing for reserved instances, the reservations are therefore effectively split evenly across all accounts. If you actually want to report back to each business unit / account owner on their billing, including reserved instances, this could be tricky.
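
As promised above, a quick way to sanity-check how your active reservations are spread across AZs and instance types is to pull them back with boto3 and group them; a rough sketch, assuming your credentials and region are already configured:

    from collections import Counter
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Pull back only the reservations that are currently active
    reservations = ec2.describe_reserved_instances(
        Filters=[{"Name": "state", "Values": ["active"]}]
    )["ReservedInstances"]

    # Count reserved instances per AZ and instance type
    spread = Counter()
    for ri in reservations:
        az = ri.get("AvailabilityZone", "no AZ recorded")
        spread[(az, ri["InstanceType"])] += ri["InstanceCount"]

    for (az, instance_type), count in sorted(spread.items()):
        print(f"{az:<25} {instance_type:<12} {count}")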

Find more posts in this series here:
Index of AWS Tips and Gotchas

Amazon AWS Tips and Gotchas – Part 9 – Scale-Up Patching
