Tag Archive for containers

Does a Serverless Brexit mean goodbye to infrastructure management problems?

Last week I was able to get myself along to the London CloudCamp event at the Crypt on the Green, for an evening with the theme of “We’ve done cloud, what’s next?”. For those of you unfamiliar with the event, CloudCamp is an “unconference” where early adopters of Cloud Computing technologies exchange ideas. As you can probably guess from the theme, many of the discussions were around the concept of “serverless” computing.

So, other than being something which seems to freak out my spell check function, what is “serverless” then?

I think Paul Johnston of movivo summed it up well, as “scaling a single function / object in your code instead of an entire app”, which effectively means a microservices architecture. In practical terms, it’s really just another form of PaaS, where you upload your code to a provider (such as AWS Lambda), and they take care of managing all of the underlying infrastructure including compute, load balancing, scaling, etc, on your behalf.

The instances then simply act upon events (i.e. they are event driven), which could be anything from an item hitting a queue, to a user requesting a web page, and when not required, they are not running. AWS currently supports a limited subset of languages, specifically Node.js, Java, and Python.
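
To make this concrete, here is a minimal sketch of what one of these event-driven functions might look like in Python. The (event, context) handler signature is AWS Lambda’s convention, but the event contents below are purely illustrative:

#!/usr/bin/python
# Minimal Lambda-style handler: no server to manage, just a function
# which wakes up when an event arrives and goes away again afterwards.
def handler(event, context):
    # 'event' carries whatever triggered us (a queue item, web request, etc.)
    message = event.get("body", "no payload")
    return {"status": "processed", "echo": message}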


There are of course other vendors who provide similar platforms, including Google Cloud Functions, IBM Bluemix OpenWhisk, etc. They tend to support a similarly small pool of languages, however some are more agnostic and will even allow you to upload Docker containers as well. Iron.io also allows you to do serverless using your own servers, which seems a bit of an oxymoron! 🙂

Anyway, the cool thing about serverless is that you can therefore “vote to leave” your managed or IaaS infrastructure (yes, I know, seriously tenuous connection!), and just concentrate on writing your applications. This is superb for developers who don’t necessarily have the skills or the time to manage an IaaS platform once it has been deployed.

Serverless Introduction - Tenuous doesn't even come close!

The Case for Remain

Much like the Brexit vote however, it does come with some considerations and challenges, and you may not get exactly what you expected when you went to the polling booth! For example:

  • You may believe you are now running alone, but you are ultimately still dependent on actual servers! However, you no longer have access to those servers, so basic things like logging and performance monitoring suddenly become a lot trickier.
  • Taking this a step further, testing and troubleshooting becomes more challenging. When a fault occurs, how can you trace exactly where it occurred? This is further exacerbated if you are integrating with other SaaS and PaaS platforms, such as Auth0 (IAM), Firebase (DB), etc. This is already a very common architectural pattern for serverless designs.
    You therefore need to start introducing centralised logging and error trapping systems which will allow you to see what’s actually going on, which of course sounds a lot like infrastructure management again (there’s a rough sketch of this after the list)!
  • It’s still early days for serverless, so things like documentation and support are a lot more scarce. If you plan to be an early serverless adopter, you had better know your technical onions!
  • As with any microservices architecture, with great flexibility, comes great complexity! Instead of managing just a handful of interacting services, you could now be managing many hundreds of individual functions. You can understand each piece easily, but looking at the big picture is not so simple!
  • Another level of complexity is in billing of course. Serverless services such as AWS Lambda charge you per 100ms of compute time, and per 1 million requests. If you are paying for a server and some storage, even in a cloud computing model, it’s reasonably easy to understand how much your bill will be at the end of the month.
    Paying for transactions and processing time, however, could potentially provide a few nasty surprises, especially if you come under heavy load or even a DoS attack.
  • Finally, the biggest and most obvious concern about serverless is vendor lock-in. Indeed this is potentially the ultimate lock-in as once you pick a vendor and write your application specific to their cloud, moving that bad boy is going to mean some major refactoring and re-writes!
    As long as that vendor’s pricing is competitive, this shouldn’t matter too much (after all, every single vendor locks you in to some degree), but if that vendor manages to take the lion’s share of the market they could easily change that pricing, and you are almost powerless to react (at least not without significant additional investment).
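
As a rough sketch of that logging point above: since you can no longer SSH onto a box and tail a log file, a common pattern is to emit structured (JSON) logs to stdout and let the platform ship them to a central service (CloudWatch Logs, in AWS Lambda’s case). The function name and fields below are invented for illustration:

import json
import time

# Emit structured JSON to stdout; serverless platforms such as Lambda
# collect stdout/stderr centrally, so this is your "log file" now.
def log_event(function_name, level, message, **fields):
    record = {"ts": time.time(), "fn": function_name,
              "level": level, "msg": message}
    record.update(fields)
    print(json.dumps(record))

log_event("resize-image", "ERROR", "upstream auth call failed",
          request_id="abc-123", upstream="Auth0")
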
The Case for Leave

If you understand and mitigate (or ignore!) the above however, serverless can be quite a compelling use case. For example:

  • From an environmental perspective, you will probably never find a more efficient or greener computing paradigm. It minimises the number of extraneous operating systems, virtual or physical machines required, as this is truly multi-tenant computing. Every serverless host could undoubtedly be run at 70-90% utilisation, rather than the 10-50% you typically see in most enterprise DCs today! If you could take every workload in the world and switch it to serverless overnight, based on those efficiency levels, how many data centres, how much power and how many thousands of tonnes of metals could you save? Greenpeace should be refactoring their website as we speak!
  • Although you do have to introduce a number of tools to help you track what is actually going on with your environment, you can move away from doing a whole load of the mundane management tasks such as patching, OS management etc, and move up the stack to spend your resources on more productive and creative activities: actually adding business value (Crazy idea! I thought in IT we just liked patching for a living?)!
  • The VM sprawl we have today would be reduced as workloads are rationalised. That said, you just end up replacing it with container or function sprawl, which is even harder to manage! 🙂
  • You gain potentially massive scalability for your applications. Instead of scaling entire applications, you just scale the bottleneck functions, which means your application becomes more efficient overall. Definitely time to read The Goal by Goldratt and understand the Theory of Constraints before you go down this route!
  • Finally you can potentially see significant cost savings. If there are no requests, then there is no charge! If you were running some form of event driven application or trigger, instead of paying tens or hundreds of pounds per month for a server, you might only be paying pennies! Equate this to dev/test platforms which might only need to run workloads for a few hours a day, or production platforms which only need to process transactions when customers are actually online, and it really starts to add up, even more than auto-scaling IaaS platforms (there’s a rough cost sketch after this list).
    Taking that a step further, if you are running a startup, why pay hundreds or thousands a month for compute you “might” need but which often sits idle, rather than throwing your functions into a scalable platform which will only charge you for actual use? I know where I would be putting my money if I were a VC…
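
As a back-of-an-envelope example of that last point, using AWS Lambda-style prices ($0.20 per million requests and roughly $0.00001667 per GB-second at the time of writing; treat these as illustrative and check the current rate card), a lightly-used event-driven workload really does cost pennies:

# Back-of-an-envelope cost model; prices are illustrative only.
PRICE_PER_MILLION_REQS = 0.20      # Lambda-style request charge ($)
PRICE_PER_GB_SECOND = 0.00001667   # Lambda-style compute charge ($)

def monthly_function_cost(requests, avg_ms, memory_gb):
    # compute time billed per GB-second, plus a flat per-request fee
    compute = requests * (avg_ms / 1000.0) * memory_gb * PRICE_PER_GB_SECOND
    return compute + (requests / 1000000.0) * PRICE_PER_MILLION_REQS

# A dev/test app: 100k requests/month, 200ms per call, 512MB of RAM
print(monthly_function_cost(100000, 200, 0.5))  # ~ $0.19 per month!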

Serverless Computing is hot!

Closing Thoughts

Serverless is a really interesting technology move for the industry which (as always) comes with its own unique set of benefits and challenges. I can’t see it ever becoming the de facto standard for everything (for the same reasons we still use mainframes and physical servers today), however there are plenty of brilliant use cases for it. If devs and startups are comfortable with the vendor lock-in and other risks, why wouldn’t they consider using it?

StorageOS – An array based on containers? It’s like storage for millennials!

Last week I managed to catch up with the guys from StorageOS, a new container-based storage company, headquartered in London. I found out about them at a London Storage Beers event a few weeks ago, and my first question was, what the hell is container-based storage, and how does it work?!

They started from the premise (yes that’s actually the correct use of the word premise!), that if you want to build a storage system FOR containers, what better way to do it than to build it FROM containers. StorageOS therefore offer what they describe as “full enterprise storage array functionality, delivered by software, on a pay-as-you-go basis”. They also plan to offer a free-forever Developer tier, which includes everything except HA functionality which you would obviously need for production usage!

StorageOS Announcement

So the good news is, today (Monday 20th June 2016) StorageOS are announcing the release of their Beta at DockerCon, so you can now download and test out their new storage platform.

The StorageOS Stack


You can deploy this StorageOS software anywhere from bare metal to containers:

It’s software, so it runs anywhere!

Appliances for some of the larger clouds are in the works, but will not be available on day zero.

They can then consume any back-end storage, from SSDs, HDDs and virtual drives, to EBS volumes, object stores, etc. You then pool all of the capacity from all devices into a single capacity pool, which is deduped, encrypted, and available across all nodes, and carve out volumes to present to systems like Docker through their own native Docker driver, or (slightly oddly) iSCSI / FC!!! They even have VAAI support in development!
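
As a sketch of what consuming that through the native Docker driver might look like (using the Docker SDK for Python; the driver name “storageos” and the size option are my assumptions from the briefing, so check their docs for the real details):

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Create a volume backed by the (assumed) StorageOS volume driver;
# the driver name and options here are assumptions, not gospel.
vol = client.volumes.create(name="appdata", driver="storageos",
                            driver_opts={"size": "10"})

# Then mount it into a container like any other Docker volume
client.containers.run("ubuntu:14.04", "df -h /data",
                      volumes={"appdata": {"bind": "/data", "mode": "rw"}},
                      remove=True)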

Overall, I think it’s a pretty interesting product. At first look it feels a bit like a traditional array in a container package, much like if you containerised an enterprise app and then just utilised it as a traditional array with some container plugins, instead of being very targeted and container-specific. StorageOS do have an OS driver to let you mount their volumes directly from containers, but there are other things out there today which do that anyway (e.g. Flocker).

I would say their messaging is a little inconsistent at the moment, and adding things like FC integration early on feels a bit odd if they’re positioning themselves as a container play. They do however state clearly that they’re targeting enterprises and want to make the on-boarding process as simple and frictionless as possible. I do worry that this “all things to all people” approach could be a wee bit risky at this early stage, and being more laser focused in the short to medium term would allow them to differentiate more.

StorageOS Cloud

The founders were very specific when they stated that they were building a clustered array with synchronous remote replicas, not a distributed storage array. Async replication is coming, which will be critical to maintaining performance in a hybrid cloud or multi-cloud setup. I really like the fact that you can stretch the same hybrid storage environment between your on-premises and cloud infrastructure using a single storage solution. This same solution can actually be used to span multiple public clouds as well, providing a resilient storage solution between say AWS and Azure, all of which is deduped and encrypted of course! This could be very interesting indeed, as customers look to protect their workloads from large public outages!

Finally, the StorageOS software is built (as you would expect these days) with APIs at the heart of everything. Even the modern GUI is really just based on API calls to the back end.
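
Which should mean that anything you can click, you can script. Purely as a hypothetical illustration (the endpoint, port and JSON shape below are invented for this sketch, not StorageOS’s actual API):

import requests

# Hypothetical REST call: if the GUI is just API calls to the back
# end, the same operations should be scriptable. Endpoint and field
# names here are invented placeholders.
API = "http://storageos-node:5705/v1"

for vol in requests.get(API + "/volumes").json():
    print(vol["name"], vol["size"])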

The Tekhead Take

Anyway, enough gabbing… It’s still early days, but the storage experience of the founders is certainly solid! Who better than ex-storage admins to provide a product that works well for storage admins?! I’d say there’s a good chance of this becoming a pretty cool product in the future, so definitely one to watch!

You can find a link to their website and beta sign up here:
http://storageos.com/index.php/product/

StorageOS hipster-approved storage

Docker Part 3 – HOWTO Create a Simple Python Web App in Docker


If you’ve been following this series (last part here), we now have docker installed, but what do we do next? Create our first containers of course!

I think we need to make it a bit more interesting though, as just creating containers is a bit meaningless; in real life we’re actually going to do something with them. The scenario is that we want a few copies of our simple Python web application. To achieve this we need to use a few simple docker commands:

  • Create a new container
  • Install an application inside of it
  • Store it as an image
  • Duplicate it more than once and make these available to other clients
  • Test each instance to ensure they are unique and accessible

The good thing here is that all of the above steps are repeatable with whatever application you wish to install inside your containers. This is just a simple way to help get your head around the concepts and commands.

We start by creating our first empty Ubuntu container. The -i flag connects us to the shell of the container (interactive), and -t allocates a terminal for us.

$ sudo docker run -i -t --name="firstcontainer" ubuntu:14.04 /bin/bash


Then in this case we need to install the Python and web.py dependencies INSIDE of the container. This could be modified for any required dependencies or apps.

$ apt-get update
$ apt-get install -y python python-webpy


Within the container, create a new Python script (note there’s no need for sudo here, as we’re already root inside the container):

$ mkdir /home/test1
$ vi /home/test1/app.py


The contents of the script are:

#!/usr/bin/python
import web, sys

# URL routing: map the root URL to the 'index' class below
urls = (
    '/', 'index'
)

class index:
    def GET(self):
        # web.py consumes sys.argv[1] as the listening port, so our
        # custom message arrives as the second command line argument
        argumentone = sys.argv[2]
        greeting = "Hello World, the test message is " + argumentone
        return greeting

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()


Exit the container, back to the Native OS:

$ exit


Confirm the name of your container (the last container run):

$ sudo docker ps -l
 CONTAINER ID   IMAGE          COMMAND     CREATED          STATUS   PORTS   NAMES
 f711ff0fd695   ubuntu:14.04   /bin/bash   32 minutes ago   Exit 0           firstcontainer


Create a new image from your container, called testpython with tag 0.1:

$ sudo docker commit firstcontainer testpython:0.1


Confirm you can see the image and get the image ID:

$ sudo docker images
 REPOSITORY   TAG   IMAGE ID       CREATED         VIRTUAL SIZE
 testpython   0.1   fcb365f7591b   2 minutes ago   247.8 MB


Finally, start up 3 instances of your web application:

$ sudo docker run -d -p 8081:8081 fcb365f7591b python /home/test1/app.py 8081 "instance1"
$ sudo docker run -d -p 8082:8082 fcb365f7591b python /home/test1/app.py 8082 "instance2"
$ sudo docker run -d -p 8083:8083 fcb365f7591b python /home/test1/app.py 8083 "instance3"


Open a browser on your network and connect to http://dockerserverip:8081
Try the same for the other two port numbers. Note we now have a system running 3 separate containers which could then be load balanced using a third party tool, or even run completely different content. Cool huh?
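
If you’d rather test from the command line, a quick Python 3 script run from any machine on your network will hit all three instances and show their unique messages (substitute your Docker host’s real IP for the placeholder):

from urllib.request import urlopen  # Python 3

DOCKER_HOST_IP = "dockerserverip"  # placeholder: substitute your host's IP

# Each instance should return its own unique message
for port in (8081, 8082, 8083):
    url = "http://%s:%d/" % (DOCKER_HOST_IP, port)
    print(url, "->", urlopen(url).read().decode())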

Next, how to mount a drive into your container…

VMworld Europe 2015 Day Three Roundup

Day three was quite simply Cloud Native Apps day for me!

I began in the morning with an internal partner briefing with some of the guys in the CNA team. Needless to say this was really interesting and for me it was a total nerdgasm! I did get a real sense that VMware are certainly not planning to get left behind in this new era, in fact far from it as some of their future plans will push the boundaries of what is already bleeding edge today. For the Pratchett fans amongst you, I would suggest that we are indeed living in Interesting Times!

Immediately following this I legged it down to Hall 8 for the CNA panel session, hosted by VMware CTO Joe Baguley, and featuring some regular faces from the London VMUG including Robbie Jerrom and Andy Jenkins. One of the interesting discussions which came up was about DevOps. DevOps is a nice vision, but developers today understand code; point them at a faulty storage array and they will look at you blankly… There is a skills gap there!

If the entire world is expected to become more DevOps focussed, Infrastructure will have to become a hell of a lot easier, or everything will need to just move to the public cloud. The reverse holds true of course, point most infra guys at something much more complex than a PowerShell / Bash / Perl script and you’re asking for trouble.

A true DevOps culture will require people with a very particular set of skills. Skills they have acquired over a very long career. Skills that make them a nightmare for… (ok I’ll stop now!).

Next was a wee session on the performance of Docker on vSphere. This actually turned out to be a stats fest, comparing the relative performance of Docker running on native tin versus virtualised. The TLDR for the session was that running Docker in a VM adds minimal overhead for most things. There is slightly more impact on network latency than on other resources, but depending on the scale-out nature of the solution it can actually perform better than native due to optimal NUMA scheduling.

Consider requirements over performance when looking at how to roll out your container platform. If you are running to performance margins of sub 5-10% on any resource then you have under-designed your infrastructure!

The final session of the day (INF5229) was actually probably my favourite of the whole week. If this is released on youtube I recommend you catch it above any other session! Ben Corrie (Lead Engineer on Project Bonneville) took us through a clear and detailed explanation of the differences between running Docker on Linux inside of a standard VM compared to running vSphere Integrated Containers and Photon.

After a quick overview of some of the basics, Ben then proceeded to do several live demos using a one day old build, inside of his Mac Mini test lab (with the appropriate nod given to Mr William Lam of course)! I’m convinced he must have slaughtered many small animals to the gods of the Demos, as the whole thing went off without a hitch! Perhaps Bill Gates could have done with his help back in 1998!

Most importantly, Ben showed that via the use of vSphere Integrated Containers, you are no longer limited to simply containerising Linux; the same process can be applied to virtually any OS, with his example being MS-DOS running Doom in a container!!! When cloning Windows VMs, the same technology demonstrated last year will be used, which enables it to generate a new SID and do a domain join almost instantly.

It’s also worth noting that this is not based on the notoriously compromised TPS, and is all new code. Whether that makes it more secure of course, is anyone’s guess! 🙂

MS-DOS Container under Docker and VIC, running Doom!

Once the sessions were all done for the day I wandered down to the Solutions Exchange for the annual “Hall Crawl”, where I was admiring Atlantis Computing CTO Ruben Spruijt’s Intel NUC homelab, running in a hyper converged configuration. The only negative I would suggest is that his case is the wrong way round!


The day finished off with the VMworld party, and a great performance from Faithless on the main stage. As a Brit, this was a great choice, but I did see a few confused faces from many of our EU counterparts, at least until Insomnia started playing!

Day Three QotD

Robbie Jerrom produced Quote of the Day for me on the CNA panel (which was where my Quote of the Event came from, but more of that later). It is very simple but succinct in getting across a relatively complex subject:

A micro service does one thing, really well.

 
