Tag Archive for docker

Docker Part 3 – HOWTO Create a Simple Python Web App in Docker

If you’ve been following this series (last part here), we now have docker installed, but what do we do next? Create our first containers of course!

I think we need to make it a bit more interesting though, as just creating containers is a bit meaningless; in real life we're actually going to do something with them. The scenario is that we want a few copies of a simple Python web application. To achieve this we need to use a few simple docker commands:

  • Create a new container
  • Install an application inside of it
  • Store it as an image
  • Duplicate it more than once and make these available to other clients
  • Test each instance to ensure they are unique and accessible

The good thing here is that all of the above steps are repeatable with whatever application you wish to install inside your containers. This is just a simple way to help get your head around the concepts and commands.

We start by creating our first empty Ubuntu container. The -i flag keeps STDIN open (interactive) and -t allocates a pseudo-terminal, which together drop us into the container's shell.

$ sudo docker run -i -t --name="firstcontainer" ubuntu:14.04 /bin/bash

Then, in this case, we need to install the Python and web.py dependencies INSIDE the container. This could be modified for any required dependencies or apps.

$ apt-get update
$ apt-get install -y python python-webpy

Still within the container (where we are already root, so no sudo is needed), create a new Python script:

$ mkdir /home/test1
$ vi /home/test1/app.py

The contents of the script are:

import web, sys

urls = (
    '/', 'index'
)

class index:
    def GET(self):
        argumentone = sys.argv[2]
        greeting = "Hello World, the test message is " + argumentone
        return greeting

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()
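Note how the script picks up its message: web.py's app.run() consumes sys.argv[1] as the port to listen on, which leaves sys.argv[2] free for our custom greeting text. A minimal sketch of that argument handling (the parse_args helper here is purely illustrative, not part of web.py):

```python
import sys

def parse_args(argv):
    # web.py's app.run() treats argv[1] as the listen port,
    # so our own message ends up in argv[2].
    port = int(argv[1]) if len(argv) > 1 else 8080
    message = argv[2] if len(argv) > 2 else ""
    return port, message

# e.g. invoked as: python app.py 8081 "instance1"
print(parse_args(["app.py", "8081", "instance1"]))  # (8081, 'instance1')
```

This is why each container below is launched with a port number followed by an instance name.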

Exit the container to drop back to the host OS:

$ exit

Confirm the name of your container (the last container run):

$ sudo docker ps -l
 f711ff0fd695 ubuntu:14.04 /bin/bash 32 minutes ago Exit 0 firstcontainer

Create a new image from your container, named testpython with tag 0.1:

$ sudo docker commit firstcontainer testpython:0.1

Confirm you can see the image and get the image ID:

$ sudo docker images
 testpython 0.1 fcb365f7591b 2 minutes ago 247.8 MB

Finally, start up three instances of your web application, each mapping a unique host port and passing its port number and a unique message as arguments:

$ sudo docker run -d -p 8081:8081 fcb365f7591b python /home/test1/app.py 8081 "instance1"
$ sudo docker run -d -p 8082:8082 fcb365f7591b python /home/test1/app.py 8082 "instance2"
$ sudo docker run -d -p 8083:8083 fcb365f7591b python /home/test1/app.py 8083 "instance3"

Open a browser on your network and connect to http://dockerserverip:8081 (substituting your Docker host's IP address).
Try the same for the other two port numbers. Note we now have a system running three separate containers, which could then be load balanced using a third-party tool, or even serve completely different content. Cool huh?
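If you'd rather check all three from the command line than a browser, a quick loop does the job too (a sketch; "dockerserverip" is a placeholder for your Docker host's address):

```shell
# Build the URL for each instance; "dockerserverip" is a placeholder host.
host="dockerserverip"
for port in 8081 8082 8083; do
  url="http://${host}:${port}/"
  echo "Checking ${url}"
  # Uncomment to actually fetch the greeting from each container:
  # curl -s "${url}"
done
```

Each instance should return its own message, confirming the containers are unique and independently accessible.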

Next, how to mount a drive into your container…

Installing Docker on Amazon AMI Quick Fix

I was installing and playing with Docker on an AWS EC2 instance this evening, using the default Amazon AMI [specifically Amazon Linux AMI 2015.09.1 (HVM)], and came across a stupidly simple issue.

Docker containers would not start and were showing the following error:

Cannot connect to the Docker daemon. Is 'docker -d' running on this host?

Checking processes I don’t see docker running:

$ ps -ef | grep docker
ec2-user  2518  2485  0 22:20 pts/0    00:00:00 grep --color=auto docker

After looking at a similar issue I had with Ubuntu a year or so ago, I realised (duh!) the Docker service was simply not running, even though it had installed fine.

A quick start of the service fixed this:

$ sudo service docker start
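To make sure the daemon also comes back up after a reboot, Amazon Linux of this vintage is sysvinit-based, so chkconfig should do the trick (a sketch; the "|| true" guards just keep it safe to dry-run on hosts without Docker or sudo):

```shell
# Start the Docker daemon now, then register it to start at boot
# (Amazon Linux 2015.x uses sysvinit, hence service + chkconfig).
sudo service docker start || true
sudo chkconfig docker on || true
```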

And now…

$ ps -ef | grep docker
root      7119     1  1 22:30 pts/0    00:00:07 /usr/bin/docker daemon --default-ulimit nofile=1024:4096
ec2-user  7539  2429  0 22:41 pts/0    00:00:00 grep --color=auto docker
$ sudo docker info
Containers: 1
Images: 4
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-202:1-263816-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 291.9 MB
 Data Space Total: 107.4 GB
 Data Space Available: 6.695 GB
 Metadata Space Used: 892.9 kB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.147 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.10-17.31.amzn1.x86_64
Operating System: Amazon Linux AMI 2015.09
CPUs: 1
Total Memory: 995.6 MiB


This has now continued to work fine through reboots, so hopefully this saves someone spending more than a few seconds troubleshooting!

Top 5 Posts of 2015 on Tekhead.org

This is just a very quick note to say thank you, everyone, for your awesome support and continued readership over the past year! Without that I don’t know that I would put in the effort!

I very much hope that the content produced continues to be of some use in the coming year…

Moving swiftly on to the Top 5 most popular posts of 2015; they were as follows:

  1. My Synology DSM Blue LED issue was actually just a failed drive!
  2. NanoLab – Running VMware vSphere on Intel NUC – Part 1
  3. Installing Docker on Ubuntu Quick Fix
  4. Docker Part 1 – Introduction and HOWTO Install Docker on Ubuntu 14.04 LTS
  5. NanoLab – Running VMware vSphere on Intel NUC – Part 2

I can’t say I’m surprised with the Synology post popularity, as there are far more Synology users out there than virtualisation and storage admins I should think… not 100% my usual reader demographic. 😉

The Nanolab series continues in popularity, which warms my cockles! They make for an awesome homelab, and I have a handful of posts almost ready to go for 2016 to continue this series.

Finally, and most interestingly considering current industry trends, the Docker HOWTO series has definitely proven very popular, even though so far I have stuck to the absolute basics! I will definitely endeavour to expand on this series throughout this year.

So that’s it for now, just a quick one. I hope you all had an awesome New Year (I spent mine this year watching Star Wars: The Force Awakens woohoo!) and wish you all the best for the exciting things to come in 2016!

VMworld Europe 2015 Day Three Roundup

Day three was quite simply Cloud Native Apps day for me!

I began in the morning with an internal partner briefing with some of the guys in the CNA team. Needless to say this was really interesting and for me it was a total nerdgasm! I did get a real sense that VMware are certainly not planning to get left behind in this new era, in fact far from it as some of their future plans will push the boundaries of what is already bleeding edge today. For the Pratchett fans amongst you, I would suggest that we are indeed living in Interesting Times!

Immediately following this I legged it down to Hall 8 for the CNA panel session, hosted by VMware CTO Joe Baguley and featuring some regular faces from the London VMUG, including Robbie Jerrom and Andy Jenkins. One of the interesting discussions which came up was about DevOps. DevOps is a nice vision, but developers today understand code; point them at a faulty storage array and they will look at you blankly… There is a skills gap there!

If the entire world is expected to become more DevOps focussed, Infrastructure will have to become a hell of a lot easier, or everything will need to just move to the public cloud. The reverse holds true of course, point most infra guys at something much more complex than a PowerShell / Bash / Perl script and you’re asking for trouble.

A true DevOps culture will require people with a very particular set of skills. Skills they have acquired over a very long career. Skills that make them a nightmare for… (ok I’ll stop now!).

Next was a wee session on the performance of Docker on vSphere. This actually turned out to be a stats fest, comparing the relative performance of Docker running on native tin and virtualised. The TLDR for the session was that running docker in a VM provides a minimal overhead to most things. Slightly more impact on network latency than other resources, but depending on the scale out nature of the solution it can actually perform better than native due to optimal NUMA scheduling.

Consider requirements over performance when looking at how to roll out your container platform. If you are running to performance margins of sub 5-10% on any resource then you have under-designed your infrastructure!

The final session of the day (INF5229) was actually probably my favourite of the whole week. If this is released on youtube I recommend you catch it above any other session! Ben Corrie (Lead Engineer on Project Bonneville) took us through a clear and detailed explanation of the differences between running Docker on Linux inside of a standard VM compared to running vSphere Integrated Containers and Photon.

After a quick overview of some of the basics, Ben then proceeded to do several live demos using a one-day-old build, inside of his Mac Mini test lab (with the appropriate nod given to Mr William Lam of course)! I'm convinced he must have slaughtered many small animals to the gods of the Demos, as the whole thing went off without a hitch! Perhaps Bill Gates could have done with his help back in 1998!

Most importantly, Ben showed that via the use of vSphere Integrated Containers, you are no longer limited to simply containerising Linux, and the same process can be applied to virtually any OS, with his example being MS-DOS running Doom in a container!!! When cloning Windows VMs, the same technology will be used as last year, which enables the ability to generate a new SID and do a domain join almost instantly.

It’s also worth noting that this is not based on the notoriously compromised TPS, and is all new code. Whether that makes it more secure of course, is anyone’s guess! 🙂

MS-DOS Container under Docker and VIC, running Doom!

Once the sessions were all done for the day I wandered down to the Solutions Exchange for the annual “Hall Crawl”, where I was admiring Atlantis Computing CTO Ruben Spruijt’s Intel NUC homelab, running in a hyper-converged configuration. The only negative I would suggest is that his case is the wrong way round!


The day finished off with the VMworld party, and a great performance from Faithless on the main stage. As a Brit, this was a great choice, but I did see a few confused faces from many of our EU counterparts, at least until Insomnia started playing!

Day Three QotD

Robbie Jerrom produced Quote of the Day for me on the CNA panel (which was where my Quote of the Event came from, but more of that later). It is very simple but succinct in getting across a relatively complex subject:

A micro service does one thing, really well.
