Tag Archive for HOWTO

Guide to OpenStack for VMware and AWS Admins – Part 1 – Intro

As a newcomer to the OpenStack world, with quite a bit of VMware and some AWS experience, I thought it would be worthwhile documenting some of the basics as I learn. Hopefully this will provide something useful for others with a background in either technology who choose to follow the same path in the future. In many ways, this is planned to be as much to solidify my understanding as anything else!

Also, it’s probably worth noting that I may express some opinions throughout this series as to where one technology may suit specific workloads better than another. This certainly does not constitute me expressing a preference for one or another! I’m lucky enough to have a day job where I work with a huge range of great technologies; each has their own place in today’s enterprises. The technology should fit the use case – Technology Agnosticism FTW! 🙂

The Basics

Firstly, a few key basics and observations!

  • OpenStack is a collection of different tools and technologies, most of which are entirely interchangeable and/or optional. For example, you could choose to use any of a huge number of hypervisors, such as KVM, Xen, VMware, etc., each of which will have its own pros and cons. I will try to dig into one or two of these in a later post on Nova.
  • OpenStack is quite a complex beast, and most certainly not a simple monolithic stack. Within each of the separate elements of OpenStack (known as “OpenStack services”) there are actually multiple independent processes, all of which do different tasks within their specific service. Here is a quick conceptual diagram which describes a typical solution and all of the interconnecting services:
    OpenStack Conceptual Architecture
  • Taking that a stage further, there is also a far more complex diagram which shows all of the logical processes in a typical architecture as well!
    OpenStack Logical Architecture
  • Keystone is the most critical service of all, as it is the glue which binds all other OpenStack Services together. As you add more services into your stack, they all register back to Keystone to provide service discovery, API client authentication and a number of other functions. The closest equivalent in vSphere is the PSC. In AWS it would be IAM, but IAM is mainly about the permissions and security elements only, compared to the broad set of functions Keystone provides.
  • If you are an AWS developer and don’t want to have to re-learn or re-write all of your API calls for your software, you don’t have to! You could use HPE Helion Eucalyptus, which provides an AWS-compatible API for your private cloud – that’s pretty cool!
  • OpenStack is still being developed at a huge rate of knots! The releases come out every 6 months and are named alphabetically. We are already at M (Mitaka), with N (Newton) coming out imminently! It’s definitely getting pretty mature as a platform, and I suspect that’s probably why many more enterprises are being quite vocal about looking at it for their private clouds these days.
Building OpenStack

The control plane and proxy services can all be run as containers. A typical highly-scalable design pattern is therefore a set of physical hosts running containers for all management / API / control processes. You then add one or more separate compute and storage clusters based on your scalability and resilience requirements. For a test lab, you can collapse these onto as little as a single physical host if you use nested instances.

In fact, it will even install in as little as 8GB of RAM, as Eric Wright described in his blog post here about installing on top of OSX. This was based on the 2nd Edition of the awesome OpenStack Cloud Cookbook from Kev Jackson and Cody Bunch. I also did a recent review of the book, for those who are interested.

Vagrant is an excellent way to help get started quickly as it will pull down images and spin up machines very quickly, with minimal effort. It supports multiple environments from VirtualBox and VMware to Docker and even AWS.
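As a sketch of how simple this can be, a minimal Vagrantfile for a single lab VM might look something like the following. The box name, hostname and sizing here are illustrative assumptions, not taken from any particular guide:

```ruby
# Minimal illustrative Vagrantfile: one Ubuntu VM sized for a small
# all-in-one OpenStack lab. Box name and resources are assumptions;
# adjust to suit your environment.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.hostname = "openstack-lab"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 8192   # in line with the ~8GB minimum mentioned above
    vb.cpus = 2
  end
end
```

A `vagrant up` from the directory containing this file would then download the box (if not cached) and boot the VM, ready for an OpenStack install.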

The fact that OpenStack is designed from the ground up with automation in mind means you can do some really amazing stuff with it. For example, the other day I was at a presentation where my colleague @the_cloudguru deployed a development stack on his laptop using just 3 lines of OpenStack Ansible code! Very impressive!

Closing Thoughts

I’m still really early in my OpenStack learning journey, but as my knowledge builds I will expand on this series. If you do see any errors in the information in this series, please don’t hesitate to let me know!

Docker Part 3 – HOWTO Create a Simple Python Web App in Docker

Docker Logo

If you’ve been following this series (last part here), we now have docker installed, but what do we do next? Create our first containers of course!

I think we need to make it a bit more interesting, though, as just creating containers is a bit meaningless; in real life we’re actually going to do something with them. The scenario is that we want a few copies of our simple Python web application. To achieve this we need to use a few simple docker commands to:

  • Create a new container
  • Install an application inside of it
  • Store it as an image
  • Duplicate it more than once and make these available to other clients
  • Test each instance to ensure they are unique and accessible

The good thing here is that all of the above steps are repeatable with whatever application you wish to install inside your containers. This is just a simple way to help get your head around the concepts and commands.
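As an aside, once the manual steps are understood, the same build can also be captured declaratively in a Dockerfile. Here's a hedged sketch using the same paths and packages as this walkthrough (the ENTRYPOINT arguments are an assumption based on how the app reads its command line):

```dockerfile
# Illustrative Dockerfile capturing the same steps as the manual
# walkthrough below (paths and packages are assumptions).
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python python-webpy
COPY app.py /home/test1/app.py
# Port and message are supplied at "docker run" time, e.g.:
#   docker run -d -p 8081:8081 <image> 8081 "instance1"
ENTRYPOINT ["python", "/home/test1/app.py"]
```

A `docker build -t testpython:0.1 .` would then replace the interactive install-and-commit cycle, but doing it by hand first is a great way to learn what's going on.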

We start by creating our first empty Ubuntu container. The -i flag keeps STDIN open and -t allocates a pseudo-terminal; together they drop us into an interactive shell inside the container.

$ sudo docker run -i -t --name="firstcontainer" ubuntu:14.04 /bin/bash

Then in this case we need to install the Python and web.py dependencies INSIDE of the container, along with a text editor (the base image doesn’t ship with one). This could be modified for any required dependencies or apps.

$ apt-get update
$ apt-get install -y python python-webpy vim

Within the container, create a new Python script (we’re already root inside the container, so no sudo is needed):

$ mkdir /home/test1
$ vim /home/test1/app.py

The contents of the script are:

import web, sys

urls = (
    '/', 'index'
)

class index:
    def GET(self):
        # The second CLI argument is our test message; web.py itself
        # consumes sys.argv[1] as the port to listen on.
        argumentone = sys.argv[2]
        greeting = "Hello World, the test message is " + argumentone
        return greeting

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()

Exit the container, back to the Native OS:

$ exit

Confirm the name of your container (the last container run):

$ sudo docker ps -l
 f711ff0fd695 ubuntu:14.04 /bin/bash 32 minutes ago Exit 0 firstcontainer

Create a new image from your container, tagged testpython:0.1:

$ sudo docker commit firstcontainer testpython:0.1

Confirm you can see the image and get the image ID:

$ sudo docker images
 testpython 0.1 fcb365f7591b 2 minutes ago 247.8 MB

Finally, start up 3 instances of your web application:

$ sudo docker run -d -p 8081:8081 fcb365f7591b python /home/test1/app.py 8081 "instance1"
$ sudo docker run -d -p 8082:8082 fcb365f7591b python /home/test1/app.py 8082 "instance2"
$ sudo docker run -d -p 8083:8083 fcb365f7591b python /home/test1/app.py 8083 "instance3"

Open a browser on your network and connect to http://dockerserverip:8081
Try the same for the other two port numbers. Note we now have a system running 3 separate containers, which could then be load balanced using a third-party tool, or even serve completely different content. Cool huh?
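If you'd rather script the check than click around in a browser, something like the following sketch will poll each instance from the host. The hostname and ports are the ones assumed above; adjust as needed:

```python
import urllib.request


def check_instances(ports, host="localhost"):
    """Fetch the greeting from each mapped port; failures become error strings."""
    results = {}
    for port in ports:
        url = "http://%s:%d/" % (host, port)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                results[port] = resp.read().decode()
        except OSError as exc:
            # Connection refused, timeout, etc.
            results[port] = "error: %s" % exc
    return results


if __name__ == "__main__":
    for port, body in sorted(check_instances([8081, 8082, 8083]).items()):
        print(port, body)
```

Each instance should return a different greeting, confirming the containers really are independent.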

Next, how to mount a drive into your container…

NanoLab – Part 5 – Intel NUC BIOS Update Issues FwUpdateFullBuffer

Having taken delivery of a new Intel NUC D34010WYKH this week, I followed the usual (and Intel-recommended) process of upgrading the firmware / BIOS to the latest version. As it happens, this was version 0030 (WY0030.BIO). This was installed using the standard method of a USB stick with a .BIO file and pressing F7 at boot, as there was obviously no OS installed yet.

Unfortunately, having installed this version and built and booted the ESXi host, I was getting some very strange network issues. Specifically, no DHCP address was being picked up by the host, and a manually assigned IP would only ping intermittently (around 10-15% of the time). Not good. In addition, there were some very odd behaviours in the BIOS itself, such as not booting from USB consistently, hanging when I hit Ctrl-Alt-Del, and others.

My guess was that this was a firmware-related issue, so I decided to roll back to an earlier version. I started with 0026, installing the firmware using the same F7 method as above. This is when I got an error message which stated FwUpdateFullBuffer followed by several numbers (no screenshot, I’m afraid). At this point, the firmware update bombed out. Really not good!

Repeating the activity only achieved the same result, even with different firmware versions and install methods (such as a bootable USB drive with FreeDOS and iFlash2.exe).

After a bit of searching I found the following BIOS recovery mode instructions for situations when you have a screwed up BIOS:

  1. Copy the recovery file (*.bio) to a bootable USB device.
  2. Plug the USB device into a USB port of the target Intel NUC.
  3. Shut down the computer and unplug AC power.
  4. Open the chassis and remove the yellow BIOS Configuration Jumper. See the Technical Product Specification for the location of this jumper.
  5. Power the system on.
  6. Wait 2-5 minutes for the update to complete.

    Intel NUC BIOS Recovery from 0030 to 0025

  7. The computer will either turn off when the recovery process is completed or it will prompt you to turn off the computer.
  8. Remove the USB device from the USB port.
  9. Replace the BIOS Configuration Jumper.
  10. Restart the computer.

Following the above, I have updated my Intel NUC D34010WYKH to version 0025 and have found it to be reasonably stable so far, and definitely works with ESXi.

Obviously follow any of the above suggestions at your own risk. I cannot be held responsible if your NUC becomes a BRICK, but hopefully this will save people some time and frustration, as this was several hours of messing around in my case!

Docker Part 2 – HOWTO Remove / Delete Docker Containers

Docker Logo

So you have been messing with docker for a few minutes or hours, and now you have a bunch of either running or stopped containers you no longer need. How do you get rid of them?

Removing Single Containers

To remove a single docker container, you simply start by listing all of the docker containers (started or stopped) to ensure you know which one to delete:

$ sudo docker ps -a

Then remove the chosen container:

$ sudo docker rm <container name>

If the container is currently running, you can simply add -f to stop and remove the container in a single command:

$ docker rm -f <container name>

Unless it’s paused, in which case you will get an error something like the following:

Error response from daemon: Could not kill running container, cannot remove - Container e4f28eccb0cbcfbf4d78104bfe3e84039f62c5073f7301f8a39bb77a9598ae72 is paused. Unpause the container before stopping

This is easy to resolve. The “docker pause” command was added in Docker 1.0, allowing for better resource utilisation: containers you don’t currently need no longer have to waste CPU cycles. As of Docker 1.1, running containers are also paused during commit activities, to ensure a consistent file system. Simply check the ID of the container (with a ps command), unpause it, then remove it:

sudo docker ps
sudo docker unpause <container id>
sudo docker rm -f <container id>


Removing Multiple Containers

Sometimes we have built up a number of containers and we just want to scrub the lot in one go. If you want to remove all containers (running or not), first you need to generate a list of all of the container IDs, then you pass that list to the docker rm command as follows:

sudo docker rm -f $(sudo docker ps -aq)

Alternatively, if you wish to remove only the non-running (exited) containers, filter the list by status:

sudo docker rm $(sudo docker ps -aq -f status=exited)


That’ll do for now, but in the next post I will go into how to install your first app…
