My First Docker Container

I’ve been reading and writing about Docker for a while. Recently I started using Docker for a project; getting hands-on to solve real problems is my favorite way to learn. Like most infrastructure people, I don’t consider myself a developer. Sure, I write a lot of scripts, but I know almost nothing about proper software development. For my workload generator project, I was using Ansible to manage the build of a bunch of Ubuntu machines. These machines exist simply to run some workload scripts. My first workload is a script written in Python; it does some file server access and emailing.

The Ansible builds are pretty cool. A few dozen lines of playbook take a bunch of vanilla Ubuntu machines and configure them all exactly the same. Some of the configuration gets the newly built machines properly set up to access the network, but a lot of it installs the dependencies for the Python script: a few Python libraries and some helper applications. One of the interesting challenges is that my Ansible build is accretive: a component added to test a script function was never removed until I rebuilt the machine, so when I took a piece out of the build script I was never completely sure of its effect until the next rebuild. I rebuilt the Ubuntu machines almost daily, using a fully automated PXE-based install and the Ansible playbook. Even with all the automation, it was still a bit painful. The final straw was when I found that I needed to run multiple copies of the workload script on each Ubuntu machine, and to scale the number of copies up and down. So I had a few requirements that seemed to fit with using Docker:

  1. No application specific build on the worker VM
  2. Multiple worker VMs treated as a pool
  3. Scale up and down the number of workload instances
  4. Ability to have multiple workload types and scale each independently
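The first requirement, in particular, means all of the workload's dependencies move out of the host build and into the container image. A minimal Dockerfile sketch of that idea might look like the following; the base image, package names, and script name are illustrative assumptions, not my actual build:

```dockerfile
# Hypothetical sketch: bake the Python workload and its
# dependencies into the image instead of into the Ubuntu hosts.
FROM ubuntu:16.04

# Dependencies that used to live in the Ansible playbook
# now live only in the image.
RUN apt-get update && apt-get install -y python python-pip

# Placeholder dependency list; the real script needs its
# file-server and email libraries here.
RUN pip install some-workload-library

# The workload script itself, copied in and run on start.
COPY workload.py /opt/workload.py
CMD ["python", "/opt/workload.py"]
```

With everything the script needs inside the image, the worker VMs stay completely generic.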

I took all of the workload-specific parts out of the Ansible build and put them into my Dockerfile for the workload. Now the Ubuntu hosts have no installations or configurations that are specific to any workload script. If I remove a component from the build, I just have to rebuild the Docker image to verify functionality. Rebuilding a Docker image takes a couple of minutes, rather than the 45 minutes it takes to rebuild the Ubuntu boxes. Docker swarm lets me run multiple copies of the workload script spread across the Ubuntu boxes. I can increase the number of instances with a single command and decrease them just as easily.
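With swarm mode (Docker 1.12 and later), that scale-up-and-down workflow looks roughly like this; the service and image names below are placeholders:

```shell
# Create a service running 3 copies of the workload image,
# spread across the swarm of Ubuntu hosts.
docker service create --name workload --replicas 3 workload-image

# Scale up to 10 instances with a single command...
docker service scale workload=10

# ...and back down just as easily.
docker service scale workload=2
```

The swarm manager decides which Ubuntu box each instance lands on, which is what lets the hosts be treated as a pool.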

Now that the workload is not reliant on the Ubuntu build, I don’t need to rebuild the Ubuntu boxes for workload changes. Let me draw a parallel to a production environment. The application running on my servers does not require a specific host build: the container host (my Ubuntu boxes) can be patched and upgraded without breaking the applications in the containers. One containerized application can use a different library version, or even a different operating system, from every other application, and each application can be upgraded or changed without impacting the build of any other. This is like the benefit of virtual machines, but extended up into the operating system. Change management becomes easier because there is more isolation between applications.

I am very impressed with Docker containers and Docker swarm as a way to simplify working with my workload application. I did need to make some changes to my application for it to be a good fit in a container. But that is a story for another blog post.

© 2016, Alastair. All rights reserved.
