Containers (Docker, LXC) @ nodeconf 2015
Containers, like the ones you put sundries in?
Linux Containers (LXC) have been a hot topic over the last two years. The easiest way to understand a container is to compare it with a Virtual Machine (VM). You can run a VM on your machine and it will pretend to be a computer inside a computer: it emulates hardware so it can run a full Operating System (OS) within your computer. When you start a VM you have to choose how much memory and CPU it may take up, and once the machine starts, that slice of memory and CPU is dedicated to the VM (the guest) and lost to your computer (the host).
When you start an LXC container, however, it doesn’t have to emulate any machinery. It creates a “contained” environment that shares what is common with the host (the Linux kernel) and provides an isolated platform to run your processes and OS (any Linux variant) on. Running a container therefore takes very little extra memory: it doesn’t run an entire machine with emulated hardware and its own kernel, just the parts that differ from the host. This also means booting a new container is close to instantaneous.
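You can see that kernel sharing for yourself. A quick sketch, assuming a Linux host with Docker installed (any small image works; busybox is just a convenient one):

```shell
# On the host: print the kernel release.
uname -r

# Inside a busybox container: the same kernel release, because the
# container shares the host kernel instead of booting its own.
docker run --rm busybox uname -r

# The container starts in well under a second -- there is no hardware
# emulation and no OS boot, just a new set of namespaces.
```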
One of the products born on top of this kernel API is Docker. Docker makes it possible to create a container that you can provision and deploy your app on, and to commit those changes to the box. It’s like git for machines: every change to your box can be committed. Once you think your container is ready for deployment, you take the whole image and deploy it to a server. Instead of deploying your code, you now deploy your container.
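That commit workflow looks roughly like this (the container and image names here are made up for illustration, and the last step assumes a registry you can push to):

```shell
# Start from a base image and get a shell in it.
docker run -it --name myapp-build ubuntu /bin/bash

# ...inside the container: install packages, copy in code, etc. ...

# Commit the container's changes as a new image,
# much like a git commit.
docker commit myapp-build myapp:v1

# Ship the whole image instead of the code.
docker push myapp:v1
```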
Why would you do that?
Instead of killing a process or restarting a server, you just swap out a container, so deployment can become instant. You test the container on your local box, your integration server, staging and production; it becomes a matter of switching to an image instead of deploying the code on there. If it doesn’t work, you just go back a commit. Easy as pie.
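Swapping a container out, and rolling back, can be sketched as follows (names, ports and tags are illustrative; `-p 80:3000` assumes a Node app listening on 3000):

```shell
# On the server: pull the new image and swap the running container.
docker pull myapp:v2
docker stop app && docker rm app
docker run -d --name app -p 80:3000 myapp:v2

# Rolling back is just running the previous image again:
#   docker stop app && docker rm app
#   docker run -d --name app -p 80:3000 myapp:v1
```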
If you’re using Vagrant you can actually try this already, instead of using your (read: slow) VMs. Check out the vagrant-lxc plugin and give it a spin. Play around, install all kinds of conflicting versions of software, destroy the box, and enjoy how you didn’t completely bork your dev environment.
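Trying it out is only a few commands (this assumes a Linux host with Vagrant and LXC installed, and a Vagrantfile with an LXC-compatible box):

```shell
# Install the vagrant-lxc provider plugin.
vagrant plugin install vagrant-lxc

# Bring the box up as a container instead of a VM.
vagrant up --provider=lxc

# Break things with abandon, then throw the container away.
vagrant destroy -f
```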
The people from Modulus were in the session too, telling how happy they’ve been with Docker. They were really excited about Docker Swarm: a proxy that speaks the same Docker API you’re used to, but lets a single API call talk to multiple Docker hosts.
Best practices for Docker containers
First of all, most people agreed there is not really a “wrong” way to write a Dockerfile. As your Docker container becomes more complex, though, it grows bigger. Just like a git repository: the more commits, the bigger the tree becomes, because every change is tracked. A common practice is to compile and build in one Docker container and copy the compiled artifact into a near-empty Docker image, to keep the final image small.
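One hedged sketch of that build-then-copy pattern (file names, image names and the `/src/app` artifact path are all illustrative, and `make` stands in for whatever your build step is):

```shell
# Dockerfile.build -- a fat image with the full toolchain:
#   FROM ubuntu
#   RUN apt-get update && apt-get install -y build-essential
#   COPY . /src
#   RUN make -C /src        # produces /src/app

# Build it, then copy the compiled artifact out of a container:
docker build -f Dockerfile.build -t myapp-build .
docker run --rm myapp-build cat /src/app > app

# Dockerfile -- a minimal runtime image holding just the artifact:
#   FROM busybox
#   COPY app /app
#   CMD ["/app"]
docker build -t myapp .
```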
Other people suggested running a startup script inside the Docker image, instead of having the Dockerfile do all the provisioning.
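That approach keeps the Dockerfile tiny and moves the provisioning to container start. A minimal sketch, with a hypothetical `start.sh` that would install dependencies and then exec the app:

```dockerfile
FROM ubuntu
COPY start.sh /start.sh
RUN chmod +x /start.sh
# All provisioning happens when the container boots, not at build time.
CMD ["/start.sh"]
```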