25 Docker

When you mention containers, most people will think of Docker. It’s not the only container solution out there, but it is the one that took off.

Docker is more than a container manager – it’s a complete set of tools for container-based development and deployment. These tools empower you to scale your site up and down to meet different challenges.

Sometimes, traffic volumes can spike, and a single server may struggle to deal with all the added work. With Swarm Mode, you can scale your Docker-ized site onto multiple physical servers with a single command. When traffic volumes return to normal, you can scale back.

Docker Cloud is a web-based service that automates the task for you. It integrates smoothly with the most popular cloud hosting platforms (such as Amazon Web Services or DigitalOcean).

Docker is quite easy to learn, it reduces the pain and effort of cloud computing, and it can make your site more secure!

Let’s take a closer look at some of the additional benefits containers bring, and how they can make your life easier.

Deploying With Containers

Deploying applications has traditionally been very painful.

Code that works perfectly on the developer’s PC may fail or crash on a live server. This is due to small differences between the developer’s computer and the environment on the live server.

Maybe the developer has a different version of a software package installed. Maybe an application on the server is incompatible with the dev’s code.

In any case, tracking down and fixing these problems can be very painful indeed.

This often happens in the WordPress world, especially if your site uses a lot of custom code.

With containers, you can guarantee that the developer’s environment is exactly the same as the live server. Before the developer writes a single line of code, they can download the container to their computer.

If the code works on their PC, it will work on the live server.

This is a huge benefit of using containers, and it’s a major reason for their popularity.

As long as your host machine can run Docker, you can count on dependable deployment. If you make changes to your WordPress container and it works on the developer’s machine, it will work on your server. If it works on your current server, it will work on another one.

This makes it easy to migrate between hosts if you ever have to change service provider.

Getting Started With Docker and Containers

Docker is designed for simplicity, and I could give you a list of copy-paste instructions and leave you to it. But you’ll get much more out of it if you understand how it works.

So far, I’ve described containers as a type of magic box. This is a little vague, so let’s get into some specifics.

Docker uses functionality that’s built into the Linux kernel – namely namespaces and cgroups. It also relies on “copy on write” filesystems.

Without this functionality, there would be no Docker. Let’s explain these concepts one by one.

What are Cgroups?

Cgroups stands for control groups. It’s a way to organize a group of processes, and control their access to resources – like memory, CPU time, device access, and network connections.

Cgroups basically limit the resources a group of processes can use. This allows Docker to contain its processes, so they don’t eat up the host machine’s resources.
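You can see cgroups at work through docker run’s resource flags. For example, something like this caps a container at 256MB of RAM (we’ll cover the docker run command in detail later):

sudo docker run -it --memory=256m ubuntu /bin/bash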

What are Namespaces?

Namespaces affect the way processes view the system they are running in, and limit their ability to interact with the wider system.

This is the mechanism Docker uses to create the illusion that the processes are running on another machine.

Here are the types of namespace in Linux: mnt (mount points and filesystems), pid (process IDs), net (network interfaces and routing), ipc (inter-process communication), uts (hostname and domain name), and user (user and group IDs).

When Docker launches a process, it creates a new namespace for each of these categories and places the running process in that namespace. It also creates a new root filesystem just for that container and adds this to the mnt namespace of the process.

If a process creates any child processes, they are automatically included in the six namespaces that Docker created.

To the running process, the world outside the container is invisible.

Now, let’s talk about the filesystem that Docker creates. Docker builds a “copy on write” file system. To the process(es) running in the container, it looks like a normal filesystem. However, it’s not.

It’s actually made up of multiple read-only images layered on top of each other, together with a read-write layer on top.

Imagine it like a stack of tracing paper. Each piece of paper has pictures on it. If you look down from above, it looks like all the images are on one sheet. The top sheet starts empty.

Now you can draw your own pictures on this top sheet, even drawing over the pictures on the other sheets. It looks like you destroyed these images, but they’re still there.

You can also “delete” images by drawing over them with white paint. Again, the images are still there on the other sheets of paper. But, to your view, it looks like they are gone.

Let’s see how this applies to a Linux container. Inside the container, it looks like there’s a complete filesystem, together with all the files you would expect to see on a running Linux box. There are system files and directories. Programs are stored in the expected places (such as /bin/ and /usr/bin/, etc).

If you were a process accessing this filesystem, you would find everything you need to do your job! You could find your configuration files in /etc/your-name/. You’d see data in other folders, and process information in /proc/. You could write and read data (as long as you had the right permissions!)

You’d never know you were living in a simulated file system! Actually, Docker’s a little like The Matrix for Linux processes…

So, why use such a strange file structure? The idea is to keep Docker containers as light as possible. It’s quite common to run 5-10 docker containers on a single machine. If each container got a complete file system, this would rapidly use up your hard disk.

Instead, each container shares images. If you had ten copies of the same container running, they would all share the same “layers” – the only thing that would be different would be the top “read-write” layer, like the top sheet of tracing paper from our example above.

Now, some containers could be different in small ways – maybe one of them has Apache and PHP, and another has MySQL. However, if you were careful to build these containers from the common image, they would share many of the lower layers. They would have a couple of unique layers, but otherwise, they would be the same.

When containers share filesystem layers, they’re actually reading data from the same files on your hard disk.

A complete stack of these layers is called an “image”. You can make a new image from an old one by adding extra layers on top (adding, changing or deleting files).

The Docker Daemon

So I’ve explained how Docker creates containers. And I briefly mentioned images – I’ll get back to that soon. Now let’s look at how Docker works.

The core of Docker is the Docker Engine, which lives inside a daemon – a long-running process. The Docker daemon is responsible for building images, running containers, and managing containers, images, networks and volumes.

When you install Docker on your machine, it will automatically start the Docker daemon. It will also start when you reboot your machine.

The daemon also has a built-in RESTful API interface – like a web service. You can access this interface from within your machine, or remotely.

Now, you could communicate with the Docker daemon through the API, with a command line tool such as curl. It would be pretty messy, but it could be done. You would have to type the HTTP requests and understand the responses.
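For example, assuming the daemon is listening on its default Unix socket, a request like this asks it to list your running containers – and the JSON response isn’t exactly light reading:

curl --unix-socket /var/run/docker.sock http://localhost/containers/json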

Fortunately, there’s a more user-friendly way to control the daemon.

The Docker Client

The Docker client is a single command – “docker”. You add extra words after “docker” – like “docker run” or “docker pull”. The client then sends a command to the Docker daemon, which performs the action.

Because the daemon exposes a RESTful API, it’s also possible to create your own applications that communicate with it – but that’s beyond the scope of this article.

Docker Hub

Docker would be a great tool if these were the only features. But there’s another great resource – the Docker Hub. The hub is an online directory of community-made images you can download and use in your own projects. These include Linux distributions, utilities, and complete applications.

Docker has established a relationship with the teams behind popular open source projects (including WordPress) – these partners have built official images that you can download and use as-is. Or you can use them as starting points for your own projects.

What’s more, you can get the Docker Hub code and create your own private directories – for images you don’t want to share (such as your own WordPress website). Or you can pay Docker to host private image repositories for you.

How to Build an Image – the Low-Level Way

Let’s build a quick and dirty Ubuntu LAMP server – just to demonstrate how it’s done. LAMP stands for Linux, Apache, MySQL, PHP/Perl/Python. We’ll use PHP in this example.

First, we need a base image to start from:
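I’ll use the official Ubuntu image, pinned to version 16.04 (the same base we’ll use later in the Dockerfile):

sudo docker pull ubuntu:16.04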

You should see a bunch of lines indicating the progress of the download. If you look closely, you’ll see several downloads running at once. Each one is a separate layer (remember, Docker’s filesystem is a stack of layers – each layer is an image in its own right).

When Docker has finished downloading the complete image, you can move on to the next step – running the container:
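sudo docker run -it ubuntu:16.04 /bin/bash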

Let’s explain the options:
-i means “interactive mode” – it keeps the input stream open, so you can send keystrokes into the container.
-t this creates a pseudo-terminal or “tty”. It allows you to type commands into the session – inside the container.
/bin/bash – this is the path to the BASH shell (or command line, if you prefer).

After running that command, you’re inside the container, communicating with the BASH shell. Any commands you type at this point will be executed inside the container.

Right now, we have a bare-bones Ubuntu installation. There’s no Apache, MySQL or PHP. You can test that by typing one of their commands, e.g.:
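apache2 -v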

And you get an error. So we’ll have to install these components, using APT – just as you would on a regular Ubuntu machine.

Before we install anything, we need to update the package database:
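apt-get update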

OK, let’s install Apache! Type:
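We’ll install MySQL in the same step, since we need it in a moment anyway:

apt-get install -y apache2 mysql-server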

It will take a few moments to finish.

The install process will ask you to supply a password for the MySQL root user. In this example, you can skip it. In the real world, you should always set a password for the MySQL root user!

Next, we’ll install PHP:
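apt-get install -y php libapache2-mod-php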

This command installs PHP 7 and the PHP module for Apache.

Now let’s test if PHP is working:
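php -r 'echo "Yep, PHP Works.\n";'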

You should get the output “Yep, PHP Works.”

Now let’s test MySQL:
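service mysql start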

This command starts the MySQL server.
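Next, launch the MySQL client – on a fresh install the root account has no password, so this should log you straight in:

mysql -u root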

You should be logged into the MySQL client.
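Try listing the databases:

show databases;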

If everything is working, you’ll see a list of databases – MySQL uses them to store its own settings.

You can log out by typing:
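quit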

Finally, type “exit” to leave the container’s shell and return to the host environment.

OK, so we have installed the complete LAMP stack. What’s happened inside the container?

Remember, only the top layer of the file system is writable – and this layer is deleted whenever you remove the container.

In order to save our work, we’d need to save the temporary file layer – in fact, that’s how you create a new image!

Leave the container with exit

The container is still alive, although the BASH process has stopped. You can see the list of containers by typing:
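The -a flag includes containers whose process has stopped:

sudo docker ps -a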

This will give you a bunch of information about your container, including an ID hash and a made-up name. If you don’t provide a name to docker run, it will create one for you.

Now, we want to save our work – here’s how you do that:
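Use the name (or ID hash) that docker ps reported – the container name below is just an example:

sudo docker commit keen_banach my_lamp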

Docker will save the temporary filesystem as a new layer, and return a number. This number is your new image’s id hash – that’s how Docker tracks these images internally. Because it’s hard to remember a hash, we’ve named our new image “my_lamp”.

Next, let’s take a look at the images installed on your machine:
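sudo docker images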

You should see your new image.

Great, let’s kill the old container:
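Again, substitute your own container’s name:

sudo docker rm keen_banach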

Your container is gone, but you have an image!

But does it really work? Can you really use it to serve web pages? Let’s find out:

We need to start the Apache web server in foreground mode (outputting text to the terminal). If we try to run it as a daemon, the shell will exit immediately, and the container will stop. Here’s the command we need:
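Something along these lines does the job – apachectl -D FOREGROUND keeps Apache in the foreground, and -p publishes the container’s port 80 on the host:

sudo docker run -p 80:80 my_lamp apachectl -D FOREGROUND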

This command tells Docker to run (create and start) a container from the my_lamp image, publish the container’s port 80 on the host, and start Apache in the foreground as the container’s main process.

Note – this command will fail if you’re already running a web server on the machine you use to follow these steps. That’s because only one process can bind to port 80 at any given time. You can choose a different port number.

Now you should be able to open your browser and type in your machine’s address (localhost if you’re using your PC, otherwise use your machine’s IP address).

Well, if all goes well, you should see Apache’s “It works!” page.

Now, we don’t want a server that only shows the “It works!” page. How do we get our content into that server? The most naive way is to log in to a container and create the content inside it. And that works fine.

Let’s delete our current container, and start a new one (that we can log into).
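First remove the running container (substitute the name docker ps gives you), then start a fresh one we can log into:

sudo docker rm -f sleepy_fermi
sudo docker run -it --rm -p 80:80 my_lamp /bin/bash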

The --rm option tells Docker to delete the container when the process ends – BASH will end when we log out. It’s a useful way to prevent your system from getting cluttered with old containers.

The first step is to get the Apache daemon running – we can run it in the background this time, since we need the command line free for typing further commands:
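service apache2 start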

Next, we have to get our content into the web root (the directory that Apache looks into to find content).
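A quick echo does the trick – the page content here is just an example:

echo '<h1>Hello from the container</h1>' > /var/www/html/index.html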

/var/www/html/ is the web root on Ubuntu – you can change it, and other Linux distributions use different locations for the default web root.

Now we can test our changes by opening a new browser window and typing our host machine’s IP address in the address bar. You should see the new content.

So, we can get our content into the container – in a fairly ugly way. But it works. We know it works for plain old HTML – let’s test PHP.

First, delete the index.html file:
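rm /var/www/html/index.html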

Now create a new index.php file, containing the phpinfo() function:
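echo '<?php phpinfo();' > /var/www/html/index.php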

Refresh your web browser, and you should see the PHP info screen.

Now we know PHP is working through Apache. Everything works fine. So we can save our container.

Exit the container:
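exit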

Now let’s commit it to create a new image:
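If you started the container with --rm (as above), it vanishes as soon as you exit – so run the commit from a second terminal while it’s still running, or leave --rm off when you start it. The container and image names here are just examples:

sudo docker commit modest_curie my_lamp_phpinfo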

A Better Workflow

So far, our workflow looks like this: pull a base image, start a container from it, install software by hand inside the container, test it, and commit the result as a new image.

This is a fairly ugly way to build new images. Here are some of the problems: every command has to be typed by hand, it’s easy to make a mistake or skip a step, and nothing documents what you did – so it’s hard to reproduce the image later.

There is a solution – Docker’s build command, and Dockerfiles.

A Dockerfile (with a capital “D”) is a list of instructions for Docker. They tell Docker how to build an image.

Docker reads through the file and executes each line one by one. Each new line creates a new image, which Docker saves.

Behind the scenes, Docker’s going through the same steps we just did – but it’s working much faster, and it’s following the procedure flawlessly. If the build works once, it will work again and again – and in different environments, too.

Here’s a sample Dockerfile which accomplishes the same steps we took:
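The file below is one way to write it – the package names match what we installed by hand, and the echo line creates a phpinfo() page in the web root:

FROM ubuntu:16.04
# Avoid interactive prompts (like the MySQL password screen) during the build
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y apache2 mysql-server php libapache2-mod-php
# Replace Apache's default page with a phpinfo() page
RUN rm /var/www/html/index.html
RUN echo '<?php phpinfo();' > /var/www/html/index.php
CMD service mysql start && apachectl -D FOREGROUND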

The first line gives the base image – the image that Docker pulls from Docker Hub. We’re using version 16.04.

It’s a good idea to use a fixed starting point, because Ubuntu could change in the future. Without a fixed version number, Docker will pull the latest version. At some point, the Ubuntu developers may decide to use different directories for the web root, or for MySQL’s startup script. This would break our image.

The last line starts the MySQL server and runs the Apache daemon in foreground mode. It’s a convenience so we don’t have to type these commands every time we run the container.

Let’s build this file and create a new container:
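I’ll tag the image phpinfo_image – you can pick any name you like:

sudo docker build -t phpinfo_image .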

The build command follows this syntax:

docker build [options] PATH

In this case, the only option was -t, which gives the new image a name (its tag).

And the path was “.” – the current directory, which is where Docker looks for the Dockerfile.

If you watch the output, you’ll see that Docker builds a number of images before it reaches the final one. Docker stores these images in a cache, so it can build the image quickly the next time you run docker build.

Run the command again to see the cache in action:

It’s much faster, right? This is very useful when you make a lot of small changes to your project, for instance when you’re developing and testing code.

Finally, we can run our new image in a new container:
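The -d flag runs it in the background, and --name gives the container a fixed name so it’s easy to remove later:

sudo docker run -d -p 80:80 --name new_phpinfo phpinfo_image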

Check it out in the browser. Yay! It works!

Let’s stop and delete it with:
sudo docker rm -f new_phpinfo

But we’ve done something very ugly here – we’re creating our web files inside the Dockerfile, using the echo command. For short pages, that’s not a big problem.

But most web pages in the real world are hundreds of lines – if we dump that into the Dockerfile, it will become unreadable!

Including Files in a Docker Image

We can put our content in files on our host machine and use docker build to insert them into the image. Let’s create some new files on our file system:
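The folder name and page contents here are just examples – any two pages will do:

mkdir web_files
echo '<h1>Page one</h1>' > web_files/index.html
echo '<h1>Page two</h1>' > web_files/page2.html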

We’ve made a folder with two html pages.

Now let’s add them to a new image through the Dockerfile. Edit the Dockerfile to look like this:
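Here’s one way to write it – the only real change is a COPY instruction that pulls our pages into the web root:

FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y apache2 mysql-server php libapache2-mod-php
# Copy our pages into Apache's web root
COPY web_files/ /var/www/html/
CMD service mysql start && apachectl -D FOREGROUND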

Now we’ll build the image, and run it.
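This time I’ll tag the image two_page_site – the name we’ll refer to again later:

sudo docker build -t two_page_site .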

Notice how Docker uses the cache to speed up the build, even though we have made changes?

Now we’ll test it with
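a quick docker run (the container name is just an example):

sudo docker run -d -p 80:80 --name two_page_test two_page_site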

Let’s test it in the browser. If all goes well, it should work perfectly.

We can delete the container with
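sudo docker rm -f two_page_test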

Using Dockerfiles gives you automation – it also documents the build process. If you passed this file over to a developer or administrator, they would be able to understand how your container was built, and they’d understand what to expect from it.

Docker Volumes

Up to this point, we’ve had to build a new container each time we want to change the files inside the container’s filesystem. This isn’t a big deal if we’re building the final version of our tested code.

But if we are developing and testing code, it’s a pain to build each time we make a small change.

Docker allows you to add directories from your host file system as directories in a container. It calls these directories “volumes”.

Volumes are meant to save persistent data – data you want to keep. Remember, the data stored in a Docker container lives inside a special temporary file system. When you delete the container, that information is lost.

That’s bad news if the container held your MySQL database and all your content!

While the official use for volumes is to hold your volatile data, it’s also a useful tool for changing code inside a running container.

Of course, when you finish developing your code, you should wrap it up inside a complete container image. The goal is to put all the code that runs your site inside a container and store the volatile data in a volume.

You can add a volume at runtime like this:
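Here I’m mounting the web_files folder we created earlier:

sudo docker run -d -p 80:80 -v $PWD/web_files:/var/www/html two_page_site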

The -v option needs a little explanation. It tells the Docker daemon to mount the local web directory onto the /var/www/html/ directory inside the container. The $PWD fragment is an environment variable. It contains the working directory – the same info you would get if you executed “pwd” (print working directory).

In other words, $PWD is our current working directory.

I inserted $PWD because the Docker daemon requires the full path to the host directory – it doesn’t accept relative paths.

OK, let’s test it in the browser. Load the local machine in your browser, and you’ll see your pages.

Now let’s change the index.html file:
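echo '<h1>A new version of the page</h1>' > web_files/index.html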

Reload your page in the browser. You should see the new version.

Using volumes like this helps to speed up development, but you should always include your code in the final version of your container image. The code should ideally live inside the container, not in an external volume.

Docker Compose

So, we’ve seen that there’s a nice automated way to build containers, with the docker build command. This gave us automation and better reliability over the image building process.

Running new containers is still a little ugly, with long commands packed with options and extra details.

There’s a simpler way to launch docker containers. It gives you more control, and it’s more readable. I’m talking about the docker-compose command and docker-compose.yml files.

What’s a yml file? It’s a file in YAML format (“YAML Ain’t Markup Language” – originally “Yet Another Markup Language”). YAML is a structured data language (not a markup language at all). It’s like XML or JSON, but it’s much more readable than either.

In the docker-compose.yml file, you describe your operational environment. You tell docker which containers to create, which images to use, and how to configure them. You can also tell Docker how these containers should communicate.

In our example, we’ve placed everything inside the same container – Apache, MySQL and PHP. In real practice, it’s better to separate your application into several containers, one for each process – we’ll discuss this shortly.

Separating processes into containers makes it easier to manage your app, and it’s better for security.

Let’s go over a simple docker-compose.yml file we could use to launch our existing server container:
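Something like this captures everything we’ve been passing to docker run (the web_files path assumes the folder we made earlier):

version: "3"

services:
  lamp_server:
    image: two_page_site
    ports:
      - "80:80"
    volumes:
      - ./web_files:/var/www/html
    deploy:
      replicas: 1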

The version line refers to which version of the docker-compose.yml format we are using. Version 3 is the latest version at the time of writing.

The next line mentions services. What are services?

In our case, our website is a service – it’s a process that does work for users. An application can have multiple services, but our site is a single service. We’ll look at multi-service applications a little later.

A service could contain multiple containers. For instance, if our site gets really busy, we could decide to launch multiple web server containers to handle the load. They’d all be doing the same job, so together they would provide the same service.

If you do launch multiple containers through the docker-compose.yml file, Docker will automatically handle the load balancing for you. It will send each request to a different container in a round-robin fashion.

The next line is the name of our service – I’ve decided to call it “lamp_server”, although the image name is two_page_site. This doesn’t mean the container will be called lamp_server – Docker may decide to call it yourprojectname_lamp_server_1 or something similar.

We can use “lamp_server” with the docker-compose command to refer to the collection of containers.

The ports line binds the service to port 80.

The volumes line maps the local web directory to the /var/www/html/ directory inside the containers. Note that it’s OK to use local paths here, even though “docker run” rejects them.

Next, there’s a deploy option. This is where we can specify deployment details for our service. There are lots of possible options, but I have only used the “replicas” keyword. This allows me to say how many containers should run to provide the service – in this case, 1.

If we ever wanted to scale the service, we could do it with a single command:

docker-compose scale lamp_server=2

With this file in place, we can start our service with one command:
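docker-compose up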

Notice that we’re using a new command – “docker-compose”, instead of “docker”. Docker Compose is a separate program, although it communicates with the Docker daemon in a similar way to the regular docker command.

The command will launch the service, starting a single container.

OK, we’ve covered enough of Docker for you to set up a Docker-based WordPress site. There’s more to learn, of course.

You should make some time to learn more about Docker, since there are some amazing features for scaling your site across the cloud – which is a good way to deal with traffic surges.

It’s an extremely useful tool!

Docker Security Warning

Although Docker can help to make your site more secure, there are a few major issues you need to understand.

Let’s take these up one at a time: the daemon runs as root, volumes can expose your host filesystem, and the daemon itself can be handed to a container.

The Root of All Evil

The Docker daemon runs as the superuser! This means that an attack against the docker daemon would potentially give an attacker complete power over the system – as the superuser has unlimited powers under Discretionary Access Control.

If you were paying attention during the earlier sections, you’ll know that it’s possible to limit the abilities of a specific process with Linux capabilities and Mandatory Access Control.

The solution to this issue is to use a MAC solution like SELinux, GRSecurity or AppArmor.

Let Me Delete Your Hard Disk for You

We spoke about docker volumes above. In short, you can insert directories from your real filesystem into a container. It’s a useful feature.

Unfortunately, it’s also possible to mount the entire host filesystem into a container. If you do that, the container can (in theory) access any of your data, or delete it all.

This is potentially a big risk. The solution – once again – mandatory access control. Prevent the docker daemon from accessing files that are unrelated to its job and the containers you intend to run.

Pass the Daemon

It’s possible to pass a reference to the Docker daemon into a running container – there’s a socket file for the daemon. This allows processes in the container to communicate with the daemon, gaining control over it.

Some useful docker projects use this to provide container monitoring services, or to route internet traffic to the right container. We’ll use a couple of these to handle HTTPS traffic and to host multiple WordPress sites on a single machine.

However, the Docker daemon has tremendous power. It can be used to start and stop containers, pull and push images, and mount any directory on the host filesystem into a container.

Note that containers cannot access the Docker daemon unless you pass it to them inside a volume – either through the “docker run” command or inside a docker-compose.yml file.

For this reason, you should be very careful to always understand the commands you type. Never let anyone trick you into running a strange docker command.

Also, only download and use Docker images from a trustworthy source. Official images for popular projects are security audited by the Docker team. Community images are not – so make sure you read the Dockerfile and understand what the image does.

The same goes for docker-compose.yml files.

In addition, you should use mandatory access control (such as AppArmor) to limit what each container does – Docker includes an option to name an AppArmor security profile for each container you run.

You can also specify Linux capabilities in the docker-compose.yml file, or with the Docker run command.

It’s also possible to use an access control plugin for the Docker daemon. An example is the Twistlock Authz Broker. I won’t cover it in this article because we don’t have the time or space to cover every possible angle, but it’s worth reading up on.

WordPress is Complex

WordPress may look simple, but it’s actually quite a complex beast. It relies on several components to make it work – obviously, there are the core WordPress files. These interact with the web server through the PHP runtime. WordPress also relies on the file system and a database server.

The situation is even more complex when you have custom code. Your code may require additional PHP modules to run. These modules may require software packages, services, or shared libraries to function.

For instance, you may use ImageMagick to manipulate or generate graphics. To use ImageMagick from PHP, you have to install several packages – ghostscript and imagemagick, and the PHP extension.

Every piece of software you install on your web server increases your security risk. It also introduces maintenance overhead – you have to spend time updating each component on every server image.

This is bad enough if you’re only running a single web server. If you’re duplicating your server (for load balancing) you have to ensure each one is updated.

Simpler Maintenance

Using containers makes it easier to keep these images in sync, but there’s another conceptual tool that can simplify the task.

“The separation of concerns” is a useful principle in software development and maintenance. It makes maintenance easier, and it tends to contain risk.

The basic idea is that you break a complex system (or application) down into simple pieces that work together. Each piece is responsible for one simple task (or concern). When some part of the system has to change (to add a feature, fix a bug, or improve security) you can zero in on a small piece and make a change there. The rest of the system remains the same.

There are lots of ways to apply this idea to an application like WordPress. Object Oriented Programming is one tool – but this applies to the code inside your PHP files. “Microservices” are a higher-level tool you can use to decompose your web server into smaller pieces that are easier to manage.

What is a microservice? Well, let’s start by defining a service.

A service is some software component that listens for requests (over a protocol) and does something when it receives those requests. For instance, a simple web server listens for HTTP requests and sends files to users when it receives a request.

The client (the person or thing that makes the request) may be a person. It could be another program running on a different computer. It could even be a process running on the same computer.

Complex applications are often composed from multiple services. A simple WordPress site relies on a web server, a PHP runtime, and a database (usually MySQL).

From the viewpoint of the WordPress app, each of these is a “microservice”.

Usually, all of these services run on the same machine. But they don’t have to. You could put your database on a different server, for instance. Or you could put it in a different container.

Using Docker, you could install WordPress, Apache, and PHP in one container, and run MySQL from another. These containers could run on the same physical machine, or on different ones, depending on your setup.

The database service container can be configured to only accept connections that originate from the web container. This immediately removes the threat of external attacks against your database server (an incorrectly configured MySQL server on your web server could be accessed over the Internet).

With this configuration, you can remove a ton of unnecessary services from your containers. In fact, official Docker images are now built on top of Alpine Linux, a very minimal, highly secure distribution.

Because all the complex software that runs your site is safely stored in the “magic box”, you don’t need it cluttering up your host system. This gives you the perfect opportunity to remove high-risk software from your host machine, including the web server, the database server, and the PHP runtime.

None of these programs need to live on your host machine – they all run inside containers.

Behind the scenes, Docker creates a private internal network for your containers to talk to each other. Other processes on your system are not included in this network, so they can’t reach your containers over it. And the rest of the Internet is locked out of the internal network.

If your WordPress site communicates with other web applications on your machine, you can set them up in their own containers and allow them to communicate with your web server through Docker’s built-in private networks.

Docker makes it simple to set up these microservices without having to change your app’s code. It routes the messages between the containers in a secure way.

If a new version of MySQL is released, you can update the database container without touching the web container. Likewise, if PHP or Apache are updated, you can update the web container and leave the database container alone.

Some people take this architecture a step further – they separate the filesystem into its own container. WordPress uses the filesystem for its core files, templates, plugins and uploaded content.

If you separate the filesystem from the web server container, then you don’t have to update the web server so often. In fact, the only time you’ll have to update it is to install Apache or PHP updates!

In this article, we’ll use a local host directory as a volume, to store our WordPress files.

Using Multiple Containers for your WordPress Site(s)

As you can see, Docker containers are ideal for running small software components. Because Docker makes it easy to connect these containers together, there’s no reason to lump all your software inside a single container.

In fact, it’s a bad practice – it increases the security risk for any single container, and it makes it harder to manage them.

That’s why we’re going to use multiple containers for our new host machine. Each container has a reason to exist – a simple responsibility. Because Docker’s so lightweight, it’s viable to run a single container for each task.

Some of the tasks should be long running. For instance, your web-server containers should run constantly. The same goes for the database containers.

Other tasks are short-lived. Backing up your data may take a minute or two, but then it’s done. If you run a short-lived command inside a Docker container, the container will stop running as soon as the command has completed. You can configure these containers to delete themselves:
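sudo docker run --rm ubuntu:16.04 echo "Backup complete"

The --rm flag removes the container as soon as the echo command (a stand-in for your real task) finishes.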

The Ideal Docker Host

So, we’ve looked at how you can deploy your WordPress site through Docker. All the “risky” software is safely locked away inside containers, where they can’t do much harm.

However, if you’re running your containers on a typical web host machine, you already have copies of these programs installed and running. This is far from ideal – we’re trying to remove these security risks by putting them in containers!

Ideally, your host machine should be a very minimal system – virtually skeletal in its simplicity.

Building a simple system from scratch is much easier than trying to uninstall a bunch of software packages. There’s always the risk that you’ll break something along the way!

If your site is already live on an existing server, the best approach is to set up a new host machine and then migrate over to it. Here are the steps you need to take:

Let’s go through these steps one by one.
