Learning Docker – A WordPress setup

In my last post about Docker, we saw the basics of setting up images and running containers. Now we are going to build on this with another tool from Docker known as Docker Compose. We will use Docker Compose to set up a WordPress project. If you want to check out the code, you can go to the GitHub Repository and play around with it yourself. WordPress is one of the main technologies I use on a daily basis for work, and I thought it would be a good way to dig deeper into Docker.

Docker Compose

Docker Compose is a tool for Docker that lets you orchestrate containers automatically. In our first foray into Docker we manually started and ran each container from the command line. This works fine for a small setup or for testing something out, but as soon as you have many containers that all talk to each other and you want a quick way to set up and tear down your project, Docker Compose is the way to go. Docker Compose uses a docker-compose.yml file to organize the project. Here is our WordPress setup:


version: '3.3'

services:
  db:
    image: mariadb:${MARIADB_VERSION:-10.2}
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    networks:
      - back

  wordpress:
    depends_on:
      - db
    image: 'wordpress:${WP_VERSION:-5.6.1}-php${PHP_VERSION:-7.4}-apache'
    ports:
      - "8000:80"
    restart: always
    hostname: '${DOCKER_DEV_DOMAIN:-project.test}'
    volumes:
      - ./themes:/var/www/html/wp-content/themes:rw
      - ./plugins:/var/www/html/wp-content/plugins:rw
      - ./mu-plugins:/var/www/html/wp-content/mu-plugins:rw
    networks:
      - front
      - back
    environment:
      VIRTUAL_HOST: '${DOCKER_DEV_DOMAIN:-project.test}'
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

  proxy:
    image: jwilder/nginx-proxy:alpine
    restart: always
    ports:
      - '80:80'
    networks:
      front:
        aliases:
          - '${DOCKER_DEV_DOMAIN:-project.test}'
    volumes:
      - '/var/run/docker.sock:/tmp/docker.sock:ro'

networks:
  front: {}
  back: {}

volumes:
  db_data: {}

To spin up the WordPress setup, run Docker Compose in detached mode:

docker-compose up -d

You can then navigate to localhost:8000, or to project.test (if you added it to your hosts file), and you will see a fully operational WordPress install.
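For the project.test domain to resolve to your own machine, your hosts file needs an entry for it. A minimal sketch (project.test is just the default domain from the compose file):

```
# /etc/hosts on macOS/Linux
# (C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1    project.test
```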

At the very top of the docker-compose.yml we see a version declaration. I am only familiar with Docker Compose 3, but there are more details on the differences between versions on Docker’s website. The next section declares our services. Each of these services essentially becomes a Docker container, and within each service various properties configure that container.

Images

Each container is created from an image. Here we have three: mariadb:10.2, wordpress:5.6.1-php7.4-apache, and jwilder/nginx-proxy:alpine. These images all get pulled from Docker Hub, the official registry of Docker images. We have a db container running MariaDB, a wordpress container running WordPress and Apache, and lastly an Nginx server set up as a reverse proxy.
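The image tags above use Compose’s variable substitution with defaults, ${VAR:-default}, which follows shell parameter expansion rules: if the variable is unset or empty, the fallback after :- is used. A quick sketch in plain shell:

```shell
# If MARIADB_VERSION is unset or empty, the fallback 10.2 is used
unset MARIADB_VERSION
echo "mariadb:${MARIADB_VERSION:-10.2}"   # prints mariadb:10.2

# If it is set, the set value wins
MARIADB_VERSION=10.5
echo "mariadb:${MARIADB_VERSION:-10.2}"   # prints mariadb:10.5
```

This lets the compose file pin sensible defaults while still allowing you to override versions through environment variables or an .env file.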

Environment

In our WordPress and db containers we see an environment property where we set up the database credentials. In a production environment you would not want to use these values; you would use secure, properly secret usernames and passwords. For a local setup, this works just fine.
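One common way to keep credentials out of the compose file itself is an .env file next to docker-compose.yml, which Compose reads automatically for variable substitution. A sketch (the version variables come from the compose file above; MYSQL_PASSWORD as a substituted variable is my own addition here):

```
# .env — illustrative values only
MARIADB_VERSION=10.2
WP_VERSION=5.6.1
PHP_VERSION=7.4
DOCKER_DEV_DOMAIN=project.test
MYSQL_PASSWORD=use-a-strong-password-here
```

You would then reference ${MYSQL_PASSWORD} in the environment section instead of a hardcoded value.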

Networks

We also see a property for each of our containers called networks. These are used by Docker to allow containers to interface with each other. In this example we have two simple networks, front and back. In our docker-compose.yml towards the bottom you will see this code:

networks:
  front: {}
  back: {}

This sets up the two Docker networks that our containers communicate on. You can see that the db container is only on the back network, not the front. This is because we want the database on a network with other backend services. The wordpress container is on both the front and back networks so that it can talk to the database as well as to the Nginx reverse proxy. Lastly, the Nginx proxy runs on the front network with an alias set to whatever our domain for the project will be, making the proxy reachable by that domain name. By default this is project.test; as long as your hosts file points that domain to localhost, you will be able to use it to access WordPress. Pretty neat.

Depends On

We have the wordpress container set up to depend on the db container. This means that the wordpress container will not start until the db container has been started. This makes sense, as WordPress cannot work without the SQL database backing it. depends_on allows us to have complex orchestration of containers, and as more services get added there may be more dependencies.
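If you need WordPress to wait until MariaDB is actually accepting connections (depends_on alone only orders container startup, it does not guarantee readiness), newer Compose versions following the Compose Specification support a health-check condition. A hedged sketch:

```yaml
services:
  db:
    image: mariadb:${MARIADB_VERSION:-10.2}
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 5
  wordpress:
    depends_on:
      db:
        condition: service_healthy
```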

Restart

What should happen if a container crashes? That is the purpose of the restart property, which tells Docker what to do when a given container exits. In this case all three containers are set to always restart.
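always is not the only option; Docker supports four restart policies. A sketch of the alternatives (the service name is illustrative):

```yaml
services:
  example:
    restart: "no"              # never restart (the default)
    # restart: on-failure      # restart only after a non-zero exit code
    # restart: unless-stopped  # like always, but stays down if you stop it
    # restart: always          # always restart, even after the daemon restarts
```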

Volumes

The volumes property is one of the most important. We will look at two different types of mounts. First we bind our local file system to the wordpress container filesystem for the themes, plugins, and mu-plugins directories:

      - ./themes:/var/www/html/wp-content/themes:rw
      - ./plugins:/var/www/html/wp-content/plugins:rw
      - ./mu-plugins:/var/www/html/wp-content/mu-plugins:rw

The first part of a volume binding is the path on the host that we want to bind, here the themes folder in the current directory. The next part is the path on the container’s filesystem. In this case WordPress is installed at /var/www/html, making the themes folder available at /var/www/html/wp-content/themes. The last part of the binding is the access mode. Here each of the volumes for the wordpress container is set to rw (read-write), allowing us to edit the files on our host machine’s filesystem and have the changes show up inside the container.
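Compose also offers a more verbose long-form syntax for the same bind mounts (available in file format 3.2 and later), which spells each field out explicitly:

```yaml
services:
  wordpress:
    volumes:
      - type: bind
        source: ./themes
        target: /var/www/html/wp-content/themes
        read_only: false
```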

Another type of volume we see is a regular named volume. Volumes are what Docker uses to persist data. In this case we are using a volume for our database, so that when we stop our containers and restart them, the WordPress database is still there. Volumes, unlike bind mounts, are easier to share between containers and are great when you do not need to edit the files locally but still need persistence. Let’s look at two parts of our docker-compose.yml to see how to specify a volume.

      - db_data:/var/lib/mysql

volumes:
  db_data: {}

The first property is on the db container: a volume named db_data bound to the /var/lib/mysql folder on the db container’s filesystem. This is where MariaDB stores the database, so we are essentially creating a volume to store all of the project’s database data.

The second volumes property is at the top level of the file and simply declares that there is a volume named db_data; the {} signifies that we are using the default options for the volume.

Wrapping Up

I really like how quickly you can set up projects with Docker and how each part of the project can be quickly spun up and down without ever having to install anything on the host machine. If you want to run some PHP code you could spin up a Docker container with an image of PHP installed allowing you to run the code. Then when you are done simply stop and remove the container and nothing was installed directly to the host machine. With Docker Compose you can set up complex projects with different services set inside of containers that can all communicate with one another. Managing this locally can be cumbersome. Managing this with Docker Compose is a breeze. Next I am going to be digging deeper into Docker, and how to use it for actual production deployments instead of just as a local development tool.

Learning Docker – Starting with the basics

Docker is a tool I have been looking to incorporate into my development process for some time. I have used it many times, glossing over the basics and hacking my way through, but until now I have not dug into the finer details of how to operate Docker. This guide assumes you already have Docker installed.

What is Docker?

Docker is an open-source project for automating the deployment of applications as portable, self-sufficient containers that can run on the cloud, on-premises, or locally. Docker is also a company that promotes and evolves this technology.

Docker allows you to run code inside a container that communicates directly with the host operating system through the Docker engine. A container is like a computer that is ready to run, and an image is like a ready-to-go hard drive for that container. One of the biggest appeals of Docker is that it lets you create and run consistent environments, meaning you will not run into situations where code works on someone else’s machine but not yours.

Getting started with the basics

Docker is a big topic. We will explore it piece by piece, and hopefully by the end a clearer understanding of what Docker is and how to use it will emerge.

The Dockerfile

A Dockerfile is used to build a Docker image. It is a plain text file that contains sequential instructions for how to build the image. By default Docker looks for a file named exactly Dockerfile, capitalized and with no extension. Let’s create a project directory with a Dockerfile in it containing the following code:

FROM ubuntu


RUN apt-get update

CMD ["echo", "Welcome to Docker"]

Now let’s breakdown each line.

FROM

Generally you build a new Docker image on top of an official image. In this case ubuntu is the image we build from, called the parent image. You can find images hosted on Docker Hub. If we want to create a base image from nothing, we use FROM scratch in the Dockerfile. All other instructions in the Dockerfile modify the image from that starting point.

MAINTAINER

The MAINTAINER instruction declares who maintains the image. (It has since been deprecated in favor of a label, e.g. LABEL maintainer="you@example.com".)

ENV

ENV is a way to create key-value pairs inside the image. There are two syntaxes, one setting a single variable per instruction:

ENV APP nginx
ENV PORT 80

The above is equivalent to the following one-liner:

ENV APP=nginx PORT=80

Then those values can be used in the rest of the Dockerfile.
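For example (APP and PORT are the illustrative names from above), later instructions can reference the values:

```dockerfile
FROM ubuntu
ENV APP=nginx PORT=80
# $APP expands to nginx during the build
RUN apt-get update && apt-get install -y $APP
# EXPOSE documents that the container listens on $PORT (80)
EXPOSE $PORT
```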

RUN

RUN apt-get update

This line runs a command in a shell during the build, which lets us modify the Docker image. Multiline instructions can be built using the \ character:

RUN apt-get update && apt-get install -y \
    package1 \
    package2

CMD

The CMD instruction defines what will be executed when a container is created from the Docker image. We should define only one CMD in a Dockerfile; if we define multiple CMD instructions, only the last one takes effect. If we specify a command while running the container, that command takes precedence and the CMD instruction is not executed. In this case it is a simple echo of the message provided in the arguments.
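CMD comes in two forms, and the exec form used in our Dockerfile is generally preferred because it runs the command directly rather than through a shell:

```dockerfile
# Exec form: a JSON array, no shell involved
CMD ["echo", "Welcome to Docker"]

# Shell form: wrapped in /bin/sh -c, so shell features like
# variable expansion apply
# CMD echo "Welcome to Docker"
```

You can also override CMD at run time, e.g. docker run my_image echo "something else".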

Building the image

Now that we have gone over the basics of our example Dockerfile, let’s build the Docker image using the following command:

docker build -t my_image .

This builds an image named my_image from the current working directory (.), where the Dockerfile should reside. You will see a whole bunch of output in your terminal: the base image is pulled, the RUN command executes, and finally the image is assembled. The -t flag is what lets us tag the image with the name my_image.

Running the image on a container

Now we will run the image on a container. Remember the container is like a computer, and the image is like a special hard drive ready to go. We can run a container with the following command.

docker run --name mycontainer my_image

We should see the output Welcome to Docker in our terminal, meaning the container started and ran our CMD. This is kind of like a Hello World example for Docker. We will explore more together in future posts!

Wrapping up

Docker uses containers to run images of predefined files, libraries, and code. Images can be built using a Dockerfile which is essentially a list of instructions for how to modify the image. Images can be built on top of parent images allowing ease of extension. Containers are like a computer that runs within the Docker engine to communicate with our host operating system.