2022-06-13

Running Multi-container Applications with Docker Compose

Introduction

In this article, I explain how to run multi-container applications with Docker Compose, covering the fundamentals: the structure of a Compose file, defining services, configuring networks and volumes, and managing the application lifecycle.

Structure of a Compose file

A Docker Compose file is a YAML file that defines the services, networks, and volumes for your multi-container application. The file is typically named docker-compose.yml and placed at the root of your project directory. The basic structure of a Compose file includes the following top-level elements:

  • version
    Specifies the Compose file format version. Different versions support different features and syntax.

  • services
    The services that make up your application. Each service corresponds to a container and is defined by a Docker image or a build context.

  • networks (Optional)
    Custom networks for your application. Networks enable communication between your services and can be configured with specific drivers and options.

  • volumes (Optional)
    Named volumes for your application. Volumes provide persistent storage for your containers and can be used to share data between services.
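
For orientation, here is a minimal sketch that uses all four top-level elements. The nginx image also appears in the later examples, while app_net, app_data, and the mount path (nginx's default document root) are placeholder choices for illustration:

docker-compose.yml
version: '3.9'
services:
  web:
    image: nginx:latest    # run the official nginx image
    networks:
      - app_net            # attach the service to a custom network
    volumes:
      - app_data:/usr/share/nginx/html   # mount a named volume into the container
networks:
  app_net:
volumes:
  app_data: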

Defining services

Services are the core components of your application and are defined within the services section of your Compose file. Each service represents a container and is built from a Docker image or a build context. To define a service, follow these steps:

  1. Under the services section, add a new entry with a descriptive name for your service (e.g., web, database, redis).
  2. Specify the Docker image to use for the service using the image key. You can use an official image from Docker Hub or a custom image from your private repository.
  3. (Optional) If you need to build a custom image, use the build key instead of the image key, and provide the build context (the path to the directory containing your Dockerfile).
  4. (Optional) Use the depends_on key to specify any services that the current service depends on. This ensures that those services are started before the current one.
  5. (Optional) Configure any additional settings for the service, such as ports, volumes, environment, networks, and more.

Here's an example of a Compose file defining a web service, a Redis service, and a PostgreSQL service with interdependencies:

docker-compose.yml
version: '3.9'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: ./app
    depends_on:
      - redis
      - db
  redis:
    image: redis:alpine
  db:
    image: postgres:13-alpine
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb

In this example, the web service depends on the app service, and the app service depends on both the redis and db services. This configuration ensures that the services are started in the correct order: redis and db start first, then app, and finally web. Note that depends_on only controls startup order; it does not wait for a dependency to be ready to accept connections.
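
If you need to wait for a dependency to be ready rather than merely started, and you are running Docker Compose v2 (which implements the Compose Specification), one option is to combine a healthcheck with the long form of depends_on. Below is a minimal sketch of that pattern, assuming pg_isready is an acceptable readiness probe for the PostgreSQL container; only the relevant services and keys are shown:

docker-compose.yml
services:
  app:
    build: ./app
    depends_on:
      db:
        condition: service_healthy   # wait until db reports healthy, not just started
  db:
    image: postgres:13-alpine
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myuser -d mydb"]   # probe readiness with pg_isready
      interval: 5s
      timeout: 5s
      retries: 5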

Configuring networks

By default, Docker Compose creates a single default network for your application and connects all services to it. However, you can define custom networks for more granular control over the communication between your services. To create a custom network, follow these steps:

  1. Under the networks section, add a new entry with a descriptive name for your network (e.g., frontend, backend).
  2. (Optional) Specify the network driver and options using the driver and driver_opts keys.
  3. In the services section, use the networks key for each service that you want to connect to the custom network. Specify the network name as a list item under the networks key.

Here's an example of a Compose file defining two custom networks and connecting different services to them:

docker-compose.yml
version: '3.9'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - app
    networks:
      - frontend
  app:
    build: ./app
    depends_on:
      - redis
      - db
    networks:
      - frontend
      - backend
  redis:
    image: redis:alpine
    networks:
      - backend
  db:
    image: postgres:13-alpine
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    networks:
      - backend

networks:
  frontend:
  backend:

In this example, we created two custom networks: frontend and backend. The web and app services are connected to the frontend network, while the app, redis, and db services are connected to the backend network. This configuration isolates the frontend and backend components of the application.
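
Once the stack is running, you can inspect the networks that Compose created with docker network ls. Compose prefixes network names with the project name (by default, the name of the directory containing the Compose file), so expect names along the lines of <project>_frontend and <project>_backend; docker network inspect then shows which containers are attached to each one:

bash
$ docker network ls
$ docker network inspect <project>_backend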

Configuring volumes

Docker Compose allows you to create named volumes to persist data and share it between services. To create a named volume, follow these steps:

  1. Under the volumes section, add a new entry with a descriptive name for your volume (e.g., db_data, app_uploads).
  2. In the services section, use the volumes key on each service that should mount the named volume. Specify the source (volume name) and the target (mount point inside the container) in the format <source>:<target>.

Here's an example of a Compose file defining a named volume and mounting it to the PostgreSQL service:

docker-compose.yml
version: '3.9'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: ./app
    depends_on:
      - redis
      - db
  redis:
    image: redis:alpine
  db:
    image: postgres:13-alpine
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydb
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:

In this example, we created a named volume called db_data and mounted it at the /var/lib/postgresql/data directory in the db service container. This configuration ensures that the PostgreSQL data persists across container restarts and re-creation. Named volumes can also be mounted by multiple services when they need to share data.
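
Once the stack has been started, you can confirm that the volume exists with docker volume ls. As with networks, Compose prefixes the volume name with the project name, so it appears as something like <project>_db_data, and docker volume inspect shows where the data is stored on the host:

bash
$ docker volume ls
$ docker volume inspect <project>_db_data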

Building and Running Applications with Docker Compose

Building your Docker images

If your Compose file defines services that are built from a custom build context, you can build those images before starting your application; docker-compose up will also build images that do not exist yet, but building explicitly lets you catch build errors before any containers start. To build the images defined in your Compose file, run the following command in the same directory as your docker-compose.yml:

bash
$ docker-compose build

This command will build all the required images for your services, as defined in the Compose file. If you only need to build a specific service, you can specify the service name after the build command:

bash
$ docker-compose build <service_name>
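
For example, in the Compose file above only the app service defines a build context (./app), so you could rebuild just that image with:

bash
$ docker-compose build app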

Starting services and containers

Once your images are built, you can start your application using Docker Compose. To start your services and create containers for them, run the following command:

bash
$ docker-compose up

By default, this command will run in the foreground, displaying logs from all services on the console. To run your services in detached mode (background), use the -d flag:

bash
$ docker-compose up -d
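
Once the stack is up, docker-compose ps lists the containers that Compose manages for the current project, which is a quick way to confirm that every service started:

bash
$ docker-compose ps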

You can also specify the desired scale for each service by using the --scale flag followed by the service name and the number of replicas:

bash
$ docker-compose up --scale <service_name>=<number_of_replicas>

Stopping and removing containers

To stop your services and remove the associated containers, use the following command:

bash
$ docker-compose down

This command stops all running services and removes the containers and the networks defined in your Compose file; named volumes are preserved by default (see below for how to remove them as well). If you want to stop the services without removing the containers and networks, use the stop command instead:

bash
$ docker-compose stop
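
If you do want down to also remove the named volumes declared in the Compose file (such as db_data from the earlier example), pass the --volumes (or -v) flag:

bash
$ docker-compose down --volumes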

Scaling services

If your application requires horizontal scaling, Docker Compose makes it easy to adjust the number of replicas for each service. To scale a specific service, pass the --scale flag to the up command, followed by the service name and the desired number of replicas:

bash
$ docker-compose up -d --scale <service_name>=<number_of_replicas>

Keep in mind that scaling services may require additional configuration, such as load balancing or data partitioning, depending on the nature of your application.
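
In the example stack, app is the natural candidate for scaling because it does not publish a fixed host port; web maps host port 80, so running more than one replica of it would cause a port conflict. To run three replicas of app:

bash
$ docker-compose up -d --scale app=3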

Ryusei Kakujo

