In a previous blog post, I demonstrated how to implement integration tests in a .NET application using Testcontainers.

If you prefer not to manage Docker containers within the test code, you can start and stop the containers independently of the test execution. Docker Compose and the Azure DevOps Service Containers feature are good options for this.

The example code for this blog post is available on GitHub: https://github.com/davull/demo-docker-compose-test

Docker Compose

Docker Compose allows you to define and manage multiple Docker containers in a single file and to start and stop them with a single command. It also lets you define networks between containers, create storage volumes, and set environment variables. For the demo application used here, I use a MariaDB database and a phpMyAdmin container. A simplified version of the Docker Compose file looks like this (the full configuration can be found in the GitHub repository):

name: "orderapp"

services:
  order-mariadb:
    image: mariadb:11.3
    container_name: order-mariadb
    volumes:
      - order-mariadb:/var/lib/mysql
    networks:
      - order-net
    environment:
      MYSQL_ROOT_PASSWORD: "some-password"
      MYSQL_DATABASE: "orders"

  order-pma:
    image: phpmyadmin
    container_name: order-pma
    ports:
      - "9002:80"
    networks:
      - order-net

networks:
  order-net:

volumes:
  order-mariadb:

Using docker compose up and docker compose down, you can start and stop all containers respectively.

If you want to use a MariaDB container for the integration tests of your application, there are a few things to consider. You need to initialize the database and populate it with appropriate test data. In xUnit, for instance, this can be done with a shared context (a fixture) that seeds the database before the test run; after the test run, the database is simply deleted.

public class DatabaseFixture : IAsyncLifetime
{
    private string _databaseName = null!;

    // Create a uniquely named test database and seed it before the tests run.
    public async Task InitializeAsync()
    {
        _databaseName = GetRandomTestDatabaseName();
        await Database.Seed(_databaseName);
    }

    // Drop the test database after the tests have finished.
    public async Task DisposeAsync()
        => await Database.DeleteDatabase(_databaseName);
}
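
In xUnit, a test class consumes this fixture via IClassFixture<T>: the fixture is created once per test class, InitializeAsync runs before the first test and DisposeAsync after the last one. The following is a minimal sketch of such a test class; the OrderTests name and the placeholder assertion are illustrative and not taken from the demo repository.

public class OrderTests : IClassFixture<DatabaseFixture>
{
    private readonly DatabaseFixture _fixture;

    // xUnit injects the shared DatabaseFixture instance into the test class.
    public OrderTests(DatabaseFixture fixture)
        => _fixture = fixture;

    [Fact]
    public void Seeded_database_is_available()
    {
        // Placeholder assertion - a real test would query the seeded
        // database through the application's data access layer.
        Assert.NotNull(_fixture);
    }
}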

Alternatively, you can use a feature of the MariaDB image that executes arbitrary SQL scripts on container startup. To do this, you mount a folder into the container at /docker-entrypoint-initdb.d. All SQL files in this folder are executed when the container starts with an empty data directory, i.e. on its first start.

services:
  order-mariadb:
    image: mariadb:11.3
    volumes:
      - ./initdb:/docker-entrypoint-initdb.d

Besides the initial seeding of the database, you also need a way to determine, after the container has started, whether the database server is ready to accept connections. This is crucial when running integration tests in a CI/CD pipeline, to ensure that the tests only start once the container is fully operational. Docker healthchecks are well suited for this, and the MariaDB image already ships with a suitable healthcheck.sh script.

services:
  order-mariadb:
    image: mariadb:11.3
    healthcheck:
      interval: 3s
      retries: 3
      test:
        [
          "CMD",
          "healthcheck.sh",
          "--su-mysql",
          "--connect",
          "--innodb_initialized",
        ]
      timeout: 30s

Docker Compose in an Azure Pipeline

With the previously created Docker Compose file, you can run the integration tests locally and also in an Azure DevOps CI/CD pipeline. First, we start our containers using the DockerCompose@0 task:

- task: DockerCompose@0
  displayName: "Docker compose up"
  inputs:
    containerregistrytype: Container Registry
    dockerComposeFile: "./docker/docker-compose.yml"
    action: Run a Docker Compose command
    dockerComposeCommand: "up -d"
    projectName: "orderapp"

After starting the containers, you need to wait until the healthcheck reports the database container as healthy before running the tests.

If you are running your Azure pipeline in a Container Job, you must also connect the database container to the network of the pipeline container. The predefined variable $(Agent.ContainerNetwork) contains the name of the network in which the pipeline container is running.

- task: CmdLine@2
  displayName: "Prepair containers"
  inputs:
    workingDirectory: "./docker"
    script: |
      echo -e "Connect database to network $(Agent.ContainerNetwork) ...\n"
      docker network connect $(Agent.ContainerNetwork) order-mariadb
      
      echo -e "Waiting for container to be healthy ...\n"
      until [ "$(docker inspect -f '{{.State.Health.Status}}' order-mariadb)" == "healthy" ]; do
        sleep 1
      done

Afterwards, the tests can be run. Since the database connection in the CI pipeline is configured differently than on a local PC or in a production environment, you can set the connection settings via a .runsettings file. It lets you define environment-specific values, so the tests connect to the correct database instance wherever they run.

<RunSettings>
  <RunConfiguration>
    <EnvironmentVariables>
      <DB_SERVER>order-mariadb</DB_SERVER>
      <DB_PORT>3306</DB_PORT>
      <DB_NAME>orders</DB_NAME>
      <DB_USER>root</DB_USER>
      <DB_PASSWORD>some-password</DB_PASSWORD>
    </EnvironmentVariables>
  </RunConfiguration>
</RunSettings>
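
In the test code, these environment variables can then be read to construct the connection string. The following is a minimal sketch under the assumption that the demo application configures its database connection from exactly these variables; the TestDatabaseConfiguration class, its defaults, and the connection-string format are illustrative and not taken from the demo repository.

// Minimal sketch: build a MariaDB connection string from the environment
// variables defined in the .runsettings file shown above. The defaults and
// the helper itself are illustrative, not part of the demo repository.
public static class TestDatabaseConfiguration
{
    public static string BuildConnectionString()
    {
        var server   = Environment.GetEnvironmentVariable("DB_SERVER")   ?? "localhost";
        var port     = Environment.GetEnvironmentVariable("DB_PORT")     ?? "3306";
        var database = Environment.GetEnvironmentVariable("DB_NAME")     ?? "orders";
        var user     = Environment.GetEnvironmentVariable("DB_USER")     ?? "root";
        var password = Environment.GetEnvironmentVariable("DB_PASSWORD") ?? "some-password";

        return $"Server={server};Port={port};Database={database};Uid={user};Pwd={password}";
    }
}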

In the DotNetCoreCLI@2 task, you pass the file via the --settings option, so the .NET CLI applies these settings during the test run.

- task: DotNetCoreCLI@2
  displayName: "Dotnet test"
  inputs:
    command: test
    arguments: "--settings ./src/ci-tests.runsettings"

After the test run, the containers are stopped again.

- task: DockerCompose@0
  displayName: "Docker compose down"
  condition: always()
  inputs:
    containerregistrytype: Container Registry
    dockerComposeFile: "./docker/docker-compose.yml"
    currentWorkingDirectory: "./docker"
    action: Run a Docker Compose command
    dockerComposeCommand: "down -v"
    projectName: "orderapp"

Azure DevOps Service Containers

As an alternative to setting up and starting Docker containers with the Docker@2 or DockerCompose@0 tasks, Microsoft offers the option to provide container resources as Service Containers within a pipeline. These containers run alongside the build and test jobs and can be accessed by them, which also makes Service Containers suitable for providing resources such as databases for integration tests.

The configuration of Service Containers is done directly in the .yaml file of the pipeline definition; the feature is not available for classic Azure Pipelines. The syntax is very similar to that of Docker Compose, so those familiar with it will find it easy to adapt.

Containers are defined under the resources section and are then made available to a job via its services keyword (container resources can also be used for Container Jobs).

resources:
  containers:
    - container: order-mariadb
      image: mariadb:11.3
      ports:
        - 3306:3306
      env:
        MYSQL_ROOT_PASSWORD: "some-password"
        MYSQL_DATABASE: "orders"

services:
  order-mariadb: order-mariadb

With this setup, the MariaDB server can be accessed at localhost:3306.

If you are not already using Docker containers for local development whose configuration you could reuse for the integration tests in the CI pipeline, Service Containers offer a simple way to provide the necessary resources.