Use services and databases in Bitbucket Pipelines
Bitbucket Pipelines allows you to run multiple Docker containers from your build pipeline. You can start extra containers if your pipeline requires additional services when testing and operating your application. These extra services may include data stores, code analytics tools, and stub web services.
You define such additional services (and other resources) in the definitions section of the bitbucket-pipelines.yml file. These services can then be referenced in the configuration of each pipeline that needs them.
When a pipeline runs, services referenced in a step of your bitbucket-pipelines.yml will be scheduled to run with your pipeline step. These services share a network adapter with your build container and all open their ports on localhost. No port mapping or hostnames are required. For example, if you were using Postgres, your tests just connect to port 5432 on localhost. The service logs are also visible in the Pipelines UI if you need to debug anything.
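As an illustration, a step could reach a Postgres service directly on localhost. This is a hedged sketch, not part of the examples below: the postgres service name, image tag, and password are all illustrative.

```yaml
pipelines:
  default:
    - step:
        image: postgres:9.6          # build image that ships the psql client
        script:
          # the service shares the network adapter, so plain localhost works
          - PGPASSWORD=password psql -h localhost -p 5432 -U postgres -c 'SELECT 1;'
        services:
          - postgres
definitions:
  services:
    postgres:
      image: postgres:9.6
      environment:
        POSTGRES_PASSWORD: password
```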
Pipelines enforces a maximum of 5 service containers per build step. See sections below for how memory is allocated to service containers.
For further examples of using databases with Pipelines, see Test with databases in Bitbucket Pipelines.
This example bitbucket-pipelines.yml file shows both the definition of a service and its use in a pipeline step. A breakdown of how it works is presented below.
```yaml
pipelines:
  branches:
    master:
      - step:
          image: redis
          script:
            - redis-cli -h localhost ping
          services:
            - redis
            - mysql
definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_ROOT_PASSWORD: password
```
Defining a service
Services are defined in the definitions section of the bitbucket-pipelines.yml file.
For example, the following defines two services: one named redis that uses the library image redis from Docker Hub (version 3.2), and another named mysql that uses the official Docker Hub MySQL image (version 5.7).
Note that the environment section must contain only literal values, not environment variables.
```yaml
definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_ROOT_PASSWORD: password
```
Service memory limits
Each service definition can also define a custom memory limit for the service container, specified as a memory parameter in megabytes.
The relevant memory limits and default allocations are as follows:
- Regular steps (size: 1x) are allocated 4096 MB of memory in total; large build steps (size: 2x) are allocated 8192 MB in total.
- The build container is always allocated 1024 MB of this, which covers your build process and some Pipelines overheads (agent container, logging, etc.).
- The total memory allocated to services on a given pipeline step must not exceed the remaining memory, which is 3072 MB for 1x steps and 7168 MB for 2x steps.
- Service containers are allocated 1024 MB of memory by default, and can be configured to any number of megabytes from 128 MB up to the step maximum of 3072/7168 MB.
- The Docker-in-Docker daemon used for Docker operations in Pipelines is treated as a service container, and also has a default memory limit of 1024 MB. This can also be adjusted to any value between 128 MB and 3072/7168 MB by changing the memory setting on the built-in docker service in the definitions section.
In the example shown below, if a step is configured with the docker service at 512 MB, redis at 512 MB, and mysql at the default memory (1024 MB), the build container for that step will have a memory limit of 2048 MB.
```yaml
pipelines:
  default:
    - step:
        services:
          - redis
          - mysql
          - docker
        script:
          - echo "This step is only allowed to consume 2048 MB of memory"
          - echo "Services are consuming the rest. docker 512 MB, redis 512 MB, mysql 1024 MB"
definitions:
  services:
    redis:
      image: redis:3.2
      memory: 512
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1 GB to 512 MB
    mysql:
      image: mysql:5.7
      # memory: 1024  # default value
      environment:
        MYSQL_ROOT_PASSWORD: password
```
Use a service in a pipeline
Once a service has been defined in the 'definitions' section of the bitbucket-pipelines.yml file, you can reference that service in pipeline steps.
For example, the following causes the redis service to run alongside the step:
```yaml
pipelines:
  default:
    - step:
        image: node
        script:
          - npm install
          - npm test
        services:
          - redis
```
Use a private image
You can define a service that has restricted access as follows:
```yaml
definitions:
  services:
    redis:
      image:
        name: redis:3.2
        username: firstname.lastname@example.org
        password: $DOCKER_PASSWORD
```
For a more complete example of using Docker images from different registries and in different formats, see Use Docker images as build environments.
Caveats and limitations
Services in Pipelines have the following limitations:
- Maximum of 5 services for a step
- Memory limits as described above
- Service environment section must contain only literal values, not environment variables
- No REST API for accessing services and logs under pipeline results
- No mechanism to wait for service startup
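Because Pipelines does not wait for a service to become ready, one common workaround (a sketch, assuming the build image provides the nc utility; the mysql service and port are illustrative) is to poll the service port from the step script before running anything that depends on it:

```yaml
- step:
    script:
      # poll mysql on localhost:3306 for up to ~30 seconds before testing
      - for i in $(seq 1 30); do nc -z localhost 3306 && break; sleep 1; done
      - npm test
    services:
      - mysql
```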
If you want to run a larger number of small services, running them via docker run or docker-compose may be a viable alternative.
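A sketch of that approach using the built-in docker service is shown below. The image tags and published ports are illustrative, and the assumption that containers published with -p are reachable from the build container on localhost should be verified for your setup.

```yaml
pipelines:
  default:
    - step:
        script:
          # start as many small containers as needed via Docker-in-Docker;
          # publish their ports so the build can connect to them
          - docker run -d -p 6379:6379 redis:3.2
          - docker run -d -p 11211:11211 memcached:1.5
          - npm test
        services:
          - docker
```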