Use services and databases in Bitbucket Pipelines
Bitbucket Pipelines allows you to run multiple Docker containers from your build pipeline. Start additional containers when your pipeline needs supporting services to test and operate your application, such as datastores, code-analytics tools, and stub web services.
You define these additional services (and other resources) in the definitions section of the bitbucket-pipelines.yml file. These services can then be referenced in the configuration of any pipeline that needs them.
When a pipeline runs, services referenced in a step of your bitbucket-pipelines.yml will be scheduled to run with your pipeline step. These services share a network adapter with your build container and all open their ports on localhost, so no port mapping or hostnames are required. For example, if you were using Postgres, your tests just connect to port 5432 on localhost. The service logs are also visible in the Pipelines UI if you need to debug anything.
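For instance, a Postgres service and a step that connects to it on localhost might look like the following sketch. The database name, the psql client being available in the step image, and the POSTGRES_PASSWORD pipeline variable are all assumptions for illustration:

```yaml
definitions:
  services:
    postgres:
      image: postgres:9.6
      variables:
        POSTGRES_DB: pipelines
        POSTGRES_PASSWORD: $POSTGRES_PASSWORD  # hypothetical secured pipeline variable

pipelines:
  default:
    - step:
        image: postgres:9.6  # reused here only so the psql client is available
        services:
          - postgres
        script:
          # The service shares the build container's network, so it is
          # reachable on localhost:5432 with no port mapping.
          - PGPASSWORD=$POSTGRES_PASSWORD psql -h localhost -p 5432 -U postgres -d pipelines -c 'SELECT 1;'
```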
Pipelines enforces a maximum of 5 service containers per build step. See sections below for how memory is allocated to service containers.
For further examples of using databases with Pipelines, see Test with databases in Bitbucket Pipelines.
This example bitbucket-pipelines.yml file shows both the definition of a service and its use in a pipeline step. A breakdown of how it works is presented below.
```yaml
pipelines:
  branches:
    master:
      - step:
          image: redis
          script:
            - redis-cli -h localhost ping
          services:
            - redis
            - mysql
definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
```
Defining a service
Services are defined in the definitions section of the bitbucket-pipelines.yml file.
For example, the following defines two services: one named redis that uses the library image redis from Docker Hub (version 3.2), and another named mysql that uses the official Docker Hub MySQL image (version 5.7).
The variables section allows you to define variables for the service container, either as literal values or from existing pipelines variables.
```yaml
definitions:
  services:
    redis:
      image: redis:3.2
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
```
Service memory limits
Each service definition can also define a custom memory limit for the service container, by using the memory keyword (in megabytes).
The relevant memory limits and default allocations are as follows:
- Regular steps have 4096 MB of memory in total; large build steps (which you can define using size: 2x) have 8192 MB in total.
- The build container is given 1024 MB of the total memory, which covers your build process and some Pipelines overheads (agent container, logging, etc.).
- The total memory for services on each pipeline step must not exceed the remaining memory, which is 3072 MB for 1x steps and 7168 MB for 2x steps.
- Service containers get 1024 MB of memory by default, but can be configured to use between 128 MB and the step maximum (3072/7168 MB).
- The Docker-in-Docker daemon used for Docker operations in Pipelines is treated as a service container, and so has a default memory limit of 1024 MB. This can also be adjusted to any value between 128 MB and 3072/7168 MB by changing the memory setting on the built-in docker service in the definitions section.
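As a sketch of how a 2x step raises the ceiling, the following (reusing the mysql service from the examples above) grants the service more memory than a 1x step would allow:

```yaml
pipelines:
  default:
    - step:
        size: 2x              # 8192 MB total for this step
        services:
          - mysql
        script:
          - echo "mysql may use up to 4096 MB here"
definitions:
  services:
    mysql:
      image: mysql:5.7
      memory: 4096            # above the 3072 MB ceiling of a 1x step
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
```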
In the example shown below, the build container is limited to 2048 MB of memory because the service containers consume the other 2048 MB:
- docker has 512 MB,
- redis has 512 MB,
- mysql uses the default memory limit (1024 MB).
```yaml
pipelines:
  default:
    - step:
        services:
          - redis
          - mysql
          - docker
        script:
          - echo "This step is only allowed to consume 2048 MB of memory"
          - echo "Services are consuming the rest. docker 512 MB, redis 512 MB, mysql 1024 MB"
definitions:
  services:
    redis:
      image: redis:3.2
      memory: 512
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1 GB to 512 MB
    mysql:
      image: mysql:5.7
      # memory: 1024  # default value
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
```
Use a service in a pipeline
If a service has been defined in the 'definitions' section of the bitbucket-pipelines.yml file, you can reference that service in any of your pipeline steps.
For example, the following causes the redis service to run with the step:
```yaml
pipelines:
  default:
    - step:
        image: node
        script:
          - npm install
          - npm test
        services:
          - redis
```
Use a private image
You can define a service that has restricted access as follows:
```yaml
definitions:
  services:
    redis:
      image:
        name: redis:3.2
        username: email@example.com
        password: $DOCKER_PASSWORD
```
For a more complete example of using Docker images from different registries and in different formats, see Use Docker images as build environments.
Caveats and limitations
Services in Pipelines have the following limitations:
- Maximum of 5 service containers per step
- Memory limits as described above
- No REST API for accessing services and logs under pipeline results
- No mechanism to wait for service startup
If you want to run a larger number of small services, it would be better to use docker run or docker-compose.
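Because there is no built-in way to wait for service startup, a common workaround is to poll the service's port from the build script before running tests. A sketch, assuming netcat (nc) is available in the build image:

```yaml
pipelines:
  default:
    - step:
        image: node
        services:
          - mysql
        script:
          # Poll until MySQL accepts TCP connections on localhost:3306;
          # give up after roughly 60 seconds.
          - for i in $(seq 1 30); do nc -z localhost 3306 && break; sleep 2; done
          - npm test
definitions:
  services:
    mysql:
      image: mysql:5.7
      variables:
        MYSQL_DATABASE: my-db
        MYSQL_ROOT_PASSWORD: $password
```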
There are also some reserved ports which can't be used by services.