Run Docker commands in Bitbucket Pipelines
Bitbucket Pipelines allows you to build a Docker image from a Dockerfile in your repository and push it to a Docker registry by running Docker commands within your build pipeline. Dive straight in – the pipeline environment is provided by default and you don't need to customize it!
Enable access to Docker
To enable access to the Docker daemon, you can either add docker as a service on the step (recommended), or add the global option in your bitbucket-pipelines.yml.
Add Docker as a service in your build step (recommended)
pipelines:
  default:
    - step:
        script:
          - ...
        services:
          - docker
Note that Docker does not need to be declared as a service in the definitions section. It is a default service that is provided by Pipelines without a definition.
Add Docker to all build steps in your repository
options:
  docker: true
Note that even if you declare Docker here, it still counts as a service for Pipelines: it has a memory limit of 1 GB and can only run alongside two other services in your build step. This setting is provided for legacy support; we recommend declaring the service at the step level so there's no confusion about how many services you can run in your pipeline.
How it works
Configuring Docker as a service will:
- mount the Docker CLI executable in your build container
- run and provide your build access to a Docker daemon
You can verify this by running docker version:
pipelines:
  default:
    - step:
        script:
          - docker version
        services:
          - docker
You can check your bitbucket-pipelines.yml file with our online validator.
Running Docker commands
Inside your Pipelines script you can run most Docker commands. See the Docker command line reference for information on how to use these commands.
We've had to restrict a few for security reasons, including Docker swarm-related commands, docker run --privileged, docker run --mount, and mapping volumes with a source outside $BITBUCKET_CLONE_DIR.
Using Docker Compose
If you'd like to use Docker Compose in your container, you'll need to install a binary that is compatible with your specified build container.
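For example, a step could download the Compose binary before using it. This is a sketch only: the release version in the URL is an assumption, and you should pick the binary matching your build image's OS and architecture from the Docker Compose releases page.

```yaml
# Sketch: install a standalone docker-compose binary in the step.
# v2.24.5 and the linux-x86_64 binary are assumptions - adjust for your image.
pipelines:
  default:
    - step:
        script:
          - curl -L "https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-linux-x86_64" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          - docker-compose --version
        services:
          - docker
```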
Using an external Docker daemon
If you have configured your build to run commands against your own Docker daemon hosted elsewhere, you can continue to do so. In this case, you should provide your own CLI executable as part of your build image (rather than enabling Docker in Pipelines), so the CLI version is compatible with the daemon version you are running.
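One way to point the CLI at a remote daemon is the standard DOCKER_HOST environment variable. The image name and daemon address below are placeholders, not part of Pipelines:

```yaml
# Sketch, assuming your build image already bundles a docker CLI
# compatible with your daemon. The image name and address are placeholders.
pipelines:
  default:
    - step:
        image: my-image-with-docker-cli
        script:
          - export DOCKER_HOST=tcp://docker.example.com:2376
          - docker version
```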
Docker layer caching
If you have added Docker as a service, you can also add a Docker cache to your steps. Adding the cache can speed up your build by reusing previously built layers and only creating new dynamic layers as required in the step.
pipelines:
  default:
    - step:
        script:
          - docker build ...
        services:
          - docker
        caches:
          - docker # adds docker layer caching
A common use case for the Docker cache is when you are building images. However, if you find that performance slows with the cache enabled, check that you are not invalidating the layers in your Dockerfile.
Docker layer caches have the same limitations and behaviors as regular caches as described on Caching Dependencies.
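A typical way to keep layers cacheable is to copy dependency manifests and install dependencies before copying the rest of the source, so code changes don't invalidate the dependency layer. A sketch (Node.js is just an illustration; the file names are placeholders):

```dockerfile
# Dependency layers first: these only rebuild when the manifests change.
FROM node:20
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Source code last: editing code invalidates only the layers below this line.
COPY . .
CMD ["node", "server.js"]
```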
Docker memory limits
By default, the Docker daemon in Pipelines has a total memory limit of 1024 MB. This allocation includes all containers run via docker run commands, as well as the memory needed to execute docker build commands.
To increase the memory available to Docker, you can change the memory limit for the built-in docker service. The memory parameter is a whole number of megabytes greater than 128 and not larger than the available memory for the step.
In the example below we are giving the docker service twice the default allocation of 1024 MB (2048 MB). Depending on your other services and whether you have configured large builds for extra memory, you can increase this even further (learn more about memory limits).
pipelines:
  default:
    - step:
        script:
          - docker version
        services:
          - docker
definitions:
  services:
    docker:
      memory: 2048
Authenticate when pushing to a registry
To push images to a registry, you need to use docker login to authenticate prior to calling docker push. You should set your username and password using variables.
For example, add this to your pipeline script:
docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
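Put together, a build-and-push step might look like the following sketch. The image name myorg/myapp is a placeholder, and DOCKER_USERNAME and DOCKER_PASSWORD are repository variables you define yourself; BITBUCKET_COMMIT is a default Pipelines variable used here to tag the image.

```yaml
# Sketch: build, authenticate, and push. Placeholder image name "myorg/myapp".
pipelines:
  default:
    - step:
        script:
          - docker build -t myorg/myapp:$BITBUCKET_COMMIT .
          - docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
          - docker push myorg/myapp:$BITBUCKET_COMMIT
        services:
          - docker
```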
Reserved ports
Port restrictions
There are some reserved ports which can't be used:
- 29418