Debug your pipelines locally with Docker
You can test your Bitbucket Pipelines build locally with Docker. This can be helpful to check whether your Docker image is suitable, or if you are having memory issues in Pipelines when you try to build.
This guide shows three levels of testing:
- test building your container
- test running your container
- test running commands inside your container
More information can be found in Docker's documentation.
Before you begin
Prepare your local environment. Make sure you have a local copy of your Bitbucket repository with your bitbucket-pipelines.yml file ready.
Example
In this scenario, we'll be testing the following bitbucket-pipelines.yml file:
```yaml
# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:2.7

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py
```
You can check your bitbucket-pipelines.yml file with our online validator.
Step 1: Install Docker on your machine
The installation procedure differs depending on the operating system that you want to use. Follow the installation guidelines provided by Docker: https://docs.docker.com/engine/installation/
Once you've installed Docker, go to the terminal and run:
$ docker -v
If Docker is installed correctly, the command returns the version details.
Once your Docker installation is in place, you can build your own custom image, or use an existing image (for example the ones downloadable from Docker Hub).
Step 2: Test building a custom Docker image
Building your own image presumes that you already have a Dockerfile you've created to define your container. If you don't have a Dockerfile, you might find this run-through of creating an image useful, or Docker's official Get Started guide.
$ docker build --memory=1g --memory-swap=1g -t account/imageName:tag -f my.dockerfile .
where:
| Option | Description |
|---|---|
| `docker build` | Specifies that you wish to build a Docker image. |
| `--memory=1g --memory-swap=1g` | Builds the container with 1 GB of memory and no swap space (swap is calculated as swap value minus memory value). This memory amount simulates the memory restrictions in Pipelines for a service container (both flags are needed). Set the memory limit when debugging locally to replicate Pipelines as closely as possible, and so discover whether you are hitting Pipelines memory limits. Many issues with failing pipelines that pass locally are due to memory constraints. Read more about memory management in the Docker docs. |
| `-t account/imageName:tag` | Creates an image for the account with the name provided, with the image name provided, and with the tag provided. |
| `-f my.dockerfile` | Specifies the name of the Dockerfile to use when building the image. |
| `.` | Specifies the directory (here, the current one) to use as the root of the Docker build context. |
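If you don't yet have a Dockerfile, a minimal sketch for the Python example in this guide might look like the following. The file name `my.dockerfile` and the installed package are illustrative assumptions, not requirements:

```dockerfile
# my.dockerfile -- hypothetical custom build image, based on the
# python:2.7 image used elsewhere in this guide
FROM python:2.7

# Pre-install anything your pipeline needs, e.g. the scipy package
# installed later in this guide's example script
RUN pip install scipy
```

You would then build it with the `docker build` command shown above.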
Step 3: Test running a Docker container
Once you have a built image, you can test running it as a container. We are going to use a python 2.7 image from Docker Hub for this example:
```shell
$ docker run -it \
    --volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo \
    --workdir="/localDebugRepo" \
    --memory=4g --memory-swap=4g --memory-swappiness=0 \
    --entrypoint=/bin/bash \
    python:2.7
```
where:
| Option | Description |
|---|---|
| `docker run -it` | Runs a Docker container with a TTY and with STDIN open. This means that the container opens in interactive mode and you can run commands in it. |
| `--volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo` | Mounts the local directory `localDebugRepo` into the container at `/localDebugRepo`. Note: any changes to the repository content that occur within the container are reflected in the local clone of your repository. |
| `--workdir="/localDebugRepo"` | Sets the directory in which you want to start. This is the directory in which you run the commands. By default, it's the root directory of the container. |
| `--memory=4g --memory-swap=4g --memory-swappiness=0` | Runs the container with 4 GB of memory and no swap space, which simulates the memory restrictions in Pipelines for a build container (all 3 flags are needed). Set the memory limit when debugging locally to replicate Pipelines as closely as possible, and so discover whether you are hitting Pipelines memory limits. Many issues with failing pipelines that pass locally are due to memory constraints. Read more about memory management in the Docker docs. |
| `--entrypoint=/bin/bash` | Starts a bash prompt. |
| `python:2.7` | The Docker image that you want to run. You probably want to use the image that you specified in your bitbucket-pipelines.yml file. |
If you need to pass environment variables into your container, add the `-e` switch to the command above (anywhere before the image name), for example `-e "VAR1=hello" -e "VAR2=world"`, or pass a file with `--env-file=env-vars.txt`. Learn more.
Hopefully your Docker image is now running, and the prompt shows the container ID, for example:
root@1af123ef2211:/localDebugRepo
This means you're inside your working directory and you can start executing commands.
Testing with build services
If your build would normally use services, for example MySQL, you can use separate containers to test this locally, too.
To use services, start the service container before your main container.
For example with MySQL:
```shell
docker run --name my-mysql-name \
  -e MYSQL_DATABASE='pipelines' \
  -e MYSQL_RANDOM_ROOT_PASSWORD='yes' \
  -e MYSQL_USER='test_user' \
  -e MYSQL_PASSWORD='test_user_password' \
  -d mysql:<tag>
```
Then, when you run your main container, link it to the service container using the `--link` option.
The example command in Step 3 would become:
```shell
docker run -it --link my-mysql-name:mysql \
    --volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo \
    --workdir="/localDebugRepo" \
    --memory=4g --memory-swap=4g --memory-swappiness=0 \
    --entrypoint=/bin/bash \
    python:2.7
```
Step 4: Test your script in your local setup
After getting your container built and running, you can run the commands you've listed in your pipelines script.
If you find any problems you can debug them locally, and once you've got them working well, update your bitbucket-pipelines.yml to match.
Let's look at an example to see this in practice:
Example
At this stage, we have a python:2.7 container open and we're inside the repository directory.
From here we can:
- run individual commands from the bitbucket-pipelines.yml file to test them
- configure tools inside the container
We'll be testing the bitbucket-pipelines.yml file that we mentioned at the beginning of this guide.
Our first step in the script is testing the python version:
$ python --version
Output looks good:
Python 2.7.11
Now we'll run the python script:
$ python myScript.py
Note: This example works only if you have the myScript.py script in the repo. Our script contains the version command we ran above.
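The contents of myScript.py aren't shown in this guide; since the note above only says it contains the version command, a hypothetical stand-in (an assumption, not the real script) could be created and run like this:

```shell
# Hypothetical myScript.py: the guide only says it "contains the version
# command we ran above", so this stand-in just prints the interpreter version.
cat > myScript.py <<'EOF'
import sys
# print() with a single argument works under both Python 2.7 and 3,
# so this sketch runs either way
print(sys.version.split()[0])
EOF

# Inside the python:2.7 container you would run: python myScript.py
python3 myScript.py
```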
Again, the output seems to be error free:
Python 2.7.11
Then we might configure things inside the container:
$ pip install scipy
Output:
Collecting scipy
  Downloading scipy-0.17.1-cp27-cp27mu-manylinux1_x86_64.whl (39.5MB)
    100% |████████████████████████████████| 39.5MB 34kB/s
Installing collected packages: scipy
Successfully installed scipy-0.17.1
As this runs well, we can add it to our bitbucket-pipelines.yml file and commit the changes, confident that the pipeline will run error free.
```yaml
# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:2.7

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py
          - pip install scipy
```