Debug your pipelines locally with Docker


You can test your Bitbucket Pipelines build locally with Docker. This can be helpful to check whether your Docker image is suitable, or if you are having memory issues in Pipelines when you try to build.

This guide shows three levels of testing:

  • test building your container
  • test running your container
  • test running commands inside your container


More information can be found in Docker's documentation.

Before you begin

Prepare your local environment. Make sure you have a local copy of your Bitbucket repository with your bitbucket-pipelines.yml file ready.

Show me how to clone a repo

In short:

$ cd /Users/myUserName/code

$ git clone git@bitbucket.org:myBBUserName/localDebugRepo.git

For the detailed guidelines, see Clone a repository.


Example

In this scenario, we'll be testing the following bitbucket-pipelines.yml file:

# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:2.7
 
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py


You can check your bitbucket-pipelines.yml file with our online validator.
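If you want to follow along, you can recreate the example file locally before pasting it into the validator; this is just a convenience sketch:

```shell
# Recreate the example pipeline file locally (convenience sketch).
cat > bitbucket-pipelines.yml <<'EOF'
image: python:2.7

pipelines:
  default:
    - step:
        script:
          - python --version
          - python myScript.py
EOF

# Quick sanity check before pasting into the online validator:
grep -q '^image: python:2.7' bitbucket-pipelines.yml && echo "image line present"
```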


Step 1: Install Docker on your machine

The installation procedure differs depending on the operating system that you want to use. Follow the installation guidelines provided by Docker: https://docs.docker.com/engine/installation/

Once you've installed Docker, go to the terminal and run:

$ docker -v

If Docker is installed correctly, the command returns the version details.


Once your Docker installation is in place, you can build your own custom image, or use an existing image (for example the ones downloadable from Docker Hub).

Step 2: Test building a custom Docker image

Building your own image presumes that you already have a Dockerfile that defines your container. If you don't have one, you might find this run-through of creating an image useful, or Docker's official Get Started guide.

$ docker build --memory=1g --memory-swap=1g -t accountName/imageName:tag -f my.dockerfile .


where:

docker build

Specifies you wish to build a docker image


--memory=1g --memory-swap=1g

Builds the image with a 1 GB memory limit and no swap space, which simulates the memory restrictions in Pipelines (both flags are needed; setting --memory-swap equal to --memory disables swap).

Set the memory limit when debugging locally to replicate Pipelines as closely as possible and so discover whether you are hitting Pipelines memory limits. Many issues with failing pipelines that pass locally are due to memory constraints.

Read more about memory management in the Docker docs.

macOS (OS X)

Use the docker stats command to check the actual memory limit of the container.

You can modify the default memory settings from the Docker icon in the menu bar (Preferences > Advanced).

-t accountName/imageName:tag

Creates an image for the account with the name provided, with the image name provided and with the tag provided.

-f my.dockerfile

Specifies the name of the Dockerfile to use when building the image.

.

Specifies the directory (here, the current working directory) to use as the root of the Docker build context.
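If you don't yet have a Dockerfile to try this with, here is a minimal sketch (hypothetical, based on the python:2.7 image used in this guide); the commented build command applies the memory caps discussed above:

```shell
# A minimal Dockerfile sketch (hypothetical) based on the python:2.7 image.
cat > my.dockerfile <<'EOF'
FROM python:2.7
RUN pip install scipy
EOF

# Then build it with the Pipelines-like memory caps (requires Docker running):
#   docker build --memory=1g --memory-swap=1g -t accountName/imageName:tag -f my.dockerfile .
echo "wrote $(grep -c '^' my.dockerfile)-line my.dockerfile"
```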

Step 3: Test running a Docker container

Once you have a container image, you can test running it. We'll use the python:2.7 image from Docker Hub for this example:

$ docker run -it --volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo --workdir="/localDebugRepo" --memory=4g --memory-swap=4g --memory-swappiness=0 --entrypoint=/bin/bash python:2.7


where:

docker run -it

Runs a Docker container with a TTY and with STDIN open, which means the container opens in interactive mode and you can run commands in it.

--volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo

Mounts the local directory called /Users/myUserName/code/localDebugRepo inside the container as /localDebugRepo.

Note

Any changes to the repository content that occur within the container are reflected in the local clone of your repository.


The command runs with default settings, which grant the container read and write permissions to the repository. You can limit access to the local directory to read-only by adding :ro at the end of the volume option:

--volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo:ro

--workdir="/localDebugRepo"

Sets the directory in which you want to start. This is the directory in which you run the commands. By default, it's the root directory of the container.

--memory=4g --memory-swap=4g --memory-swappiness=0

Runs the container with 4 GB of memory and no swap space, which simulates the memory restrictions in Pipelines (all 3 flags are needed).

Set the memory limit when debugging locally to replicate Pipelines as closely as possible and so discover whether you are hitting Pipelines memory limits. Many issues with failing pipelines that pass locally are due to memory constraints.

Read more about memory management in the Docker docs.

macOS (OS X)

Use the docker stats command to check the actual memory limit of the container.

You can modify the default memory settings from the Docker icon in the menu bar (Preferences > Advanced).

python:2.7

The Docker image that you want to run. You probably want to use the image that you specified in your bitbucket-pipelines.yml file. For more information, see Use Docker images as build environments.

--entrypoint=/bin/bash

Overrides the image's default entrypoint and starts a bash prompt instead.

If you need to pass environment variables into your container you can use the -e switch, for example adding -e "VAR1=hello" -e "VAR2=world" to the above command before -it, or use a file --env-file=env-vars.txt. Learn more.
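For example, a file for the --env-file option is just KEY=value lines, one per variable (the file name and variables below are illustrative):

```shell
# Create a variables file for --env-file (names and values are illustrative).
cat > env-vars.txt <<'EOF'
VAR1=hello
VAR2=world
EOF

# Usage (requires Docker): docker run --env-file=env-vars.txt ... python:2.7
cat env-vars.txt
```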


Hopefully your Docker image is now running and you can see the container ID in the shell prompt, for example:

root@1af123ef2211:/localDebugRepo

This means you're inside of your working directory and you can start executing commands.


Testing with build services

If your build would normally use services, for example MySQL, you can use separate containers to test this locally, too.

To use services, start the service container before your main container.

For example with MySQL:

docker run --name my-mysql-name \
  -e MYSQL_DATABASE=pipelines \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  -e MYSQL_USER=test_user \
  -e MYSQL_PASSWORD=test_user_password \
  -d mysql:<tag>


Then, when you are running your main container, make sure to link it to the service container, using the --link option.

The example command in Step 3 would become:

docker run -it --link my-mysql-name:mysql --volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo --workdir="/localDebugRepo" --memory=4g --memory-swap=4g --memory-swappiness=0 --entrypoint=/bin/bash python:2.7
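Putting the two commands together, a small helper script can start both containers in order. This is a hypothetical sketch: mysql:5.7 is just an example tag, and the volume path assumes your repository is the current directory.

```shell
# Hypothetical helper that starts the MySQL service container, then the
# linked build container. mysql:5.7 is an example tag; pick the one you need.
cat > start-debug.sh <<'EOF'
#!/bin/sh
docker run --name my-mysql-name \
  -e MYSQL_DATABASE=pipelines \
  -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
  -e MYSQL_USER=test_user \
  -e MYSQL_PASSWORD=test_user_password \
  -d mysql:5.7

docker run -it --link my-mysql-name:mysql \
  --volume="$PWD":/localDebugRepo --workdir=/localDebugRepo \
  --memory=4g --memory-swap=4g --memory-swappiness=0 \
  --entrypoint=/bin/bash python:2.7
EOF
chmod +x start-debug.sh
grep -c 'docker run' start-debug.sh
```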

Step 4: Test your script in your local setup

After getting your container built and running, you can run the commands you've listed in your pipelines script.

If you find any problems you can debug them locally, and once you've got them working well, update your bitbucket-pipelines.yml to match.

Let's look at an example to see this in practice:

Example

At this stage, we have a python:2.7 container open and we're inside the repository directory.

From here we can:

  • run individual commands from the bitbucket-pipelines.yml to test them

  • configure tools inside the container

We'll be testing the bitbucket-pipelines.yml file that we mentioned at the beginning of this guide.

Show me the file again
bitbucket-pipelines.yml
# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:2.7
 
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py


Our first step in the script is testing the python version:

$ python --version

Output looks good:

Python 2.7.11


Now we'll run the python script:


$ python myScript.py

Note: This example works only if you have the myScript.py script in the repo. Our script contains the version command we ran above.

Again, the output seems to be error free:

Python 2.7.11
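If you're following along and don't have a myScript.py in your repo yet, a minimal stand-in (hypothetical) that reproduces the version output looks like this:

```shell
# Hypothetical myScript.py that just reports the interpreter version,
# matching the output shown above.
cat > myScript.py <<'EOF'
import sys
print("Python %d.%d.%d" % tuple(sys.version_info[:3]))
EOF

# Inside the python:2.7 container you would run: python myScript.py
grep -q 'version_info' myScript.py && echo "myScript.py written"
```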


Then we might configure things inside the container:


$ pip install scipy

Output:

Collecting scipy
  Downloading scipy-0.17.1-cp27-cp27mu-manylinux1_x86_64.whl (39.5MB)
    100% |████████████████████████████████| 39.5MB 34kB/s
Installing collected packages: scipy
Successfully installed scipy-0.17.1


As this runs well, we can add it to our bitbucket-pipelines.yml file and commit the changes, confident that the pipeline will run error free.

bitbucket-pipelines.yml
# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:2.7
 
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py
          - pip install scipy
Last modified on Jul 4, 2018
