Debug pipelines locally with Docker


If you are troubleshooting a failed Bitbucket Pipelines build locally, see the Troubleshoot Failed Bitbucket Pipeline article.


You can test your Bitbucket Pipelines build locally with Docker. This can be helpful to check whether your Docker image is suitable, or if you are having memory issues in Pipelines when you try to build.

This guide covers three levels of testing:

  • test building your container

  • test running your container

  • test running commands inside your container

More information can be found in Docker's documentation.

Before you begin

Prepare your local environment. Make sure you have a local copy of your Bitbucket repository with your bitbucket-pipelines.yml file ready.

How to clone a repo

In short:

$ cd /Users/myUserName/code

$ git clone git@bitbucket.org:myBBUserName/localDebugRepo.git

In the above command, replace myBBUserName with your workspace ID and localDebugRepo with your repository slug. For detailed guidelines, see Clone a repository.

If you are emulating your Pipelines build locally, run a git reset against your Pipelines build commit hash. This makes sure your local build runs against the exact same commit as your pipeline:

$ cd localDebugRepo

$ git reset --hard 58ab3379d12cd394d0cca78d165d3b42625b0750

The commit hash can be found in the Pipelines build’s ‘Build step’.
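To confirm that your local checkout now matches the pipeline's commit, you can print the current HEAD:

$ git log -1 --format=%H
58ab3379d12cd394d0cca78d165d3b42625b0750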

Example

In this scenario, we'll be testing the following bitbucket-pipelines.yml file:

# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:3
 
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py

You can check your bitbucket-pipelines.yml file with our online validator. Once your local repository is ready, proceed to the steps below.

Step 1: Install Docker on your machine

The installation procedure differs depending on the operating system that you want to use. Follow the installation guidelines provided by Docker.

Once you've installed Docker, go to the terminal and run:

$ docker version

If Docker is installed, the command returns the version details, confirming that the installation is correct. Check that your local Docker version matches the version your pipelines use when they run in Bitbucket; a mismatch can cause compatibility issues.
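For example, to print just the client and server version numbers for a quick comparison:

$ docker version --format '{{.Client.Version}} {{.Server.Version}}'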

Once your Docker installation is in place, you can build your own custom image or use an existing image (for example, one from Docker Hub).

Step 2: Build your Docker image locally using a Dockerfile

We recommend building your base Docker image locally even if you’re not using a custom Docker image. We also highly recommend copying your local repository folder into the base Docker image and setting it as the working directory.

If you’re currently inside your local repository folder, go up one folder level and create a Dockerfile (for example, my.dockerfile, or any other Dockerfile name).

cd ..
touch my.dockerfile

After that, set up your Dockerfile. Your my.dockerfile should look something like this:

FROM python:3
WORKDIR /localDebugRepo
COPY ./localDebugRepo /localDebugRepo

where:

FROM python:3

The Docker image that you want to run. You probably want to use the image that you specified in your bitbucket-pipelines.yml file. For more information, see Use Docker images as build environments.

WORKDIR /localDebugRepo

Sets the directory in which you start and in which commands run. By default, it's the root directory of the container.

COPY ./localDebugRepo /localDebugRepo

Copies files from the host machine to the file system of the container.

For more information about Dockerfiles, you might find a run-through of creating an image useful, along with Docker's official Get started guide and the Dockerfile reference.

Once done, you can now build your Docker image by running the command below:

$ docker build --memory=1g --memory-swap=1g -t account/imageName:tag -f my.dockerfile .

where:

docker build

Specifies that you want to build a Docker image.


--memory=1g --memory-swap=1g

Runs the build with 1 GB of memory and no swap space (the available swap is the --memory-swap value minus the --memory value). This simulates the memory restriction that Pipelines places on a service container (both flags are needed).

Set the memory limit when debugging locally to replicate Pipelines as closely as possible and to discover whether you are hitting its memory limits. Many pipelines that pass locally fail in Bitbucket because of memory constraints.

Read more about memory management in the Docker docs.

macOS (OS X)

Use the docker stats command to check the actual memory limit of the container.

You can modify the default memory settings in Docker Desktop under Settings (gear icon) > Resources > Advanced.

-t account/imageName:tag

Tags the image with the account name, image name, and tag provided.

-f my.dockerfile

Specifies the Dockerfile to use when building the image.

.

Specifies the directory (here, the current directory) to use as the root of the Docker build context.

If you didn't define an image in your Pipelines file, you will be using the Atlassian default image.
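Once the build finishes, you can confirm that the image exists locally by listing it (using the same account/imageName placeholder as above):

$ docker images account/imageName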

Step 3: Test running a Docker container

Once your base Docker image is built, you can now run a container using that built Docker image.

$ docker run -it --memory=4g --memory-swap=4g --memory-swappiness=0 --cpus=4 --entrypoint=/bin/bash account/imageName:tag

where:

docker run -it

Runs a Docker container with a TTY and with STDIN open. This means that the container opens in interactive mode and you can run commands in it.

--memory=4g --memory-swap=4g --memory-swappiness=0

Runs the container with 4 GB of memory and no swap space, which simulates the memory restrictions in Pipelines for a build container (all 3 flags are needed).

Set the memory limit when debugging locally to replicate Pipelines as closely as possible and to discover whether you are hitting its memory limits. Many pipelines that pass locally fail in Bitbucket because of memory constraints. (A quick way to confirm the limit from inside the container is shown below.)

Read more about memory management in the Docker docs.

For macOS (OS X):

Use the docker stats command to check the actual memory limit of the container.

You can modify the default memory settings by opening Docker Desktop and going to Settings (gear icon) >  Resources > Advanced.

--cpus=4

Specifies how much of the available CPU resources the container can use. Read more about runtime options in the Docker docs.

account/imageName:tag

The Docker image that you want to run. You probably want to use the image that you specified in your bitbucket-pipelines.yml file. For more information, see Use Docker images as build environments.

--entrypoint=/bin/bash

Starts a bash prompt when the container starts.

If you need to pass environment variables into your container, you can use the -e switch, for example by adding -e "VAR1=hello" -e "VAR2=world" to the above command before -it, or pass a file with --env-file=env-vars.txt.
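For example, using the env-vars.txt file approach (the file contents below are illustrative):

$ printf 'VAR1=hello\nVAR2=world\n' > env-vars.txt
$ docker run --env-file=env-vars.txt -it --entrypoint=/bin/bash account/imageName:tag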

Your container should now be running, and you should see a shell prompt that includes the container ID, for example:

root@1af123ef2211:/localDebugRepo

This means you're in your working directory inside the container and you can start executing commands. If you are emulating your Pipelines build locally, you can execute the same sequence of commands that you have defined in the script section of your bitbucket-pipelines.yml file.
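If you want to double-check that the --memory limit took effect, you can read it from the cgroup filesystem inside the container. The path depends on whether your Docker host uses cgroup v2 or v1:

$ cat /sys/fs/cgroup/memory.max                      # cgroup v2
$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes    # cgroup v1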

Testing with build services

If your build would normally use services, for example, MySQL, you can use separate containers to test this locally too.

To use services, start the service container before your main container, adding the --network=host option so that it uses the host's networking directly.

For example with MySQL:

docker run --network=host --name my-mysql-name \
  -e MYSQL_DATABASE='pipelines' \
  -e MYSQL_RANDOM_ROOT_PASSWORD='yes' \
  -e MYSQL_USER='test_user' \
  -e MYSQL_PASSWORD='test_user_password' \
  -d mysql:<tag>

Then, when you are running your main container, make sure to add the --network=host option as well to link it to the service container.

The example command in Step 3 to run the main container would become:

docker run -it --network=host --memory=4g --memory-swap=4g --memory-swappiness=0 --cpus=4 --entrypoint=/bin/bash account/imageName:tag
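With host networking, the MySQL service is reachable on localhost from the main container, just as services are in Pipelines. As a quick connectivity check (assuming a Debian-based image such as python:3, so the MySQL client can be installed with apt-get), you could run the following inside the main container:

$ apt-get update && apt-get install -y default-mysql-client
$ mysql -h 127.0.0.1 -u test_user -ptest_user_password pipelines -e 'SELECT 1;'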

Step 4: Test your script in your local setup

After your container is built and running, you can run the commands you've listed in your pipeline's script section.
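Once the commands work interactively, you can also run the whole script section non-interactively in one shot, which is closer to how Pipelines executes it (a sketch using the placeholder image name from Step 2, assuming the image has no custom entrypoint):

$ docker run account/imageName:tag /bin/bash -c "python --version && python myScript.py"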

If you find any problems you can debug them locally, and once you've got them working well, update your bitbucket-pipelines.yml to match.

Let's look at an example to see this in practice:

Example

At this stage, we have a python:3 container open and we're inside the repository directory.

From here we can:

  • run individual commands from the bitbucket-pipelines.yml to test them

  • configure tools inside the container

We'll be testing the bitbucket-pipelines.yml file that we mentioned at the beginning of this guide.

bitbucket-pipelines.yml
# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:3
 
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py

Our first step in the script is testing the python version:

$ python --version

The output looks good:

Python 3.11.0

Now we'll run the python script:

$ python myScript.py

This example works only if you have the myScript.py script in the repo. Our script contains the version command we ran above.
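For reference, a minimal myScript.py consistent with the output below might look like this (an illustrative sketch, not necessarily the exact script):

$ cat myScript.py
import sys
print(f"Python {sys.version.split()[0]}")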

Again, the output seems to be error-free:

Python 3.11.0

Then we might configure things inside the container:

$ pip install scipy

Output:

Collecting scipy
  Downloading scipy-1.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (43.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 43.9/43.9 MB 24.1 MB/s eta 0:00:00
Collecting numpy<1.25.0,>=1.18.5
  Downloading numpy-1.23.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.1/17.1 MB 24.9 MB/s eta 0:00:00
Installing collected packages: numpy, scipy
Successfully installed numpy-1.23.3 scipy-1.9.1

As this runs well, we can add it to our bitbucket-pipelines.yml file and commit the changes, confident that the pipeline will run error-free.

bitbucket-pipelines.yml
# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.
image: python:3
 
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - python --version
          - python myScript.py
          - pip install scipy