You can define your build pipelines by using a selection of the following keywords. They are arranged in this table in the order in which you might use them, with highlighting to give a rough guide to logical groupings.
|pipelines||Contains all your pipeline definitions.|
|default||Contains the pipeline definition for all branches that don't match a pipeline definition in other sections.|
|branches||Contains pipeline definitions for specific branches.|
|tags||Contains pipeline definitions for specific Git tags and annotated tags.|
|bookmarks||Contains pipeline definitions for specific Mercurial bookmarks.|
|custom||Contains pipelines that can be triggered manually from the Bitbucket Cloud GUI.|
|parallel||Contains steps to run concurrently.|
|step||Defines a build execution unit, specifying the commands that are executed and the settings of a unique container.|
|name||Defines a name for a step to make it easier to see what each step is doing in the display.|
|image||The Docker image to use for a step. If you don't specify the image, your pipelines run in the default Bitbucket image. This can also be defined globally to use the same image type for every step.|
|trigger||Specifies whether the step is manual or automatic. If you don't specify a trigger type, it defaults to automatic.|
|deployment||Sets the type of environment for your deployment step. Valid values are: 'test', 'staging', or 'production'.|
|size||Used to provision extra resources for pipelines and steps. Valid values are: '1x' or '2x'.|
|script||Contains the list of commands that are executed to perform the build.|
|artifacts||Defines files that are produced by a step, such as reports and JAR files, that you want to share with a following step.|
|options||Contains global settings that apply to all your pipelines.|
|max-time||The maximum time (in minutes) a step can execute for. Use a whole number greater than 0 and less than 120. If you don't specify a max-time, it defaults to 120.|
|clone||Contains settings for when we clone your repository into a container.|
|lfs||Enables the download of LFS files in your clone. Defaults to false.|
|depth||Defines the depth of Git clones for all pipelines. Use a whole number greater than zero to specify the depth, or use 'full' for a full clone. Note: This keyword is supported only for Git repositories.|
|definitions||Defines resources, such as services and custom caches, that you want to use elsewhere in your pipeline configurations.|
|services||Define services you would like to use with your build, which are run in separate but linked containers.|
|caches||Define dependencies to cache on our servers to reduce load time.|
The start of your pipeline definitions. You must define your build pipelines using at least one of the following:
- default (for all branches that don't match any of the following).
- branches (Git and Mercurial)
- tags (Git)
- bookmarks (Mercurial)
The default pipeline runs on every push to the repository, unless a branch-specific pipeline is defined. You can define a branch pipeline in the branches section.
Note: The default pipeline doesn't run on tags or bookmarks.
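For example, a minimal default pipeline might look like the following sketch (the image and commands are illustrative):

```yaml
image: node:4.6.0

pipelines:
  default:          # runs on every push to a branch without its own pipeline
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
```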
Defines a container for all branch-specific build pipelines. The names or expressions in this section are matched against:
- branches in your Git repository
- named branches in your Mercurial repository
You can use glob patterns for handling the branch names.
See Branch workflows for more information about configuring pipelines to build repo branches.
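As a sketch, a branches section might use exact names and glob patterns like this (the branch names and commands are illustrative):

```yaml
pipelines:
  branches:
    master:            # runs on commits to master
      - step:
          script:
            - echo "Building master"
    feature/*:         # glob pattern matching any branch under feature/
      - step:
          script:
            - echo "Building a feature branch"
```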
Defines a container for all tag-specific build pipelines. The names or expressions in this section are matched against tags and annotated tags in your Git repository. You can use glob patterns for handling the tag names.
```yaml
image: node:4.6.0

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  tags:                       # add the 'tags' section
    release-*:                # specify the tag
      - step:                 # define the build pipeline for the tag
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
  branches:
    staging:
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"
```
Serves as a container for all bookmark-specific build pipelines. The names or expressions in this section are matched against bookmarks in your Mercurial repository. You can use glob patterns for handling the bookmark names.
```yaml
image: node:4.6.0

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  bookmarks:                  # add the 'bookmarks' section
    release-*:                # specify the bookmark
      - step:                 # define the build pipeline for the bookmark
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
  branches:
    staging:
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"
```
Defines a container for pipelines that can only be triggered manually or scheduled from the Bitbucket Cloud interface.
```yaml
image: node:4.6.0

pipelines:
  custom:                 # Pipelines that are triggered manually
    sonar:                # The name that is displayed in the list in the Bitbucket Cloud GUI
      - step:
          script:
            - echo "Manual triggers for Sonar are awesome!"
    deployment-to-prod:   # Another display name
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:               # Pipelines that run automatically on a commit to a branch
    staging:
      - step:
          script:
            - echo "Automated pipelines are cool too."
```
With a configuration like the one above, you should see the following pipelines in the 'Run pipeline' dialog in Bitbucket Cloud:
For more information, see Run pipelines manually.
Parallel steps enable you to build and test faster by running a set of steps at the same time.
The total number of build minutes used by a pipeline will not change if you make the steps parallel, but you'll be able to see the results sooner.
There is a limit of 10 for the total number of steps you can run in a pipeline, regardless of whether they are running in parallel or serial.
Indent the steps to define which steps run concurrently:
```yaml
pipelines:
  default:
    - step:               # non-parallel step
        name: Build
        script:
          - ./build.sh
    - parallel:           # these 2 steps will run in parallel
        - step:
            name: Integration 1
            script:
              - ./integration-tests.sh --batch 1
        - step:
            name: Integration 2
            script:
              - ./integration-tests.sh --batch 2
    - step:               # non-parallel step
        script:
          - ./deploy.sh
```
Learn more about parallel steps.
Defines a build execution unit. Steps are executed in the order that they appear in the bitbucket-pipelines.yml file. You can use up to 10 steps in a pipeline.
Each step in your pipeline starts a separate Docker container to run the commands configured in the script. Each step can be configured to:
- Use a different Docker image.
- Configure a custom max-time.
- Use specific caches and services.
- Produce artifacts that subsequent steps can consume.
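A single step combining several of these options might look like the following sketch (the image, cache, and service names are illustrative; the 'redis' service assumes a matching entry under definitions):

```yaml
pipelines:
  default:
    - step:
        name: Build with extras
        image: node:8.6          # step-level image override
        max-time: 30             # custom timeout in minutes
        caches:
          - node                 # reuse downloaded dependencies between runs
        services:
          - redis                # assumes a 'redis' service under definitions
        script:
          - npm install
          - npm test
        artifacts:
          - dist/**              # share build output with later steps
```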
Steps can be configured to wait for a manual trigger before running. To define a step as manual, add trigger: manual to the step in your bitbucket-pipelines.yml file. Manual steps:
- Can only be executed in the order that they are configured. You cannot skip a manual step.
- Can only be executed if the previous step has successfully completed.
- Can only be triggered by users with "write" access to the repository.
- Are triggered through the Pipelines web interface.
If your build uses both manual steps and artifacts, the artifacts are stored for 7 days following the execution of the step that produced them. After this time, the artifacts expire and any manual steps in the pipeline can no longer be executed. For more information, see Manual steps and artifact expiry.
Note: you can't configure the first step of the pipeline as a manual step.
You can add a name to a step to make it clear in any displays, or reports, which step is being referred to.
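For example, a step name appears in the Pipelines UI in place of a generic label (the name here is illustrative):

```yaml
- step:
    name: Build and test    # shown in the Pipelines display for this step
    script:
      - npm test
```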
Bitbucket Pipelines uses Docker containers to run your builds.
- You can use the default image (atlassian/default-image:latest) provided by Bitbucket, or define a custom image. You can specify any public or private Docker image that isn't hosted on a private network.
- You can define images at the global or step level. You can't define an image at the branch level.
To specify an image, use image: <image-name>:<tag>.
For more information about using and creating images, see Use Docker images as build environments.
|image: openjdk:latest||Uses the image with the latest openjdk version|
|image: openjdk:8||Uses the image with openjdk version 8|
|image: nodesource/node:iojs-2.0.2||Uses the non-official node image with version iojs-2.0.2|
```yaml
image: openjdk                            # this image will be used by all steps unless overridden

pipelines:
  default:
    - step:
        image: nodesource/node:iojs-2.0.2 # override the global image for this step
        script:
          - npm install
          - npm test
    - step:                               # this step will use the global image
        script:
          - npm install
          - npm test
```
Specifies whether a step will run automatically or only after being manually triggered to run by a user. You can define the trigger type as manual or automatic. If the trigger type is not defined, the step defaults to running automatically.
```yaml
pipelines:
  default:
    - step:
        name: Build and test
        image: node:8.6
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py
```
Sets the type of environment for your deployment step, used in the Deployments dashboard.
Valid values are 'test', 'staging', or 'production'.
An example which defines a step as deploying to the 'test' environment:
```yaml
- step:
    name: Deploy to test
    image: aws-cli:1.0
    deployment: test
    script:
      - python deploy.py test
```
You can allocate additional resources to a step, or to the whole pipeline. By specifying a size of '2x', your pipeline has double the resources available (e.g. 4 GB memory → 8 GB memory).
At this time, valid sizes are '1x' and '2x'.
2x pipelines will use twice the number of build minutes.
Overriding the size of a single step
```yaml
pipelines:
  default:
    - step:
        script:
          - echo "All good things..."
    - step:
        size: 2x              # Double resources applied.
        script:
          - echo "Come to those who wait."
```
Increasing the resources for an entire pipeline
Using the global size option, all steps inherit the '2x' size.
```yaml
options:
  size: 2x

pipelines:
  default:
    - step:
        name: Clone with more memory
        script:
          - echo "Clone all the things!"
```
Contains a list of commands that are executed in sequence. Scripts are executed in the order in which they appear in a step. We recommend that you move large scripts to a separate script file and call it from bitbucket-pipelines.yml.
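For example, rather than listing many inline commands, you might call a script file committed to your repository (the file path here is illustrative):

```yaml
pipelines:
  default:
    - step:
        script:
          - chmod +x ./ci/build.sh   # ensure the script file is executable
          - ./ci/build.sh            # hypothetical script file in your repo
```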
Defines files to be shared from one step to a later step in your pipeline. Artifacts can be defined using glob patterns.
An example showing how to define artifacts:
```yaml
pipelines:
  default:
    - step:
        name: Build and test
        image: node:8.5.0
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy to production
        image: python:3.5.1
        script:
          - python deploy-to-production.py
```
For more information, see Using artifacts in steps.
Contains global settings that apply to all your pipelines. Currently the only option to define is max-time.
You can define the maximum time a step can execute for (in minutes) at the global level or step level. Use a whole number greater than 0 and less than 120.
If you don't specify a max-time, it defaults to 120.
```yaml
options:
  max-time: 60

pipelines:
  default:
    - step:
        name: Sleeping step
        script:
          - sleep 120m          # This step will timeout after 60 minutes
    - step:
        name: quick step
        max-time: 5
        script:
          - sleep 120m          # this step will timeout after 5 minutes
```
Contains settings for when we clone your repository into a container. Settings here include:
- lfs - Support for Git lfs
- depth - the depth of the Git clone.
A global setting that specifies that Git LFS files should be downloaded with the clone.
Note: This keyword is supported only for Git repositories.
```yaml
clone:
  lfs: true

pipelines:
  default:
    - step:
        name: Clone and download
        script:
          - echo "Clone and download my LFS files!"
```
You can define the depth of clones at the global level. Use a whole number greater than zero to specify the depth, or use full to specify a full clone.
If you don't specify the Git clone depth, it defaults to 50.
Note: This keyword is supported only for Git repositories.
```yaml
clone:
  depth: 5       # include the last five commits

pipelines:
  default:
    - step:
        name: Cloning
        script:
          - echo "Clone all the things!"
```
Define resources used elsewhere in your pipeline configuration. Resources can include:
- services
- caches – see Caching dependencies
Rather than trying to build all the resources you might need into one large image, we can spin up separate Docker containers for services. This tends to make builds quicker, and makes it very easy to change a single service without having to rebuild your whole image.
So if we wanted a redis service container we could add:
```yaml
definitions:
  services:
    redis:
      image: redis
```
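A step can then reference the service by name, assuming a redis entry under definitions as above; the service container is started alongside the build container:

```yaml
pipelines:
  default:
    - step:
        script:
          - redis-cli -h localhost ping   # the service is reachable on localhost
        services:
          - redis
```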
Re-downloading dependencies from the internet for each step of a build can take a lot of time. With a cache, dependencies are downloaded once to our servers and then loaded locally into the build each time.
An example showing how to define a custom bundler cache:
```yaml
definitions:
  caches:
    bundler: vendor/bundle
```
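A step can then reference the custom cache by name, assuming a bundler entry under definitions as above:

```yaml
pipelines:
  default:
    - step:
        caches:
          - bundler              # restores and saves vendor/bundle between runs
        script:
          - bundle install --path vendor/bundle
```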
Glob patterns cheat sheet
Glob patterns don't allow any expression to start with a star. Every expression that starts with a star must be put in quotes.
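For example, a branch pattern that starts with a star must be quoted (the pattern and command are illustrative):

```yaml
pipelines:
  branches:
    '*-release':       # quoted because the glob starts with a star
      - step:
          script:
            - echo "Building a release branch"
```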