Configure bitbucket-pipelines.yml
The bitbucket-pipelines.yml
file defines your Pipelines build configuration. If you're new to Pipelines, see the getting started guide to learn more.
Basic configuration
With a basic configuration, you can do things like writing scripts to build and deploy your projects and configuring caches to speed up builds. You can also specify different images for each step to manage different dependencies across actions you're performing in your pipeline.
A pipeline is made up of a list of steps, and you can define multiple pipelines in the configuration file. In the example below, you can see a pipeline configured under the default section. The pipeline configuration file can have multiple sections identified by particular keywords.
Before you begin
- The file must contain at least one pipeline section, containing at least one step with at least one script inside it.
- Each step has 4GB of memory available.
- A single pipeline can have up to 100 steps.
- Each step in your pipeline runs a separate Docker container. If you want, you can use different types of containers for each step by selecting different images.
Steps
1. To configure the YAML file, in Bitbucket go to your repo > Pipelines and follow the prompts to create the file. Alternatively, you can configure the file without using Bitbucket's interface.
2. Choose a language.
Note: Pipelines can be configured for building or deploying projects written in any language. See the language guides for details.
3. Choose an image.
Note: You can edit the file directly from the product when you first get to Pipelines, at any time from within your pipeline, or from your repo.
The file must contain at least one pipeline section, containing at least one step with at least one script inside it.
Sections
default - Contains the pipeline definition for all branches that don't match a pipeline definition in other sections.
The default pipeline runs on every push to the repository unless a branch-specific pipeline is defined. You can define a branch pipeline in the branches section.
Note: The default pipeline doesn't run on tags or bookmarks.
branches - Defines a section for all branch-specific build pipelines. The names or expressions in this section are matched against:
- Branches in your Git repository
- Named branches in your Mercurial repository
See Branch workflows for more information about configuring pipelines to build specific branches in your repository.
Check out the glob patterns cheat sheet to define the branch names.
tags - Defines all tag-specific build pipelines. The names or expressions in this section are matched against tags and annotated tags in your Git repository.
Check out the glob patterns cheat sheet to define your tags.
bookmarks - Defines all bookmark-specific build pipelines. The names or expressions in this section are matched against bookmarks in your Mercurial repository.
Check out the glob patterns cheat sheet to define your bookmarks.
pull-requests - A special pipeline that only runs on pull requests initiated from within your repo. It merges the destination branch into your working branch before it runs. Pull requests from a forked repository don't trigger the pipeline. If the merge fails, the pipeline stops.
Important
Pull request pipelines run in addition to any branch and default pipelines that are defined, so if the definitions overlap you may get 2 pipelines running at the same time.
If you already have branches in your configuration and you want them all to run only on pull requests, replace the keyword branches with pull-requests.
Check out the glob patterns cheat sheet to define your pull request patterns.
custom - Defines pipelines that can only be triggered manually or scheduled from the Bitbucket Cloud interface.
Example:
```yaml
image: node:10.15.0

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  tags:                        # add the 'tags' section
    release-*:                 # specify the tag
      - step:                  # define the build pipeline for the tag
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
  branches:
    staging:
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"
```
Advanced configuration
Use the advanced options for running services and running tests in parallel. You can also configure manual steps, set a maximum time for each step, and configure 2x steps to get 8GB of memory.
Global configuration options
Keywords list
variables - [Custom pipelines only] Contains variables that are supplied when a pipeline is launched. To enable the variables, define them under the custom pipeline that you want to enter when you run the pipeline:
name - When the keyword name is in the variables section of your yaml, it defines variables that you can add or update when running a custom pipeline. Pipelines can use the keyword inside a step.
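A minimal sketch of a custom pipeline with variables that can be set at launch time (the pipeline and variable names here are illustrative):

```yaml
pipelines:
  custom:
    deploy-to-region:            # name of this custom pipeline
      - variables:               # list variable names under here
          - name: Username
          - name: Region
      - step:
          script:
            - echo "User $Username is deploying to $Region"
```

When you run this pipeline from the Bitbucket interface, you are prompted to supply values for Username and Region.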
parallel - Parallel steps enable you to build and test faster, by running a set of steps at the same time. The total number of build minutes used by a pipeline will not change if you make the steps parallel, but you'll be able to see the results sooner.
There is a limit of 100 for the total number of steps you can run in a pipeline, regardless of whether they are running in parallel or serial.
Indent the steps to define which steps run concurrently:
Learn more about parallel steps.
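For instance, steps nested under a parallel block run at the same time, while steps outside it run in sequence (the script names here are illustrative):

```yaml
pipelines:
  default:
    - step:                      # runs first, on its own
        name: Build
        script:
          - ./build.sh
    - parallel:                  # these two steps run concurrently
        - step:
            name: Integration tests 1
            script:
              - ./test-batch-1.sh
        - step:
            name: Integration tests 2
            script:
              - ./test-batch-2.sh
```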
step - Defines a build execution unit. Steps are executed in the order that they appear in the bitbucket-pipelines.yml file. You can use up to 100 steps in a pipeline.
Each step in your pipeline will start a separate Docker container to run the commands configured in the script. Each step can be configured to:
- Use a different Docker image.
- Configure a custom max-time.
- Use specific caches and services.
- Produce artifacts that subsequent steps can consume.
- Include its own clone section.
Steps can be configured to wait for a manual trigger before running. To define a step as manual, add trigger: manual to the step in your bitbucket-pipelines.yml file. Manual steps:
- Can only be executed in the order they are configured; you cannot skip a manual step.
- Can only be executed if the previous step has successfully completed.
- Can only be triggered by users with write access to the repository.
- Are triggered through the Pipelines web interface.
If your build uses both manual steps and artifacts, the artifacts are stored for 14 days following the execution of the step that produced them. After this time, the artifacts expire and any manual steps in the pipeline can no longer be executed.
Note: You can't configure the first step of a pipeline as a manual step.
name - Defines a name for a step to make it easier to see what each step is doing in the display.
image - Bitbucket Pipelines uses Docker containers to run your builds.
- You can use the default image (atlassian/default-image:2) provided by Bitbucket or define a custom image. You can specify any public or private Docker image that isn't hosted on a private network.
- You can define images at the global or step level. You can't define an image at the branch level.
To specify an image, use image: <your_account/repository_details>:<tag>
For more information about using and creating images, see Use Docker images as build environments.
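A sketch showing an image defined globally and overridden for a single step (the image names and commands are illustrative):

```yaml
image: node:10.15.0              # global image, used by default

pipelines:
  default:
    - step:
        name: Test JS
        script:
          - npm test             # runs in node:10.15.0
    - step:
        name: Compile Java
        image: openjdk:8         # this step uses a different image
        script:
          - javac Main.java
```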
trigger - Specifies whether a step will run automatically or only after someone manually triggers it. You can define the trigger type as manual or automatic. If the trigger type is not defined, the step defaults to running automatically. The first step cannot be manual. If you want a whole pipeline to run only from a manual trigger, use a custom pipeline.
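A sketch of a pipeline whose deploy step waits for a manual trigger (the scripts are illustrative):

```yaml
pipelines:
  default:
    - step:
        name: Build
        script:
          - npm run build
    - step:
        name: Deploy
        trigger: manual          # waits for someone to run it in the UI
        script:
          - ./deploy.sh
```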
deployment - Sets the type of environment for your deployment step, and it is used in the Deployments dashboard. Valid values are: test, staging, or production.
The following step will display in the test environment in the Deployments view:
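A sketch of such a step (the script and artifact path are illustrative):

```yaml
- step:
    name: Deploy to test
    deployment: test             # shows under "test" in Deployments
    script:
      - ./deploy.sh target/app.war
```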
size - You can allocate additional resources to a step, or to the whole pipeline. By specifying a size of 2x, you'll have double the resources available (e.g. 4GB memory → 8GB memory).
At this time, valid sizes are 1x and 2x.
2x pipelines will use twice the number of build minutes.
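A sketch of a step allocated double resources (the script is illustrative):

```yaml
pipelines:
  default:
    - step:
        size: 2x                 # 8GB memory instead of 4GB
        script:
          - ./run-memory-hungry-build.sh
```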
script - Contains a list of commands that are executed in sequence. Scripts are executed in the order in which they appear in a step. We recommend that you move large scripts to a separate script file and call it from bitbucket-pipelines.yml.
pipe - Pipes make complex tasks easier, by doing a lot of the work behind the scenes. This means you can just select which pipe you want to use, and supply the necessary variables. You can look at the repository for the pipe to see what commands it is running. Learn more about pipes.
A pipe to send a message to Opsgenie might look like:
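A minimal sketch, assuming the atlassian/opsgenie-send-alert pipe; the variable names shown are an assumption and the actual pipe repository documents the required variables:

```yaml
- step:
    name: Alert Opsgenie
    script:
      - pipe: atlassian/opsgenie-send-alert:latest
        variables:
          GENIE_KEY: $GENIE_KEY          # API key stored as a secured variable
          MESSAGE: "Build failed on main"
```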
after-script - Commands inside an after-script section will run whether the step succeeds or fails. This could be useful for cleanup commands, test coverage, notifications, or rollbacks you might want to run, especially if your after-script uses the value of BITBUCKET_EXIT_CODE.
Note: If any commands in the after-script section fail:
- no further commands in that section will run
- the reported status of the step is not affected.
artifacts - Defines files that are produced by a step, such as reports and JAR files, that you want to share with a following step.
Artifacts can be defined using glob patterns.
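A sketch of a step that passes its build output to later steps (the paths are illustrative):

```yaml
- step:
    name: Build
    script:
      - npm run build
    artifacts:                   # files kept for following steps
      - dist/**
      - reports/*.txt
```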
options - Contains global settings that apply to all your pipelines. The main keyword you'd use here is max-time.
max-time - You can define the maximum number of minutes a step can execute, at the global level or at the step level. Use a whole number greater than 0 and less than 120.
If you don't specify a max-time, it defaults to 120.
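A sketch of a global limit overridden by one step (the script is illustrative):

```yaml
options:
  max-time: 60                   # global limit for every step, in minutes

pipelines:
  default:
    - step:
        max-time: 5              # overrides the global limit for this step
        script:
          - npm test
```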
clone - Contains settings for when we clone your repository into a container. Settings here include:
- lfs - support for Git LFS
- depth - the depth of the Git clone
- enabled - setting enabled to false will disable git clones
lfs (Git only) - Enables the download of LFS files in your clone. Defaults to false if not specified. Note that the keyword is supported only for Git repositories.
depth (Git only) - Defines the depth of Git clones for all pipelines. Note that the keyword is supported only for Git repositories.
Use a whole number greater than zero to specify the depth. Use full for a full clone. If you don't specify the Git clone depth, it defaults to 50.
enabled - Setting enabled to false will disable git clones.
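A sketch combining these clone settings at the global and step level (the step contents are illustrative):

```yaml
clone:
  lfs: true                      # download Git LFS files
  depth: full                    # full clone instead of the default 50

pipelines:
  default:
    - step:
        clone:
          enabled: false         # this step doesn't need the repository
        script:
          - echo "No clone performed for this step"
```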
condition - Allows steps to be executed only when a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step only if one of the modified files matches the expression in includePaths.
Changes that are taken into account:
In a pull-request pipeline, all commits are taken into account, and if you provide an includePaths list of patterns, a step will be executed when at least one commit change matches one of the conditions. The format for pattern matching follows the glob patterns described on the following page.
If the files have no changes, the step is skipped and the pipeline succeeds.
For other types of pipelines, only the last commit is considered. This should be fine for pull request merge commits in master, for example, but if you push multiple commits to a branch at the same time, or push multiple times to a given branch, you might experience non-intuitive behavior when failing pipelines turn green only because the failing step is skipped on the next run.
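A sketch of a step guarded by a changesets condition (the path and script are illustrative):

```yaml
- step:
    name: Frontend tests
    condition:
      changesets:
        includePaths:
          - "frontend/**"        # run only if frontend files changed
    script:
      - npm test
```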
Conditions and merge checks
If a successful build result is among your pull request merge checks, be aware that conditions on steps can produce false positives for branch pipelines. If build result consistency is paramount, consider avoiding conditions entirely or using pull-request pipelines only.
definitions - Define resources used elsewhere in your pipeline configuration. Resources can include:
- Services that run in separate Docker containers – see Use services and databases in Bitbucket Pipelines.
- Caches – see Caching dependencies.
- YAML anchors - a way to define a chunk of your yaml for easy re-use - see YAML anchors.
services - Pipelines can spin up separate Docker containers for services, which results in faster builds and easy service editing.
Learn more about how to use services here.
caches - Re-downloading dependencies from the internet for each step of a build can take a lot of time. Using a cache, they are downloaded once to our servers and then loaded locally into the build each time.
YAML anchors - A way to define a chunk of your yaml for easy re-use - see YAML anchors.
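A sketch tying these together: a service and a cache defined under definitions, plus a YAML anchor reused by two pipelines (the service, cache, and step names are illustrative):

```yaml
definitions:
  services:
    my-mysql:                    # custom service container
      image: mysql:5.7
      variables:
        MYSQL_ROOT_PASSWORD: $MY_DB_PASSWORD
  caches:
    my-bundler: vendor/bundle    # custom cache directory
  steps:
    - step: &build-test          # YAML anchor defining a reusable step
        name: Build and test
        caches:
          - my-bundler
        services:
          - my-mysql
        script:
          - bundle install

pipelines:
  default:
    - step: *build-test          # re-use the anchored step
  branches:
    main:
      - step: *build-test
```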