Configure bitbucket-pipelines.yml


The bitbucket-pipelines.yml file defines your Pipelines build configuration. If you're new to Pipelines, you can learn more in our getting started guide.

Basic configuration 

With a basic configuration, you can do things like writing scripts to build and deploy your projects and configuring caches to speed up builds. You can also specify different images for each step to manage different dependencies across actions you're performing in your pipeline.

A pipeline is made up of a list of steps, and you can define multiple pipelines in the configuration file. The configuration file can have multiple sections identified by particular keywords; the sketch below shows a pipeline configured under the default section.
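
A minimal sketch of a complete configuration (the echo command is a placeholder):

pipelines:                 # all pipeline definitions live under this keyword
  default:                 # a section keyword: runs on every push (unless a branch pipeline matches)
    - step:                # a pipeline is a list of steps
        script:            # each step runs its script commands in order
          - echo "Hello, Pipelines!"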

Before you begin

  • The file must contain at least one pipeline section, with at least one step and at least one script inside the step.
  • Each step has 4GB of memory available.
  • A single pipeline can have up to 100 steps.
  • Each step in your pipeline runs a separate Docker container. If you want, you can use different types of containers for each step by selecting different images.

Steps 

1. To configure the yaml file, in Bitbucket go to your repo > Pipelines and create your file there. Alternatively, you can configure your yaml file without using Bitbucket's interface.

2. Choose language.

Note: Pipelines can be configured for building or deploying projects written in any language. See the language guides.

3. Choose an image.

Note: You can edit the file directly in the product when you first get to Pipelines, at any time from within your pipeline, or from your repo.

The file must contain at least one pipeline section, with at least one step and at least one script inside the step. The following section keywords are available:

default - Contains the pipeline definition for all branches that don't match a pipeline definition in other sections.

The default pipeline runs on every push to the repository unless a branch-specific pipeline is defined. You can define a branch pipeline in the branches section.

Note: The default pipeline doesn't run on tags or bookmarks.



branches - Defines a section for all branch-specific build pipelines. The names or expressions in this section are matched against:

  • Branches in your Git repository
  • Named branches in your Mercurial repository

See Branch workflows for more information about configuring pipelines to build specific branches in your repository.

Check out the glob patterns cheat sheet to define the branch names.



tags - Defines all tag-specific build pipelines. The names or expressions in this section are matched against tags and annotated tags in your Git repository.

Example
image: node:10.15.0

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  tags:                         # add the 'tags' section
    release-*:                  # specify the tag
      - step:                   # define the build pipeline for the tag
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
  branches:
    staging:
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"

Check out the glob patterns cheat sheet to define your tags.

bookmarks - Defines all bookmark-specific build pipelines. The names or expressions in this section are matched against bookmarks in your Mercurial repository.

Example
image: node:10.15.0
   
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
  bookmarks:                      # add the 'bookmarks' section
    release-*:                    # specify the bookmark
      - step:                     # define the build pipeline for the bookmark
          name: Build and release
          script:
            - npm install
            - npm test
            - npm run release
  branches:
    staging:
      - step:
          name: Clone
          script:
            - echo "Clone all the things!"

Check out the glob patterns cheat sheet to define your bookmarks.


pull-requests - A special pipeline that only runs on pull requests initiated from within your repo. It merges the destination branch into your working branch before it runs. Pull requests from a forked repository don't trigger the pipeline. If the merge fails, the pipeline stops.

Important

Pull request pipelines run in addition to any branch and default pipelines that are defined, so if the definitions overlap you may get two pipelines running at the same time.

If you already have branches in your configuration, and you want them all to only run on pull requests, replace the keyword branches with pull-requests.

Example
pipelines:
  pull-requests:
    '**': #this runs as default for any branch not elsewhere defined
      - step:
          script:
            - ...
    feature/*: #any branch with a feature prefix
      - step:
          script:
            - ...
  branches:    #these will run on every push of the branch
    staging:
      - step:
          script:
            - ...

Check out the glob patterns cheat sheet to define your pull requests.

custom - Defines pipelines that can only be triggered manually or scheduled from the Bitbucket Cloud interface.

Example
image: node:10.15.0
    
pipelines:
  custom: # Pipelines that are triggered manually
    sonar: # The name that is displayed in the list in the Bitbucket Cloud GUI
      - step:
          script:
            - echo "Manual triggers for Sonar are awesome!"
    deployment-to-prod: # Another display name
      - step:
          script:
            - echo "Manual triggers for deployments are awesome!"
  branches:  # Pipelines that run automatically on a commit to a branch
    staging:
      - step:
          script:
            - echo "Auto pipelines are cool too."

With a configuration like the one above, the sonar and deployment-to-prod pipelines appear in the Run pipeline dialog in Bitbucket Cloud.

For more information, see Run pipelines manually.




Advanced configuration

Use the advanced options for running services and running tests in parallel. You can also configure manual steps, set a maximum time for each step, and use 2x steps to get 8 GB of memory.

Before you begin

  • A pipeline YAML file must have at least one section with a keyword and one or more steps.
  • Each step has 4GB of memory available.
  • A single pipeline can have up to 100 steps.
  • Each step in your pipeline runs a separate Docker container. If you want, you can use different types of containers for each step by selecting different images.

Global configuration options

Keywords list



variables - [Custom pipelines only] Contains variables that are supplied when a pipeline is launched. To enable the variables, define them under the custom pipeline where you want them entered when you run the pipeline:

Example
pipelines:
  custom:
    custom-name-and-region: #name of this pipeline
      - variables:          #list variable names under here
          - name: Username
          - name: Region
      - step: 
          script:
            - echo "User name is $Username"
            - echo "and they are in $Region"

Then, when you run a custom pipeline (Branches > ⋯ > Run pipeline for a branch > Custom:..), you'll be able to fill them in.

The keyword variables can also be part of the definition of a service (see services under definitions below).


name - When the keyword name is in the variables section of your yaml, it defines variables that you can add or update when running a custom pipeline. Pipelines can use the keyword inside a step.

parallel - Parallel steps enable you to build and test faster by running a set of steps at the same time. The total number of build minutes used by a pipeline will not change if you make the steps parallel, but you'll be able to see the results sooner.

There is a limit of 100 for the total number of steps you can run in a pipeline, regardless of whether they are running in parallel or serial.

Indent the steps to define which steps run concurrently:

Example
pipelines:
  default:
    - step: # non-parallel step
        name: Build
        script:
          - ./build.sh
    - parallel: # these 2 steps will run in parallel
        - step:
            name: Integration 1
            script:
              - ./integration-tests.sh --batch 1
        - step:
            name: Integration 2
            script:
              - ./integration-tests.sh --batch 2
    - step:          # non-parallel step
        script:
          - ./deploy.sh

Learn more about parallel steps.

step - Defines a build execution unit. Steps are executed in the order that they appear in the bitbucket-pipelines.yml file. You can use up to 100 steps in a pipeline.

Each step in your pipeline will start a separate Docker container to run the commands configured in the script. Each step can be configured to:

  • Use a different Docker image.
  • Configure a custom max-time.
  • Use specific caches and services.
  • Produce artifacts that subsequent steps can consume.
  • Override the clone settings with its own clone section.

Steps can be configured to wait for a manual trigger before running. To define a step as manual, add trigger: manual to the step in your bitbucket-pipelines.yml file. Manual steps:

  • Can only be executed in the order that they are configured; you cannot skip a manual step.
  • Can only be executed if the previous step has successfully completed.
  • Can only be triggered by users with write access to the repository.
  • Are triggered through the Pipelines web interface.

If your build uses both manual steps and artifacts, the artifacts are stored for 14 days following the execution of the step that produced them. After this time, the artifacts expire and any manual steps in the pipeline can no longer be executed.

Note: You can't configure the first step of a pipeline as a manual step.

name - Defines a name for a step to make it easier to see what each step is doing in the display.


image - Bitbucket Pipelines uses Docker containers to run your builds.

  • You can use the default image (atlassian/default-image:2) provided by Bitbucket or define a custom image. You can specify any public or private Docker image that isn't hosted on a private network.
  • You can define images at the global or step level. You can't define an image at the branch level.

To specify an image, use image: <your_account/repository_details>:<tag>

For more information about using and creating images, see Use Docker images as build environments.

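Example

A minimal sketch showing an image defined globally and overridden for a single step (the image tags are illustrative):

image: node:10.15.0          # default image for every step

pipelines:
  default:
    - step:
        name: Build          # runs in node:10.15.0
        script:
          - npm install
          - npm test
    - step:
        name: Deploy
        image: python:3.7.2  # this step runs in a python container instead
        script:
          - python deploy.py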


trigger - Specifies whether a step runs automatically or only after someone manually triggers it. You can define the trigger type as manual or automatic. If the trigger type is not defined, the step defaults to running automatically. The first step cannot be manual. If you want a whole pipeline to run only on a manual trigger, use a custom pipeline.

Example
pipelines:
  default:
    - step:
        name: Build and test
        image: node:10.15.0
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.7.2
        trigger: manual
        script:
          - python deploy.py


deployment - Sets the type of environment for your deployment step; it is used in the Deployments dashboard. Valid values are: test, staging, or production.

The following step will display in the test environment in the Deployments view:

Example
- step:
    name: Deploy to test
    image: aws-cli:1.0
    deployment: test
    script:
      - python deploy.py test

size - You can allocate additional resources to a step, or to the whole pipeline. By specifying a size of 2x, you'll have double the resources available (e.g. 4 GB memory → 8 GB memory).

At this time, valid sizes are 1x and 2x.

2x pipelines will use twice the number of build minutes.

Example: Overriding the size of a single step
pipelines:
  default:
    - step:
        script:
          - echo "All good things..."
    - step:
        size: 2x # Double resources available for this step.
        script:
          - echo "Come to those who wait."

script - Contains a list of commands that are executed in sequence. Scripts are executed in the order in which they appear in a step. We recommend that you move large scripts to a separate script file and call it from the bitbucket-pipelines.yml, as in the sketch below.
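
A minimal sketch, assuming a build script committed to the repository at scripts/integration-tests.sh and marked executable:

pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install                     # commands run in order; a failing command fails the step
          - npm test
          - ./scripts/integration-tests.sh  # larger logic kept in a separate, version-controlled file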

pipe - Pipes make complex tasks easier by doing a lot of the work behind the scenes. This means you can simply select which pipe you want to use and supply the necessary variables. You can look at the repository for the pipe to see what commands it is running. Learn more about pipes.

A pipe to send a message to Opsgenie might look like:

Example
pipelines:
  default:
    - step:
        name: Alert Opsgenie
        script:
          - pipe: atlassian/opsgenie-send-alert:0.2.0
            variables:
              GENIE_KEY: $GENIE_KEY
              MESSAGE: "Danger, Will Robinson!"
              DESCRIPTION: "An Opsgenie alert sent from Bitbucket Pipelines"
              SOURCE: "Bitbucket Pipelines"
              PRIORITY: "P1"

You can also create your own pipes. If you do, you can specify a Docker-based pipe with this syntax:

 pipe: docker://<DockerAccountName>/<ImageName>:<version>
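
For example, a hypothetical sketch (the account, image name, and variable are placeholders, not a real pipe):

- step:
    name: Run my custom pipe
    script:
      - pipe: docker://my-account/my-pipe:1.0.0   # hypothetical Docker Hub image
        variables:
          MY_VARIABLE: "value"                    # whatever variables your pipe expects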

after-script - Commands inside an after-script section will run when the step succeeds or fails. This could be useful for clean-up commands, test coverage, notifications, or rollbacks you might want to run, especially if your after-script uses the value of BITBUCKET_EXIT_CODE.

Note: If any commands in the after-script section fail:

  • we won't run any more commands in that section
  • the failure will not affect the reported status of the step.
Example
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - npm install
          - npm test
        after-script:
          - echo "after script has run!"

artifacts - Defines files that are produced by a step, such as reports and JAR files, that you want to share with a following step.

Artifacts can be defined using glob patterns.

Example
pipelines:
  default:
    - step:
        name: Build and test
        image: node:10.15.0
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy to production
        image: python:3.7.2
        script:
          - python deploy-to-production.py

For more information, see using artifacts in steps.

options - Contains global settings that apply to all your pipelines. The main keyword you'd use here is max-time.

max-time - You can define the maximum number of minutes a step can execute, at a global level or at a step level. Use a whole number greater than 0 and less than 120.

Example
options:
  max-time: 60
pipelines:
  default:
    - step:
        name: Sleeping step
        script:
          - sleep 120m # This step will timeout after 60 minutes
    - step:
        name: quick step
        max-time: 5
        script:
          - sleep 120m #this step will timeout after 5 minutes

If you don't specify a max-time, it defaults to 120.

clone - Contains settings for when we clone your repository into a container. Settings here include:

  • lfs - support for Git LFS
  • depth - the depth of the Git clone
  • enabled - setting enabled to false disables Git clones

lfs (Git only) - Enables the download of LFS files in your clone. It defaults to false if not specified. Note that the keyword is supported only for Git repositories.

Example
clone:
  lfs: true
   
pipelines:
  default:
    - step:
        name: Clone and download
        script:
          - echo "Clone and download my LFS files!"

depth (Git only) - Defines the depth of Git clones for all pipelines. Note that the keyword is supported only for Git repositories.

Use a whole number greater than zero to specify the depth. Use full for a full clone. If you don't specify the Git clone depth, it defaults to 50.

Example
clone:
  depth: 5       # include the last five commits
  
pipelines:
  default:
    - step:
        name: Cloning
        script:
          - echo "Clone all the things!"

enabled - Setting enabled to false will disable Git clones.


Example
pipelines:
  default:
    - step:
        name: No clone
        clone:
          enabled: false
        script:
          - echo "I don't need to clone in this step!"

condition - This allows steps to be executed only when a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step only if one of the modified files matches the expression in includePaths.

Changes that are taken into account:

In a pull-request pipeline, all commits are taken into account, and if you provide an includePaths list of patterns, a step will be executed when at least one commit change matches one of the conditions. The format for pattern matching follows the glob patterns cheat sheet.


In the following example, step1 will only execute if the commit that triggered the pipeline includes changes to XML files directly under the path1 directory, or to any file in the nested directory structure under path2.

Example
- step:
    name: step1
    script:
      - echo "failing paths"
      - exit 1
    condition:
      changesets:
        includePaths:
          # only xml files directly under path1 directory
          - "path1/*.xml"
          # any changes in deeply nested directories under path2
          - "path2/**"

If none of the modified files match, the step is skipped and the pipeline succeeds.

For other types of pipelines, only the last commit is considered. This is usually fine for pull request merge commits to master, for example, but if you push multiple commits to a branch at the same time, or push to the same branch multiple times, you might see non-intuitive behavior: a failing pipeline can turn green on the next run simply because the failing step was skipped.

Conditions and merge checks

If a successful build result is among your pull request merge checks, be aware that conditions on steps can produce false positives for branch pipelines. If build result consistency is paramount, consider avoiding conditions entirely or using pull-request pipelines only.


definitions - Define resources used elsewhere in your pipeline configuration. Resources can include:

services - Pipelines can spin up separate Docker containers for services, which results in faster builds and easy service editing.

Example of a fully configured service

If you want a MySQL service container (a blank database available on localhost:3306, with a default database named pipelines, user root, and password let_me_in), you could add:

definitions:
  services:
    mysql:
      image: mysql
      variables:
        MYSQL_DATABASE: pipelines
        MYSQL_ROOT_PASSWORD: let_me_in
pipelines:
  default:
   - step:
      services:
        - docker
        - mysql
      script:
        - ...

Learn more about how to use services here.

caches - Re-downloading dependencies from the internet for each step of a build can take a lot of time. With a cache, dependencies are downloaded once to our servers and then loaded locally into the build each time.

Example
definitions:
  caches:
    bundler: vendor/bundle   # defines a custom cache named 'bundler'
pipelines:
  default:
   - step:
      caches:
        - npm                # this step uses the predefined npm cache
      script:
        - npm install


YAML anchors - A way to define a chunk of your yaml for easy re-use. See YAML anchors.
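
A minimal sketch of an anchor (&) defined under definitions and reused (*) in two branch pipelines:

definitions:
  steps:
    - step: &build-test        # '&' defines the anchor
        name: Build and test
        script:
          - npm install
          - npm test

pipelines:
  branches:
    develop:
      - step: *build-test      # '*' reuses the anchored step
    master:
      - step: *build-test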






