How to write a pipe for Bitbucket Pipelines
A pipe is a custom Docker image for a container, containing a script that performs a task. There are many ready-made pipes available, but you can also write your own.
A pipe is made up of a few different files:
- A script, or binary, the code that performs the task.
- A Dockerfile, which tells us how to build the Docker container that runs your script.
- (Optional) metadata and readme docs, to make your pipe easy to understand.
- (Optional) some CI/CD configuration so that you can easily update the pipe.
These files are stored in a single place, usually a pipe repository.
Why write a pipe?
There are a few reasons to write a pipe:
- to do the same action in several steps of your pipeline
- to run similar tasks in multiple repositories
- if you are a vendor, to make your software or service easier to use in pipelines
- to perform an action which needs dependencies that your main pipeline doesn't have.
By making a pipe you simplify the configuration of pipelines, and make re-use easy and efficient.
The possibilities for pipes are endless, but already there are pipes to:
- deploy code to Elastic Beanstalk
- deploy a Lambda function
- publish an npm package
- send a notification to Slack
- send an alert to Opsgenie
How to write a pipe
Depending on what you need it for, you can make a simple pipe or a complete pipe. They work in the same way, the only difference is how much detail and configuration you add.
Simple | Complete |
---|---|
Get going fast | Best practice |
Updating later on can be more complex | CI/CD to automate versioning |
Minimal configuration | Good documentation for others or your future self |
Private use only | Eligible to be added to our marketplace |
3 files: a script, a Dockerfile, and a bitbucket-pipelines.yml to build and push the image | The 3 files already mentioned, plus pipe.yml metadata, a README.md, automated tests, and CI/CD configuration |
In this guide, we'll make a simple pipe first, and then show you the steps to make it a complete pipe. We'll build the container and upload it to Dockerhub, so make sure you have an account there (it's free to set up!).
Prerequisites
- Pipes only work with a public image on Dockerhub.
- If you are skilled in Docker and want to make a pipe only for private use, you can just make your own Docker container containing all the files required.
Step 1 - Create or import a repository
First, we need a place to put your files, so we start by creating a repository.
There are 3 main ways to make a pipe repository:
- create an empty repository (don't worry, we'll guide you as to what to put in it)
- import one of our example repositories
- use our generator to create a local repository (recommended only for complete pipes)
We also have 3 example repositories: a simple pipe repository, and 2 complete pipe repositories (for Bash and Python) which you can use as a reference, or import if you like.
If you already know you want to make a complete pipe, you can use our generator to create the framework, and partially fill out the files.
Step 2 - Create your script or binary
This is the main part of your pipe, which runs when your pipe is called. It contains all the commands and logic to perform the pipe task. Use any coding language of your choice to make a script, or binary file.
A simple script might look like:
#!/usr/bin/env bash
set -e
echo 'Hello World'
But you'll probably want to do more than that! A good next step might be using variables.
You can use any of the default variables available to the pipeline step that calls the pipe (see this list of default variables), and any pipe variables that are provided when the pipe is called. You can only use user defined variables (account and team, repository, or deployment) if you list them in your pipe.yml (more on this later).
#!/usr/bin/env bash
set -e
echo "Hello $BITBUCKET_REPO_OWNER"
# When you call the pipe from your pipeline you can provide
# extra variables, for example GREETING:
echo "$GREETING"
You can make some variables mandatory, and give others default values that you've specified. There are 2 ways to specify a default value: here we'll show defining it in your script, and later on we'll show you the more powerful way, using a pipe.yml file. We show both in our complete pipe for bash example.
To make life easiest for the end user of the pipe, we recommend keeping mandatory variables to a minimum. If there are sensible defaults for a variable, provide those in the script and the end user can choose to override them if needed.
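For illustration, here is a minimal sketch of how a script might handle one mandatory variable and one optional variable with a default (GREETING and DEBUG are hypothetical names, not required by pipes):

```bash
# Mandatory variable: fail early with a clear message if it's missing.
GREETING=${GREETING:?'GREETING variable missing.'}

# Optional variable: fall back to a default the user can override.
DEBUG=${DEBUG:="false"}

echo "Hello $GREETING"
```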
We also recommend taking the time to add colors to your log output, and provide clickable links to any external output.
As we are going to be running the script, it needs to be executable (in your terminal you might run chmod +x pipe.sh). If you are using our example repositories, this is done for you already.
Step 3 - Configure the Dockerfile
To run the script you just wrote, we need to put it into a Docker container. The Dockerfile defines the details of how this Docker container should be built. At its most basic it needs to have values for FROM, COPY, and ENTRYPOINT. The complete pipe for bash example already contains a Dockerfile in its root directory:
FROM alpine:3.8
RUN apk update && apk add bash
COPY pipe /
ENTRYPOINT ["/pipe.sh"]
This means the container will
- use an Alpine Linux 3.8 image
- run an update command and install bash into the container
- have the contents of the pipe directory copied into its root directory
- start running pipe.sh
You can edit these to suit your needs. Want Alpine Linux 3.9? No problem, just change the FROM line to read FROM alpine:3.9. Want to install more packages into Linux? Add more to the RUN command. Before you do, though, have a look on Dockerhub to see if there is an image that already has those packages installed. It will save you precious time!
Sometimes getting the script and the container exactly how you want it can take a few iterations. With this in mind we recommend installing Docker locally on your machine, so you can test building your pipe container and running it, without using build minutes.
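For example, assuming the layout above, a quick local check could be as simple as this (my-pipe is just a throwaway local tag):

```bash
# Build the pipe image locally, then run it once with a test variable.
docker build -t my-pipe .
docker run -e GREETING="team" my-pipe
```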
Step 4 - Make a basic pipeline to upload your pipe container to Dockerhub
The final step in making a simple pipe is to build your container, and upload it to Dockerhub.
Using a pipeline to do that isn't strictly necessary, but it makes future updates easier, and automatically updates the version number so you can quickly make sure you are using the latest version.
The example bitbucket-pipelines.yml below builds and pushes a new version of your container to Dockerhub whenever you commit. So if you update which image you want to use for your Docker container, or make some changes to your script, this will automatically make sure the version on Dockerhub is up to date. Make sure you have a Dockerhub account, then all you need to do is add 2 variables to your pipe repository, DOCKERHUB_USERNAME and DOCKERHUB_PASSWORD, and enable pipelines.
image:
  name: atlassian/default-image:2

pipelines:
  default:
    - step:
        name: Build and Push
        script:
          # Build and push image
          - VERSION="1.$BITBUCKET_BUILD_NUMBER"
          - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
          - IMAGE="$DOCKERHUB_USERNAME/$BITBUCKET_REPO_SLUG"
          - docker build -t ${IMAGE}:${VERSION} .
          - docker tag ${IMAGE}:${VERSION} ${IMAGE}:latest
          - docker push ${IMAGE}
          # Push tags
          - git tag -a "${VERSION}" -m "Tagging for release ${VERSION}"
          - git push origin ${VERSION}
        services:
          - docker
Congratulations, you've made a simple pipe!
And that's all you need for a simple pipe! You can now refer to your pipe in a step using the syntax:
pipe: docker://<DockerAccountName>/<ImageName>:<version>
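For example, a step that calls a hypothetical pipe pushed as myaccount/my-pipe and passes it the GREETING variable might look like this:

```yaml
- step:
    name: Say hello
    script:
      - pipe: docker://myaccount/my-pipe:1.5
        variables:
          GREETING: "team"
```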
The next steps of pipe creation are designed to make your life easier in the long run, and make it simpler for other people to use your pipe. They are required for anyone who wants to make an officially supported pipe.
If you are making a complete pipe, you'll also need to set up:
- pipe metadata - details of your pipe, for example naming the maintainer
- a readme - details on how to use your pipe, for example, the variables you need
- automated testing - to make sure changes haven't broken anything
- semantic versioning - to make it clear which version of your pipe to use
- debug logging - to make it easier for end users to troubleshoot if something goes wrong
Don't worry, this is already configured in our example repos (Bash and Python), so take a peek at them and we'll guide you through the next steps!
Step 5 - Make pipe.yml - the metadata file
The pipe.yml file provides handy information to categorize the pipe.
Keyword | Description |
---|---|
name | The name or title of the pipe as we should display it. |
image | The pipe Docker image you created on Dockerhub, in the form: account/repo:tag |
category | Category of the pipe. |
description | A short summary describing what the pipe does. |
repository | Bitbucket pipe repository absolute URL. Example: atlassian/demo-pipe-bash |
maintainer | Object that contains name, website and email. |
vendor | Object that contains name, website and email. For vendor pipes this field is mandatory. |
tags | Keywords to help users find and categorize the pipe. Options could include the type of function your pipe performs (deploy, notify, test), your product or company name, or specific tools you are integrating with. |
A pipe.yml file might look like this:
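Here's a rough sketch, where every name, URL, and value is a hypothetical placeholder to replace with your own details:

```yaml
name: Hello World Demo
image: myaccount/demo-pipe-bash:1.5
category: Utilities  # use one of the supported categories
description: Prints a friendly greeting in the pipeline logs.
repository: https://bitbucket.org/myaccount/demo-pipe-bash
maintainer:
  name: My Team
  website: https://example.com
  email: team@example.com
tags:
  - helloworld
  - notification
```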
Step 6 - Write README.md - your readme file
Your readme is how your users know how to use your pipe. We can display this in Bitbucket, so it needs to be written in markdown, in a specific format, with the following headings in this order:
# Bitbucket Pipelines Pipe: <pipe_name>

<pipe_short_description>

## YAML Definition

Add the following snippet to the script section of your `bitbucket-pipelines.yml` file:

```yaml
<pipe_code_snippet>
```

## Variables

<pipe_variables_table>

## Details

<pipe_long_description>

## Prerequisites

<pipe_prerequisites>

## Examples

<pipe_code_examples>

## Support

<pipe_support>
Section | Description |
---|---|
<pipe_name> | The pipe name |
<pipe_short_description> | Short summary of what the pipe does - we recommend using the format "[action verb] (to) [destination | vendor | suite]" for example "Deploy to Dockerhub" or "Notify Opsgenie" |
<pipe_code_snippet> | What someone needs to copy and paste into their pipeline to use your pipe |
<pipe_variables_table> | A list of the variables your pipe needs, making it clear if they are mandatory or optional |
<pipe_long_description> | Detailed explanation of usage, configuration, setup, etc. |
<pipe_prerequisites> | Anything that people need to have in place before using the pipe, for example: installed packages, accounts on third party systems, etc. |
<pipe_code_examples> | Code snippets with example variables. We recommend covering at least the most common use cases. |
<pipe_support> | Give details on how people can contact you for questions and support. |
Step 7 - Write your tests
It's good practice to add automated integration testing to your pipe, so before you send it out into the world you can make sure it does what you expect it to do. For example, you could test how it deals with variables that are unexpected, or that it can successfully connect to any third party services it needs to. For any pipes that are going to become officially supported, it's essential that they are tested regularly.
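How you test will depend on your language, but as a rough sketch, an integration test written with BATS (the Bash Automated Testing System) for the bash example above might look like this, assuming the image has been built locally with the hypothetical tag my-pipe:

```bash
#!/usr/bin/env bats

setup() {
  # Image tag is a placeholder; build the image before running the tests.
  DOCKER_IMAGE=${DOCKER_IMAGE:="my-pipe"}
}

@test "greeting is printed when GREETING is provided" {
  run docker run -e GREETING="team" ${DOCKER_IMAGE}
  [ "$status" -eq 0 ]
  [[ "$output" == *"team"* ]]
}
```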
Step 8 - Make your pipe easy to debug
In your script we recommend building in a debug mode that outputs extra information about what is going on in the pipe.
We also recommend that you make any links shown in the logs clickable, and that you use colors in your output to highlight key information.
How you do this will depend on the language you are using to write your script, but you can see an example of this in the common.sh file in our bash demo repo.
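In bash, one simple approach is to turn on command tracing when a DEBUG variable (like the hypothetical one defaulted earlier) is set to true. A sketch:

```bash
# Print every command as it runs when DEBUG is set to "true".
if [[ "${DEBUG}" == "true" ]]; then
  set -x
fi
```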
Step 9 - Set up CI/CD to automate testing and updates
We also recommend using CI/CD to:
- automate testing
- automatically upload the image to Dockerhub (or a registry of your choice)
- automatically update the version number in:
  - the changelog
  - the readme
  - the metadata
Once you have your bitbucket-pipelines.yml file configured, you can enable pipelines: Your repo Settings > Pipelines section > Settings > Enable pipelines
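As a rough sketch of what such a pipeline could look like, assuming BATS tests in test/test.bats and the same Dockerhub variables as before (both are assumptions, not requirements), you might separate testing from pushing:

```yaml
image: atlassian/default-image:2

pipelines:
  default:
    - step:
        name: Test
        script:
          - npm install -g bats
          - docker build -t my-pipe .
          - bats test/test.bats
        services:
          - docker
    - step:
        name: Push
        trigger: manual
        script:
          - VERSION="1.$BITBUCKET_BUILD_NUMBER"
          - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
          - IMAGE="$DOCKERHUB_USERNAME/$BITBUCKET_REPO_SLUG"
          - docker build -t ${IMAGE}:${VERSION} .
          - docker push ${IMAGE}:${VERSION}
        services:
          - docker
```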
Step 10 - Set up semantic versioning
We encourage you to use semantic versioning (semver) for your pipe, so that it's clear what version is the latest, which version people should use, and if there is any chance of an update breaking things. The version has 3 parts: <major>.<minor>.<patch>, for example 6.5.2
You increase the version number depending on the changes you made. Update:
- major: if you make changes that could break existing users, for example, changing the name of a mandatory variable
- minor: if you add functionality in a backwards-compatible manner
- patch: if you make backwards-compatible bug fixes
There are a few places you would want to update when you change the version, so to simplify that there is a tool called semversioner (https://pypi.org/project/semversioner/). This will generate a new entry in your changelog, update the version number, and commit back to the repository.
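A typical flow with semversioner might look like this (check the project page above for the current command syntax; the description text is just an example):

```bash
# Install the tool (it's a Python package).
pip install semversioner

# Record each change as you work; type is major, minor, or patch.
semversioner add-change --type patch --description "Fix default value for GREETING."

# At release time, bump the version and update the changelog from the recorded changes.
semversioner release
```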
Step 11 - Use your new complete pipe!
As with the simple version of the pipe, the last step is to build and push your container to Dockerhub.
There are 2 ways to refer to this pipe in other repositories. In your bitbucket-pipelines.yml file you can:
- refer to the Docker image directly: pipe: docker://acct/repo:tag (where acct/repo is the Dockerhub account and repo)
- refer to a pipe repo hosted on Bitbucket: pipe: <BB_acct>/repo:tag (where <BB_acct>/repo is your Bitbucket account and pipe repo)

The second form looks in your pipe.yml file to find out where to get the image from.
Advanced tips and secrets
- You don't have to use Dockerhub if you have another service to host Docker images, but the image does have to be public.
- If you install Docker on your local machine you can test that everything works well before uploading.
- If there are variables that you rely on, make sure your script tests that they have been provided and are valid.
- Try and keep mandatory variables to a minimum, supply default values if you can!
- Make sure you have a process in place so you can quickly and efficiently provide pipe support, in case something unexpected happens. You'll get feedback quicker and your pipe users will have a better experience.
- If you are creating a file you wish to share with other pipes, or use in your main pipeline, you will need to edit the permissions for that file. A common way to do this would be to put umask 000 at the beginning of your pipe script. If you'd prefer to modify the permissions of just one file, you could also use the chown or chmod command.
- Check out more advanced pipe writing techniques, including the best way to quote things, and passing array variables.
Contributing an official pipe
You can submit your pipe to be considered for our official list - make sure you've built it as a complete pipe if you do!
Have a look at the full details of how to contribute a pipe.