How to: Use already created Kubernetes secrets in the Pipelines Runners Autoscaler
Platform Notice: Cloud Only - This article only applies to Atlassian products on the cloud platform.
Summary
When using the Bitbucket Pipelines Runners Autoscaler, you must configure a Kubernetes Deployment YAML file during the initial setup.
A complete guide with a sample Kubernetes Deployment YAML file specification can be found on the following page:
In this configuration, certain fields contain values that are intended to be kept secret, such as "OAUTH_CLIENT_ID" and "OAUTH_CLIENT_SECRET", and it is advisable to store these variables securely. In the sample specification, those secrets are created and consumed in the same Kubernetes Deployment YAML file.
However, in some situations customers might already have secrets created on their cluster and would like to reuse them. The example Kubernetes Deployment YAML specification below demonstrates how to incorporate variables from an existing Kubernetes secret into the Runners Autoscaler configuration.
Environment
Bitbucket Pipelines Runners using the Runners Autoscaler
Solution
This solution requires that a secret, identified here as <runner-secret-name>, has already been created in your cluster. The Kubernetes Deployment YAML specification below mounts that previously created secret into a secret volume.
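If the secret does not exist yet, it can be created up front. The sketch below is illustrative only: the credential values are placeholders, and the secret name is an assumption (use your own in place of <runner-secret-name>). The key names client_id and client_secret must match the file names the runner container reads from the mounted volume. Note that the data fields of a Kubernetes Secret manifest must be base64 encoded:

```shell
# Hypothetical credentials for illustration only -- substitute your real OAuth values.
CLIENT_ID="my-oauth-client-id"
CLIENT_SECRET="my-oauth-client-secret"

# "data" fields in a Secret manifest must be base64 encoded:
ENCODED_ID=$(printf '%s' "$CLIENT_ID" | base64)
ENCODED_SECRET=$(printf '%s' "$CLIENT_SECRET" | base64)
echo "$ENCODED_ID"

# Alternatively, let kubectl do the encoding when creating the secret directly;
# the key names become the file names inside the mounted secret volume:
#   kubectl create secret generic <runner-secret-name> \
#     --from-literal=client_id="$CLIENT_ID" \
#     --from-literal=client_secret="$CLIENT_SECRET"
```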
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: runner-oauth-credentials
      # labels:
      #   accountUuid: # Add your account uuid without curly braces to optionally allow finding the secret for an account
      #   repositoryUuid: # Add your repository uuid without curly braces to optionally allow finding the secret for a repository
      #   runnerUuid: # Add your runner uuid without curly braces to optionally allow finding the secret for a particular runner
    data:
      oauthClientId: # add your base64 encoded oauth client id here
      oauthClientSecret: # add your base64 encoded oauth client secret here
  - apiVersion: batch/v1
    kind: Job
    metadata:
      name: runner
    spec:
      template:
        # metadata:
        #   labels:
        #     accountUuid: # Add your account uuid without curly braces to optionally allow finding the pods for an account
        #     repositoryUuid: # Add your repository uuid without curly braces to optionally allow finding the pods for a repository
        #     runnerUuid: # Add your runner uuid without curly braces to optionally allow finding the pods for a particular runner
        spec:
          containers:
            - name: runner
              image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner
              env:
                - name: ACCOUNT_UUID
                  value: # Add your account uuid here
                - name: REPOSITORY_UUID
                  value: # Add your repository uuid here
                - name: RUNNER_UUID
                  value: # Add your runner uuid here
                - name: WORKING_DIRECTORY
                  value: "/tmp"
              command: []
              args:
                - /bin/sh
                - -c
                - |
                  export OAUTH_CLIENT_ID=$(cat <mount_path>/client_id);
                  export OAUTH_CLIENT_SECRET=$(cat <mount_path>/client_secret);
                  ./entrypoint.sh
              volumeMounts:
                - name: tmp
                  mountPath: /tmp
                - name: docker-containers
                  mountPath: /var/lib/docker/containers
                  readOnly: true # the runner only needs to read these files, never write to them
                - name: var-run
                  mountPath: /var/run
                - name: secret-volume
                  mountPath: <mount_path>
                  readOnly: true
            - name: docker-in-docker
              image: docker:20.10.5-dind
              securityContext:
                privileged: true # required to allow docker in docker to run; assumes the namespace you're applying this to has a pod security policy that allows privilege escalation
              volumeMounts:
                - name: tmp
                  mountPath: /tmp
                - name: docker-containers
                  mountPath: /var/lib/docker/containers
                - name: var-run
                  mountPath: /var/run
          restartPolicy: OnFailure # this allows the runner to restart locally if it were to crash
          volumes:
            - name: tmp # required to share a working directory between docker in docker and the runner
            - name: docker-containers # required to share the containers directory between docker in docker and the runner
            - name: var-run # required to share the docker socket between docker in docker and the runner
            - name: secret-volume
              secret:
                secretName: <runner-secret-name>
      # backoffLimit: 6 # this is the default; it will retry up to 6 times with exponential backoff before it considers itself a failure
      # completions: 1 # this is the default; the job should ideally never complete as the runner never shuts down successfully
      # parallelism: 1 # this is the default; there should only be one instance of this particular runner
⚠️ It is essential to replace <runner-secret-name> and <mount_path> with appropriate values. The <runner-secret-name> must match the name of the secret that already exists in your cluster.
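The export lines in the runner container's args simply read the mounted secret files into environment variables before starting the runner. A quick local sketch of that pattern, using a temp directory as a stand-in for the real <mount_path> and placeholder credential values:

```shell
# Simulate the secret volume with a temp directory (stand-in for <mount_path>);
# the file names match the keys of the pre-created secret.
MOUNT_PATH=$(mktemp -d)
printf '%s' 'example-client-id' > "$MOUNT_PATH/client_id"
printf '%s' 'example-client-secret' > "$MOUNT_PATH/client_secret"

# Same pattern the container args use before calling ./entrypoint.sh:
export OAUTH_CLIENT_ID=$(cat "$MOUNT_PATH/client_id")
export OAUTH_CLIENT_SECRET=$(cat "$MOUNT_PATH/client_secret")
echo "$OAUTH_CLIENT_ID"
```

Because the secret is mounted as a volume rather than injected via `secretKeyRef`, the runner picks the values up at container start from files, which is why the mount path in the args must match the `mountPath` of the secret volume.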