How to programmatically archive Jira Projects with a Bash Script
Platform Notice: Cloud and Data Center - This article applies equally to both cloud and data center platforms.
Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Summary
For long-lived Jira Sites with many Projects and heavy activity, performance issues can arise from data that would be best archived.
This article explains how to programmatically archive Jira Projects using a Bash Script. This is necessary because the API Endpoint that archives Projects only accepts one Project at a time.
As a result, bulk archival can be laborious without software to cycle through the Projects that need to be archived.
The script in this KB Article does exactly that, while also offering a few quality-of-life features to make the process easier.
The shell commands utilized in this script are:
- curl - Making Web Requests
- jq - Processing JSON Payloads
- The basics: echo, tee, printf, read, base64, sed, let, wc, tr, awk, shift, getopts, & cat.
- Other conventions like: case, if, logical operators, & for.
All of those must be available in the terminal environment for the script to work.
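One way to confirm that availability up front is a small preflight check. This is a sketch, not part of the original script; the `require` helper name is illustrative:

```shell
# Illustrative helper: fail early if a needed command is missing.
require() {
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "Missing required command: $cmd" >&2
      return 1
    fi
  done
  return 0
}

# Check the non-default tools first; the coreutils basics are almost always present.
require curl jq || echo "Install curl and jq before running the script." >&2
```

Running `require curl jq tr awk sed` once at the top of the script turns a confusing mid-run failure into a clear error message.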
Environment
Jira Cloud or Jira Data Center - Either works so long as the Script is adjusted to point to the appropriate Endpoints.
Solution
First, I will post the script in its entirety here in a collapsed code block. Then I will give a brief overview of its functionality, followed by an analysis of what it achieves.
Usage
This Script takes either a Date or a File. If given a Date, it will look up the Projects on the given Site, and any that lack Issues updated after that Date will be archived. If given a File, it will simply archive the Projects listed in that File. Examples:
1. ./ArchiveProjects.sh -f ./Projects.csv -s "https://site.atlassian.net"
2. ./ArchiveProjects.sh -d "3 years ago" -s "https://site.atlassian.net"
3. ./ArchiveProjects.sh -d "1 year ago" -t "US/Pacific" -s "https://site.atlassian.net"
4. ./ArchiveProjects.sh -v -d "1 hour ago" -t "UTC+5" -s "https://site.atlassian.net"
5. ./ArchiveProjects.sh -h
The Script must be updated with the credentials that will be used to access the Site (Email Address & API Token).
Note the permissions required by the User as mentioned here: Jira KB - Archive a project.
Optionally, the Time zone can be specified via -t if hours matter.
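For Jira Cloud, those credentials are typically combined into a Basic auth header. A hedged sketch of how that header might be built (the variable names and placeholder values are illustrative, not from the original script):

```shell
# Placeholder credentials -- replace with your own before use.
email="user@example.com"
apiToken="your-api-token"

# Basic auth is "email:token" base64-encoded; tr strips any wrapped newlines.
authHeader="Authorization: Basic $(printf '%s' "${email}:${apiToken}" | base64 | tr -d '\n')"
echo "$authHeader"
```

The resulting string is then passed to curl with `-H "$authHeader"` on every request.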
The 1st takes the file "Projects.csv" which should be a single line with comma-separated project keys (like ABC,XYZ,PROJ,ITSM) and archives those for the given site.
The 2nd gets the list of projects that don't have issues with updates newer than 3 years ago and archives those.
The 3rd does the same but for 1 year and uses the specified time zone, overriding your system time zone.
The 4th archives any projects that lack updates on their issues within the last hour, verbosely logging that process while overriding the system time zone.
The last just outputs the help usage text.
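The relative date strings in those examples depend on GNU `date`'s `-d` flag (BSD/macOS `date` does not support it), and the time zone override maps naturally onto the `TZ` environment variable. A small sketch of how the cutoff might be computed, with illustrative variable names:

```shell
# GNU date parses relative strings like "3 years ago" or "1 hour ago";
# TZ overrides the system time zone for just this command.
cutoff=$(TZ="US/Pacific" date -d "1 year ago" +"%Y-%m-%d")
echo "Archiving Projects with no Issue updates since: $cutoff"
```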
Analysis
The Script goes through this logic, in words:
- Establishes some baseline Variables like verbose logging, the logFile, etc., along with setting up defaults.
- Creates a few functions we'll be utilizing to "make the magic happen":
- The first is just a Usage to explain how to use the script.
- The second is a simple check to make sure the user wants to proceed with the action being taken, based on the information provided to the script.
- The third is a simple verbose checker & logging for it.
- The fourth/last (after the option checking) verifies that the credentials are correct by making an API Request using them to see what is returned.
- There are a few lines to verify that the appropriate flags have been provided along with their values.
- The flags provided are then cycled through to get some setup done, such as setting some base variables to a value (like process=CSV or process=API), along with logging said values.
- After building the URL we'll be using, some final logging & checks are done and some variables are set, like the Site being provided and the Date being parsed.
- Lastly, we actually run the authorization check before diving into the cycling that makes this process work.
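That authorization check can be as simple as calling an endpoint that only answers 200 for valid credentials. A sketch, assuming the `/rest/api/2/myself` endpoint and illustrative variable names (this is not the original script's function):

```shell
# Returns 0 only when the credentials are accepted
# (HTTP 200 from the myself endpoint).
verify_auth() {
  authHeader=$1
  site=$2
  status=$(curl -s -o /dev/null -w '%{http_code}' -H "$authHeader" "${site}/rest/api/2/myself")
  [ "$status" = "200" ]
}

# Usage in the script would look roughly like:
#   verify_auth "$authHeader" "$site" || { echo "Credential check failed" >&2; exit 1; }
```

Failing fast here prevents the archive loop from issuing dozens of requests that would all be rejected.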
I'll go over the logic in the final case function separately because it is a bit more in-depth:
- For the final case that actually archives the projects, we start with checking which Process we ended up going with between CSV or API. For API:
- We get the first page of Project results.
- We check if it's the last page, and set some variables based on what we received.
- We do some simple checking to make sure we didn't receive 0 results and warn if we did.
- We cycle through grabbing more pages if the first page isn't the only page, building the results from there until we get to the last page.
- We process those results into a simple JSON Object of the Keys only, then format the content into a comma-separated string of keys.
- We get a count of the total projects returned, do another check to make sure that count is >0, then do a final check to make sure the user wants to proceed with archiving what was returned.
- The For cycles through each Key in the list, sending the request to archive it, checking for an error message, and stating the result.
- For CSV:
- We get a count of projects from the provided CSV.
- We do a check to make sure the user wants to proceed with archiving what was returned from processing the CSV.
- The For cycles through in the same manner as API, archiving the project, checking for an error, and stating the result.
- If neither Process was matched, we error out and convey next steps.
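To make the API path above concrete, here is a hedged sketch of the post-processing and archive loop. The accumulated page results are stubbed as a JSON literal, and `archive_project` is a placeholder for the real request (on Cloud that would be roughly `POST ${site}/rest/api/3/project/{key}/archive` per the public REST documentation); everything here is illustrative, not the original script:

```shell
# Stub: in the real script this is built up by paging through the Project search results.
allProjects='[{"key":"ABC"},{"key":"XYZ"},{"key":"PROJ"}]'

# Reduce the JSON to the Keys only, then to a comma-separated string (jq + tr + sed).
keyList=$(printf '%s' "$allProjects" | jq -r '.[].key' | tr '\n' ',' | sed 's/,$//')
count=$(echo "$keyList" | tr ',' '\n' | wc -l)
echo "Found $count Projects: $keyList"

# Placeholder for the real request; checking .errorMessages in the response
# is how the script would detect a failed archive.
archive_project() {
  echo "Would archive: $1"
  # response=$(curl -s -X POST -H "$authHeader" "${site}/rest/api/3/project/$1/archive")
  # printf '%s' "$response" | jq -r '.errorMessages[0] // empty'
}

# The For: cycle through each Key, archive it, and state the result.
for key in $(echo "$keyList" | tr ',' ' '); do
  archive_project "$key"
done
```

The CSV path skips the jq step entirely: the file already contains the comma-separated key string, so the same `tr`/`wc` counting and the same For loop apply to its contents directly.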