Export Over 1000 Results to Excel from JIRA Cloud


Platform Notice: Cloud Only - This article only applies to Atlassian products on the cloud platform.

Summary

From the Issue Navigator, users are unable to export more than 1000 rows to Excel.


Cause

The 1000-row export limit in JIRA Cloud is hardcoded specifically to prevent a performance hit to your instance.


Solution

Workaround 1

Export issues by using Google Sheets or Microsoft Excel.

All steps can be found here: How to export more than 1000 issues using Google Sheets

Workaround 2

Use the pager/start URL parameter to work around this issue, adjusting the index value to export issues from 1001 onwards. You will need to merge the results manually. (A short sketch after the steps below shows how the paged URLs line up.)

  1. Run a search in the Issue Navigator that returns all the issues to be exported (the example below assumes more than 1000 issues).
  2. Export the first 1000 issues using the standard export feature (Export > Export Excel CSV (all fields)).
  3. For the next thousand issues, right-click the same export option and open it in a new tab. You should get a URL string like the one below:

    https://<instance name>/sr/jira.issueviews:searchrequest-csv-all-fields/temp/SearchRequest.xls?jqlQuery=&tempMax=1000

    Cancel the download, as we only require the URL string.

  4. Append &pager/start=1000 to the end of the URL string to instruct JIRA to export from index 1001 onwards. Example:

    https://<instance name>/sr/jira.issueviews:searchrequest-csv-all-fields/temp/SearchRequest.xls?jqlQuery=&tempMax=1000&pager/start=1000

    Visit this URL in your browser and save the file when prompted. For the subsequent 1000 issues, adjust the index value accordingly, for example &pager/start=2000 (and so forth).

  5. Open the exported CSV files and manually consolidate them if necessary.
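
If it helps to see how the paged URLs line up, below is a minimal Python sketch (the <instance name> placeholder and the 3500-issue total are illustrative assumptions):

    # Print the export URL for each 1000-issue page; pager/start is a 0-based offset
    base = ("https://<instance name>/sr/jira.issueviews:searchrequest-csv-all-fields"
            "/temp/SearchRequest.xls?jqlQuery=&tempMax=1000")
    for start in range(0, 3500, 1000):  # assumes roughly 3500 matching issues
        print(f"{base}&pager/start={start}")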

Workaround 3

You can also automate Workaround 2 with scripting. You'll need to create an API token for authentication first. Here's an example Python script.

Credit goes to Stephen from GLiNTECH Pty Ltd for a script that accounts for a variable number of custom field columns.

## 
# Description: Export more than 1000 jira issues
#   Originally taken from https://confluence.atlassian.com/jirakb/export-over-1000-results-to-excel-from-jira-cloud-779160833.html
# Author: GLiNTECH Pty Ltd - www.glintech.com
# How to run:
#   python3 export-issues.py email api_token 'url'
## 

import os
import sys
import requests
import glob
import csv
import time
from requests.auth import HTTPBasicAuth

def count_headers(headers):
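    # Tally how many times each header name appears; returns a list of [header, count] pairs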
    headercounts = []
    for header in headers:
        counted = False
        for headercount in headercounts:
            if headercount[0] == header:
                headercount[1] += 1
                counted = True
        if not counted:
            headercounts.append([header,1])
    return headercounts

def get_row_count(file):
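    # Count the rows in a CSV file, including the header row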
    with open(file, 'r', encoding='utf-8') as infile:
        reader = csv.reader(infile)
        return sum(1 for row in reader)

def get_headers(file):
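    # Return the first (header) row of a CSV file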
    with open(file, 'r', encoding='utf-8') as infile:
        reader = csv.reader(infile)
        for i, row in enumerate(reader):
            if i==0:
                return row

def insert_empty_column(file, header, index):
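    # Rewrite the CSV in place, adding an empty column named 'header' at position 'index'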
    with open(file, 'r', encoding='utf-8') as infile:
        with open('temp_' + file, 'w', newline='', encoding='utf-8') as outfile:
            reader = csv.reader(infile)
            writer = csv.writer(outfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            for i, row in enumerate(reader):
                if i==0:
                    row.insert(index, header)
                    writer.writerow(row)
                else:
                    row.insert(index, '')
                    writer.writerow(row)
    os.remove(file)
    os.rename('temp_' + file, file)

def remove_columns(file, indices):
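    # Rewrite the CSV in place, dropping the columns at the given indices
    # (highest index first, so earlier pops do not shift later positions)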
    indices.reverse()
    with open(file, 'r', encoding='utf-8') as infile:
        with open('temp_' + file, 'w', newline='', encoding='utf-8') as outfile:
            reader = csv.reader(infile)
            writer = csv.writer(outfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            for i, row in enumerate(reader):
                for j in range(len(indices)):
                    row.pop(indices[j])
                writer.writerow(row)
    os.remove(file)
    os.rename('temp_' + file, file)

def find_empty_column_indices(file):
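    # Return the indices of columns that contain no data in any row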
    column_count = len(get_headers(file))
    row_count = get_row_count(file)
    empty_column_indices = []
    for i in range(column_count):
        with open(file, 'r', encoding='utf-8') as infile:
            reader = csv.reader(infile)
            for j, row in enumerate(reader):
                if j > 0 and row[i] != '':
                    break
            # The column is empty only if the scan reached the last row and
            # found no value there either
            if j == row_count - 1 and row[i] == '':
                empty_column_indices.append(i)
    return empty_column_indices

def dl_export():
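    # Download every page of the CSV export, align columns across pages, and merge them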
    args = sys.argv[1:]
    
    if len(args) != 3:
        print("Expected exactly three arguments: email, API token and URL")
        print("Example:")
        print(f"python3 {sys.argv[0]} email@example.com api_token 'https://example.com/link'")
        sys.exit(1)
    
    page = 1000 # issue count per page, default 1000

    email = args[0]
    token = args[1]
    url = args[2].replace("tempMax=1000",f"tempMax={page}")

    auth = HTTPBasicAuth(email,token)

    start = 0
    while True:
        response = requests.get(
            f"{url}&pager/start={start}",
            auth=auth
        )
        # Fail fast on a bad response instead of looping forever
        response.raise_for_status()

        with open(f"file{start}.csv", 'wb') as f:
            f.write(response.content)
        # An empty file means there are no more issues to export
        if os.path.getsize(f"file{start}.csv") == 0:
            os.remove(f"file{start}.csv")
            break
        start = start + page

    # Sort by numeric start offset so the pages merge in the right order
    files = sorted(glob.glob("file*.csv"), key=lambda name: int(name[4:-4]))

    #generate maximum header counts across all csv exports
    maxheadercounts = []
    for i, file in enumerate(files):
        headercounts = []
        with open(file, 'r', encoding='utf-8') as f:
            reader = csv.reader(f)
            for j, row in enumerate(reader):
                if j==0:
                    headercounts = count_headers(row)
                else:
                    break
        if len(maxheadercounts) == 0:
            maxheadercounts = headercounts
        
        newheaders = []
        for headercount in headercounts:
            for k in range(len(maxheadercounts)):
                if headercount[0]==maxheadercounts[k][0]:
                    if headercount[1] > maxheadercounts[k][1]:
                        maxheadercounts[k][1] = headercount[1]
                    for newheader in reversed(newheaders):
                        maxheadercounts.insert(k, newheader)
                    newheaders=[]
                    break
                elif k == len(maxheadercounts) - 1:
                    newheaders.append(headercount)
        if len(newheaders) > 0:
            for newheader in newheaders:
                maxheadercounts.append(newheader)

    #generate the header for the merged csv exports
    finalheaders = []
    for header in maxheadercounts:
        for i in range(header[1]):
            finalheaders.append(header[0])


    #insert empty columns for multivalue fields
    for file in files:
        while True:
            headers = get_headers(file) 
            if len(finalheaders) == len(headers):
                break
            for i in range(len(headers)):
                if headers[i] != finalheaders[i]:
                    insert_empty_column(file, finalheaders[i], i)
                    break

    # Combine the csv files into one
    output_csv = "full_export.csv"
    with open(output_csv, 'w', encoding='utf-8') as outfile:
        for i, file in enumerate(files):
            with open(file, 'r', encoding='utf-8') as f:
                if i == 0:
                    outfile.write(f.read())
                else:
                    # Skip the header row of every file after the first
                    for k, line in enumerate(f):
                        if k:
                            outfile.write(line)
            os.remove(file)

    # Remove empty columns from the combined csv
    empty_column_indices = find_empty_column_indices(output_csv)
    remove_columns(output_csv, empty_column_indices)        
    

    print("Export completed to file : " + output_csv)

if __name__ == "__main__":
    start = time.time()
    dl_export()
    end = time.time()
    print(f"took {end - start:.1f} seconds")
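
For example, assuming an API token created for your Atlassian account and the URL captured in step 3 of Workaround 2:

    python3 export-issues.py email@example.com api_token 'https://<instance name>/sr/jira.issueviews:searchrequest-csv-all-fields/temp/SearchRequest.xls?jqlQuery=&tempMax=1000'

The script keeps requesting pages until JIRA returns an empty file, then merges everything into full_export.csv.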