Data pipeline
Requirements
To trigger data exports through the REST API, you’ll need:
- A valid Jira Data Center license
- Jira system administrator permissions. See Security overview for more information about supported API authentication methods.
Considerations
There are a number of security and performance impacts you’ll need to consider before getting started.
Security
If you need to filter out data based on security and confidentiality, this must be done after the data is exported.
Exported files are saved in your shared home directory, so you’ll also want to check this is secured appropriately.
Performance impact
To minimize the risk of performance problems, we strongly recommend that you:
- Perform the data export during hours of low activity, or on a node with no activity.
- Limit the amount of data exported through the `fromDate` parameter, as a date further in the past will export more data, resulting in a longer data export.
| Number of issues | Approximate export duration (Jira Software installed) | Approximate export duration (Jira Software + Jira Service Management installed) |
|---|---|---|
| 1 million | 15 minutes | 30 minutes to 2 hours |
| 7 million | 2 hours | 3-6 hours |
| 30 million | 9 hours | 12-24 hours |
Test performance vs production
The performance data presented here is based on our own internal regression testing. The actual duration and impact of data export on your own environment will likely differ depending on your infrastructure, the applications installed (such as Jira Software and Jira Service Management), configuration, and load.
We used Jira Performance Tests to test a data export's performance on a Jira Data Center environment on AWS. This environment had one c5.9xlarge Jira node and one PostgreSQL database. To test user load, we used 24 virtual users across 2 virtual user nodes.
Performing the data export
To export your data, use the `/export` REST API endpoint:

https://<base-url>/rest/datapipeline/latest/export?fromDate=<yyyy-MM-ddTHH:mmTZD>

The `fromDate` parameter limits the amount of data exported. That is, only data on entities created or updated after the `fromDate` value will be exported. If you trigger the export without the `fromDate` parameter, all data from the last 365 days will be exported.

If your application is configured to use a context path, such as /jira or /confluence, remember to include this in the `<base-url>`.

The `/export` REST API endpoint has three methods:
- POST: starts a data export
- GET: checks the status of an export
- DELETE: cancels an export
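As a minimal sketch, here's how you might trigger an export with Python's `requests` library, assuming basic authentication is enabled; the base URL, credentials, and date below are placeholders:

```python
import requests

BASE_URL = "https://jira.example.com"  # placeholder (include your context path, if any)
AUTH = ("admin-username", "admin-password")  # placeholder administrator credentials

# Trigger an export of data created or updated after the given date.
# Note the fromDate value ends in a time zone designator (TZD), here "Z".
response = requests.post(
    f"{BASE_URL}/rest/datapipeline/latest/export",
    params={"fromDate": "2020-12-30T00:00Z"},
    auth=AUTH,
)
response.raise_for_status()
print(response.status_code)
```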
Automatic data export cancellations
If the node running a data export is gracefully shut down, the export will be automatically marked as CANCELLED. However, if the JVM is not notified after a crash or hardware-level failure occurs, the export process may get locked. This means you'll need to manually mark the export as CANCELLED by making a DELETE request. This releases the process lock, allowing you to perform another data export.
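For example, here's a sketch of manually cancelling a locked export, assuming the same `/export` endpoint accepts the DELETE method and basic authentication (placeholder values as before):

```python
import requests

BASE_URL = "https://jira.example.com"  # placeholder
AUTH = ("admin-username", "admin-password")  # placeholder administrator credentials

# Mark the stuck export as CANCELLED to release the process lock
response = requests.delete(f"{BASE_URL}/rest/datapipeline/latest/export", auth=AUTH)
response.raise_for_status()
```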
Configuring the data export
You can configure the format of the export data through the following system properties.
| Property | Default value | Description |
|---|---|---|
| plugin.data.pipeline.embedded.line.break.preserve | false | Specifies whether embedded line breaks should be preserved in the output files. Line breaks can be problematic for some tools, such as Hadoop. This property is set to false by default, which means that line breaks are escaped. |
| plugin.data.pipeline.embedded.line.break.escape.char | \\n | Escaping character for embedded line breaks. By default, we'll print \n for every embedded line break. |
Check the status of an export
You can check the status of an export and view when your last export ran from within your application’s admin console. To view data export status:
- In the upper-right corner of the screen, select Administration > System.
- Select Data pipeline.
The export status is shown as one of the following:
- Not started - no export is currently running
- Started - the export is currently running
- Completed - the export has completed
- Cancellation requested - a cancellation request has been sent
- Cancelled - the export was cancelled
- Failed - the export failed.
For help resolving failed or cancelled exports, see Data pipeline troubleshooting.
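You can also check the status programmatically. Here's a sketch using the GET method described above, with the usual placeholder values (the exact response payload may vary by version):

```python
import requests

BASE_URL = "https://jira.example.com"  # placeholder
AUTH = ("admin-username", "admin-password")  # placeholder administrator credentials

# The response body reports the current export status
response = requests.get(f"{BASE_URL}/rest/datapipeline/latest/export", auth=AUTH)
response.raise_for_status()
print(response.json())
```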
Output files
Each time you perform a data export, we assign a numerical job ID to the task (starting with 1 for your first ever data export). This job ID is used in the file names and location of the files containing your exported data.
Location of exported files
Exported data is saved as separate CSV files. The files are saved to the following directory:
- `<shared-home>/data-pipeline/export/<job-id>` if you run Jira in a cluster
- `<local-home>/data-pipeline/export/<job-id>` if you are using non-clustered Jira

Within the `<job-id>` directory you will see the following files:
- `issues_job<job_id>_<timestamp>.csv` (for issues)
- `issue_fields_job<job_id>_<timestamp>.csv` (for Jira Software and Jira Service Management fields)
- `sla_cycles_job<job_id>_<timestamp>.csv` (for SLA cycle information, if Jira Service Management is installed)
To load and transform the data in this export, you'll need to understand its schema. See Data pipeline export schema.
Sample Spark and Hadoop import configurations
If you have an existing Spark or Hadoop instance, you can configure it to import your data for further transformation, as in the sketch below.
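As a starting point, here's a minimal PySpark sketch for loading the issues CSV into a DataFrame. The path and job ID are placeholders, and the `multiLine`, `quote`, and `escape` options assume you've preserved embedded line breaks via the system property described above:

```python
from pyspark.sql import SparkSession

# Placeholder path - point this at your actual <job-id> export directory
EXPORT_DIR = "/shared-home/data-pipeline/export/1"

spark = SparkSession.builder.appName("JiraDataPipelineImport").getOrCreate()

# The export files are standard CSVs with a header row; with line-break
# preservation enabled, quoted fields can span multiple lines, hence the
# multiLine option and matching quote/escape characters.
issues_df = (
    spark.read
    .option("header", "true")
    .option("multiLine", "true")
    .option("quote", '"')
    .option("escape", '"')
    .csv(f"{EXPORT_DIR}/issues_job1_*.csv")
)

issues_df.printSchema()
issues_df.show(5)
```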