Data pipeline

Data pipeline provides an easy way to export data from your Jira or Confluence site, and feed it into your existing data platform (like Tableau or PowerBI). This allows you to:
  • generate richer reports and visualizations of site activity
  • better understand how your teams are using your application
  • make better decisions on optimizing the use of Jira or Confluence in your organization

You can trigger a data export of the current state data through the REST API, and view the status of your exports in your application’s admin console. Data will be exported in CSV format. You can only perform one data export at a time.

For a detailed reference of the exported data's schema, see Data pipeline export schema.

Data pipeline is available in Data Center editions of:

  • Jira 8.14 and later
  • Confluence 7.12 and later


Requirements

To trigger data exports through the REST API, you’ll need:

Considerations

There are a number of security and performance impacts you’ll need to consider before getting started.

Security

The export will include all data, including PII (Personally Identifiable Information) and restricted content. This is to provide you with as much data as possible, so you can filter and transform it to generate the insights you're after.

If you need to filter out data based on security and confidentiality, this must be done after the data is exported.

Exported files are saved in your shared home directory, so you'll also want to check that this location is secured appropriately.

Performance impact

Exporting data is a resource-intensive process impacting application nodes, your database, and indexes. In our internal testing, we observed performance degradation in all product functions on the node actively performing an export.

To minimize the risk of performance problems, we strongly recommend that you:

  • Perform the data export during hours of low activity, or on a node with no activity.
  • Limit the amount of data exported through the fromDate parameter, as a date further in the past will export more data, resulting in a longer data export. 

Our test results also showed the following approximate export durations by number of issues:

  Number of issues | Jira Software installed | Jira Software + Jira Service Management installed
  1 million        | 15 minutes              | 30 minutes to 2 hours
  7 million        | 2 hours                 | 3-6 hours
  30 million       | 9 hours                 | 12-24 hours

Test performance vs. production

The performance data presented here is based on our own internal regression testing. The actual duration and impact of a data export on your own environment will likely differ depending on your infrastructure, the applications installed (for example, whether Jira Service Management is installed alongside Jira Software), configuration, and load.

We used Jira Performance Tests to test a data export's performance on a Jira Data Center environment on AWS. This environment had one c5.9xlarge Jira node and one PostgreSQL database. To test user load, we used 24 virtual users across 2 virtual user nodes.

Performing the data export

To export the current state data, use the /export REST API endpoint:
https://<base-url>/rest/datapipeline/latest/export?fromDate=<yyyy-MM-ddTHH:mmTZD>

The fromDate parameter limits the amount of data exported. That is, only data on entities created or updated after the fromDate value will be exported.

If you trigger the export without the fromDate parameter, all data from the last 365 days will be exported. 

If your application is configured to use a context path, such as /jira or /confluence, remember to include this in the <base-url>.

The /export REST API endpoint has three methods: POST, GET, and DELETE.

POST method

When you use the POST method, specify a fromDate value. This parameter only accepts date values set in ISO 8601 format (yyyy-MM-ddTHH:mmTZD). For example:

  • 2020-12-30T23:01Z

  • 2020-12-30T22:01+01:00
    (you'll need to use URL encoding in your request, for example 2020-12-30T22%3A01%2B01%3A00)

Here is an example request, using cURL and a personal access token for authentication:

curl -H "Authorization: Bearer ABCD1234" -H "X-Atlassian-Token: no-check" \
  -X POST "https://myexamplesite.com/rest/datapipeline/latest/export?fromDate=2020-10-22T01:30:11Z"

The "X-Atlassian-Token: no-check" header is only required for Confluence. You can omit this for Jira.
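The same request can be issued from any HTTP client. Here is a minimal sketch in Python (the base URL and token are placeholders, as in the cURL example above); the main thing to get right is URL-encoding the fromDate value:

```python
# Sketch of triggering the export from Python instead of cURL; the site URL
# and token below are placeholders, not real values.
from urllib.parse import quote
from urllib.request import Request

BASE_URL = "https://myexamplesite.com"  # placeholder; include your context path if you have one
TOKEN = "ABCD1234"                      # placeholder personal access token

def build_export_request(from_date: str) -> Request:
    """Build the POST request, URL-encoding the fromDate value (':' and '+')."""
    url = f"{BASE_URL}/rest/datapipeline/latest/export?fromDate={quote(from_date, safe='')}"
    return Request(url, method="POST", headers={
        "Authorization": f"Bearer {TOKEN}",
        "X-Atlassian-Token": "no-check",  # only required for Confluence
    })

req = build_export_request("2020-12-30T22:01+01:00")
# urllib.request.urlopen(req) would send it; a 202 response means the export started
```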

The POST request has the following responses:

202: Data export started. For example:

{
  "startTime":"2021-03-03T12:08:24.045+11:00",
  "nodeId":"node1",
  "jobId":124,
  "status":"STARTED",
  "config":{
     "exportFrom":"2020-03-03T12:08:24.036+11:00",
     "forcedExport":false
  }
}

409: Another data export is already running:

{
  "startTime":"2021-03-03T12:08:24.045+11:00",
  "nodeId":"node1",
  "jobId":124,
  "status":"STARTED",
  "config":{
     "exportFrom":"2020-03-03T12:08:24.036+11:00",
     "forcedExport":false
  }
}

422: Data export failed due to an inconsistent index:

{
  "startTime": "2021-01-13T09:01:01.917+11:00",
  "completedTime": "2021-01-13T09:01:01.986+11:00",
  "nodeId": "node2",
  "jobId": 56,
  "status": "FAILED",
  "config": {
    "exportFrom": "2020-07-17T08:00:00+10:00",
    "forcedExport": false
  },
  "errors": [
    {
      "key": "export.pre.validation.failed",
      "message": "Inconsistent index used for export job."
    }
  ]
}

If this occurs, you may need to reindex and then retry the data export.

Alternatively, you can force a data export using the forceExport=true query parameter. However, forcing an export on an inconsistent index could result in incomplete data.

The following response is returned when you force an export on an inconsistent index, warning you that the data might be incomplete:

{
  "startTime": "2021-01-13T09:01:42.696+11:00",
  "nodeId": "node2",
  "jobId": 57,
  "status": "STARTED",
  "config": {
    "exportFrom": "2020-07-17T08:01:00+10:00",
    "forcedExport": true
  },
  "warnings": [
    {
      "key": "export.pre.validation.failed",
      "message": "Inconsistent index used for export job."
    }
  ]
}

GET method

The GET request returns a 200 code, but the response will be different depending on what stage the export is in:
Before you start the first export
{}
During an export
{
  "startTime": "2020-11-01T06-35-41-577+11",
  "nodeId": "node1",
  "jobId": 125,
  "status": "STARTED",
  "config":{
     "exportFrom":"2020-03-03T12:08:24.036+11:00",
     "forcedExport":false
  }
}
After a successful export
{
  "startTime":"2021-03-03T12:08:24.045+11:00",
  "completedTime":"2021-03-03T12:08:24.226+11:00",
  "nodeId":"node3",
  "jobId":125,
  "status":"COMPLETED",
  "config": {
    "exportFrom":"2020-03-03T12:08:24.036+11:00",
    "forcedExport":false 
  },
  "statistics": {
    "exportedEntities":23,
    "writtenRows":54
  }
}
After a cancellation request, but before the export is actually cancelled
{
  "startTime":"2021-03-03T12:08:24.045+11:00",
  "completedTime":"2021-03-03T12:08:24.226+11:00",
  "nodeId":"Node1",
  "jobId":125,
  "status":"CANCELLATION_REQUESTED",
  "config": {
    "exportFrom":"2020-03-03T12:08:24.036+11:00",
    "forcedExport":false 
  }
}
After an export is cancelled
{
  "startTime": "2020-11-02T04-20-34-007+11",
  "cancelledTime": "2020-11-02T04-24-21-717+11",
  "completedTime": "2020-11-02T04-24-21-717+11",
  "nodeId":"node2",
  "jobId":125,
  "status":"CANCELLED",
  "config": {
    "exportFrom":"2020-03-03T12:08:24.036+11:00",
    "forcedExport":false 
  },
  "statistics": {
    "exportedEntities":23,
    "writtenRows":12
  }
}
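A client polling the GET endpoint only needs to check whether the status field has reached a terminal value. A minimal sketch of that check (the polling loop and authentication are omitted):

```python
import json

# Statuses after which polling can stop
TERMINAL_STATUSES = {"COMPLETED", "CANCELLED", "FAILED"}

def export_finished(body: str) -> bool:
    """True once a GET /export response reports a terminal status ('{}' means no export has run)."""
    return json.loads(body).get("status") in TERMINAL_STATUSES
```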

DELETE method

The DELETE request has the following responses:

200: Cancellation accepted:

{
  "status": "OK",
  "message": "Cancellation request successfully received. Currently running export job will be stopped shortly."
}

409: Request discarded because there is no ongoing export:

{
  "status": "WARNING",
  "message": "Cancellation request aborted. There is no export job running to cancel."
}

Automatic data export cancellations

If a node running a data export is gracefully shut down, the export will be automatically marked as CANCELLED.

However, if the JVM is not notified after a crash or hardware-level failure occurs, the export process may get locked. This means you'll need to manually mark the export as CANCELLED by making a DELETE request. This releases the process lock, allowing you to perform another data export.
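The cancellation itself is a plain DELETE against the same endpoint. A minimal sketch (the base URL and token are placeholders):

```python
from urllib.request import Request

BASE_URL = "https://myexamplesite.com"  # placeholder
cancel = Request(
    f"{BASE_URL}/rest/datapipeline/latest/export",
    method="DELETE",
    headers={"Authorization": "Bearer ABCD1234"},  # placeholder token
)
# urllib.request.urlopen(cancel) would send it; a 200 response means the
# cancellation request was accepted
```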

Configuring the data export

You can configure the format of the exported data through the following system properties.

plugin.data.pipeline.embedded.line.break.preserve
Default value: false
Specifies whether embedded line breaks should be preserved in the output files. Line breaks can be problematic for some tools, such as Hadoop. This property is set to false by default, which means that line breaks are escaped.

plugin.data.pipeline.embedded.line.break.escape.char
Default value: \\n
The escape sequence used for embedded line breaks. By default, \n is written for every embedded line break.
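To illustrate the default escaping behaviour (this mimics the documented effect; it is not the plugin's actual implementation), an embedded line break in a field value is replaced by a literal \n sequence:

```python
def escape_embedded_breaks(value: str, escape_seq: str = "\\n") -> str:
    """Illustration only: replace real line breaks with the configured escape sequence."""
    return value.replace("\r\n", escape_seq).replace("\n", escape_seq)

# escape_embedded_breaks("line one\nline two") replaces the newline with the
# two-character sequence backslash-n
```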

Check the status of an export

You can check the status of an export and view when your last export ran from within your application’s admin console. To view data export status:

  1. In the upper-right corner of the screen, select Administration > System.
  2. Select Data pipeline.

There are a number of export statuses:
  • Not started - no export is currently running
  • Started - the export is currently running
  • Completed - the export has completed
  • Cancellation requested - a cancellation request has been sent
  • Cancelled - the export was cancelled
  • Failed - the export failed

For help resolving failed or cancelled exports, see Data pipeline troubleshooting.

Output files 

Each time you perform a data export, we assign a numerical job ID to the task (starting with 1 for your first ever data export). This job ID is used in the file name and location of the files containing your exported data.

Location of exported files 

Exported data is saved as separate CSV files. The files are saved to the following directory:

  • <shared-home>/data-pipeline/export/<job-id> if you run Jira in a cluster
  • <local-home>/data-pipeline/export/<job-id> if you are using non-clustered Jira

Within the <job-id> directory you will see the following files:

  • issues_job<job_id>_<timestamp>.csv (for issues)

    View sample content
    id,instance_url,key,url,project_key,project_name,project_type,project_category,issue_type,summary,description,environment,creator_id,creator_name,reporter_id,reporter_name,assignee_id,assignee_name,status,status_category,priority_sequence,priority_name,resolution,watcher_count,vote_count,created_date,resolution_date,updated_date,due_date,estimate,original_estimate,time_spent,parent_id,security_level,labels,components,affected_versions,fix_versions
    10022,http://localhost:8090/jira,TP-23,http://localhost:8090/jira/browse/TP-23,TP,Test Project,Software,,Story,"As a user, I'd like a historical story to show in reports",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,Done,Done,3,Medium,Done,0,0,2020-10-10T08:44:19Z,2020-10-22T17:19:19Z,2020-10-22T17:19:19Z,,,,,,,,,,"[""Version 1.0""]"
    10021,http://localhost:8090/jira,TP-22,http://localhost:8090/jira/browse/TP-22,TP,Test Project,Software,,Story,"As a user, I'd like a historical story to show in reports",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,Done,Done,3,Medium,Done,0,0,2020-10-10T08:44:19Z,2020-10-20T15:41:19Z,2020-10-20T15:41:19Z,,,,,,,,,,"[""Version 1.0""]"
    10020,http://localhost:8090/jira,TP-21,http://localhost:8090/jira/browse/TP-21,TP,Test Project,Software,,Story,"As a user, I'd like a historical story to show in reports",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,Done,Done,3,Medium,Done,0,0,2020-10-10T08:44:19Z,2020-10-18T00:21:19Z,2020-10-18T00:21:19Z,,,,,,,,,,"[""Version 1.0""]"
    10019,http://localhost:8090/jira,TP-20,http://localhost:8090/jira/browse/TP-20,TP,Test Project,Software,,Story,"As a user, I'd like a historical story to show in reports",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,Done,Done,3,Medium,Done,0,0,2020-10-10T08:44:19Z,2020-10-15T17:43:19Z,2020-10-15T17:43:19Z,,,,,,,,,,"[""Version 1.0""]"
    10018,http://localhost:8090/jira,TP-19,http://localhost:8090/jira/browse/TP-19,TP,Test Project,Software,,Story,"As a user, I'd like a historical story to show in reports",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,Done,Done,3,Medium,Done,0,0,2020-10-10T08:44:19Z,2020-10-14T05:08:19Z,2020-10-14T05:08:19Z,,,,,,,,,,"[""Version 2.0""]"
    10017,http://localhost:8090/jira,TP-18,http://localhost:8090/jira/browse/TP-18,TP,Test Project,Software,,Story,"As a user, I'd like a historical story to show in reports",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,Done,Done,3,Medium,Done,0,0,2020-10-10T08:44:19Z,2020-10-11T05:14:19Z,2020-10-11T05:14:19Z,,,,,,,,,,"[""Version 2.0""]"
    10016,http://localhost:8090/jira,TP-17,http://localhost:8090/jira/browse/TP-17,TP,Test Project,Software,,Bug,"Instructions for deleting this sample board and project are in the description for this issue >> Click the ""TP-17"" link and read the description tab of the detail view for more","*To delete this Sample Project _(must be performed by a user with Administration rights)_* \n- Open the administration interface to the projects page by using the keyboard shortcut 'g' then 'g' and typing 'Projects' in to the search dialog\n- Select the ""Delete"" link for the ""Test Project"" project\n\n*To delete the Sample Project workflow and workflow scheme _(must be performed by a user with Administration rights)_* \n- Open the administration interface to the workflow schemes page by using the keyboard shortcut 'g' then 'g' and typing 'Workflow Schemes' in to the search dialog\n- Select the ""Delete"" link for the ""TP: Software Simplified Workflow Scheme"" workflow scheme\n- Go to the workflows page by using the keyboard shortcut 'g' then 'g' and typing 'Workflows' in to the search dialog(_OnDemand users should select the second match for Workflows_)\n- Expand the ""Inactive"" section\n- Select the ""Delete"" link for the ""Software Simplified Workflow  for Project TP"" workflow\n\n*To delete this Board _(must be performed by the owner of this Board or an Administrator)_*\n- Click the ""Tools"" cog at the top right of this board\n- Select ""Delete""",,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,Done,Done,3,Medium,Done,0,0,2020-10-24T09:54:19Z,2020-10-28T02:30:19Z,2020-10-28T02:30:19Z,,,,,,,,,,
    10015,http://localhost:8090/jira,TP-16,http://localhost:8090/jira/browse/TP-16,TP,Test Project,Software,,Story,"As a team, we can finish the sprint by clicking the cog icon next to the sprint name above the ""To Do"" column then selecting ""Complete Sprint"" >> Try closing this sprint now",,,10000,Marek Szczepański,10000,Marek Szczepański,,,Done,Done,3,Medium,Done,0,0,2020-10-03T13:50:19Z,2020-10-25T16:26:19Z,2020-10-25T16:26:19Z,,,,,,,,,,
    10014,http://localhost:8090/jira,TP-15,http://localhost:8090/jira/browse/TP-15,TP,Test Project,Software,,Story,"As a scrum master, I can see the progress of a sprint via the Burndown Chart >> Click ""Reports"" to view the Burndown Chart",,,10000,Marek Szczepański,10000,Marek Szczepański,,,Done,Done,3,Medium,Done,0,0,2020-10-24T09:54:19Z,2020-10-29T00:30:19Z,2020-10-29T00:30:19Z,,,,,,,,,,
    10013,http://localhost:8090/jira,TP-14,http://localhost:8090/jira/browse/TP-14,TP,Test Project,Software,,Story,"As a user, I can find important items on the board by using the customisable ""Quick Filters"" above >> Try clicking the ""Only My Issues"" Quick Filter above",*Creating Quick Filters*\n\nYou can add your own Quick Filters in the board configuration (select *Board > Configure*),,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,To Do,To Do,3,Medium,,0,0,2020-10-30T14:54:19Z,,2020-10-30T14:54:19Z,,,,,,,,,,
    10012,http://localhost:8090/jira,TP-13,http://localhost:8090/jira/browse/TP-13,TP,Test Project,Software,,Bug,"As a developer, I can update details on an item using the Detail View >> Click the ""TP-13"" link at the top of this card to open the detail view","*Editing using the Detail View*\n\nMany of the fields in the detail view can be inline edited by simply clicking on them. \n\nFor other fields you can open Quick Edit, select ""Edit"" from the cog drop-down.",,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,To Do,To Do,3,Medium,,0,0,2020-10-24T09:54:19Z,,2020-10-24T09:54:19Z,,,,,,,,,,"[""Version 2.0""]"
    10011,http://localhost:8090/jira,TP-12,http://localhost:8090/jira/browse/TP-12,TP,Test Project,Software,,Sub-task,"When the last task is done, the story can be automatically closed >> Drag this task to ""Done"" too",,,10000,Marek Szczepański,10000,Marek Szczepański,,,In Progress,In Progress,3,Medium,,0,0,2020-10-28T19:21:19Z,,2020-10-28T19:21:19Z,,,,,10009,,,,,"[""Version 2.0""]"
    10010,http://localhost:8090/jira,TP-11,http://localhost:8090/jira/browse/TP-11,TP,Test Project,Software,,Sub-task,"Update task status by dragging and dropping from column to column >> Try dragging this task to ""Done""",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,In Progress,In Progress,3,Medium,,0,0,2020-10-27T21:56:19Z,,2020-10-27T21:56:19Z,,,,,10009,,,,,"[""Version 2.0""]"
    10009,http://localhost:8090/jira,TP-10,http://localhost:8090/jira/browse/TP-10,TP,Test Project,Software,,Story,"As a developer, I can update story and task status with drag and drop (click the triangle at far left of this story to show sub-tasks)",,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,In Progress,In Progress,3,Medium,,0,0,2020-10-24T09:54:19Z,,2020-10-24T09:54:19Z,,,,,,,,,,"[""Version 2.0""]"
    10008,http://localhost:8090/jira,TP-9,http://localhost:8090/jira/browse/TP-9,TP,Test Project,Software,,Story,"As a developer, I'd like to update story status during the sprint >> Click the Active sprints link at the top right of the screen to go to the Active sprints where the current Sprint's items can be updated",,,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:23Z,,2020-10-31T19:04:23Z,,,,,,,,,,
    10007,http://localhost:8090/jira,TP-8,http://localhost:8090/jira/browse/TP-8,TP,Test Project,Software,,Bug,"As a product owner, I'd like to include bugs, tasks and other issue types in my backlog >> Bugs like this one will also appear in your backlog but they are not normally estimated","*Estimation of Bugs*\n\nScrum teams do not normally apply story point estimates to bugs because bugs are considered to be part of the ongoing work that the team must deal with (i.e the overhead). If you view the story points completed in a sprint as a measure of progress, then bugs also have no value because they do not deliver anything additional to the customer. \n\nHowever, you can apply estimates to bugs if you wish by configuring the ""Story Points"" field to apply to other Issue Types (by default it only applies to Stories and Epics). Some more information on this is in the [documentation|https://confluence.atlassian.com/display/GH/Story+Point].",,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:23Z,,2020-10-31T19:04:23Z,,,,,,,,,,"[""Version 2.0""]"
    10006,http://localhost:8090/jira,TP-7,http://localhost:8090/jira/browse/TP-7,TP,Test Project,Software,,Sub-task,This is a sample task. Tasks are used to break down the steps to implement a user story,,,10000,Marek Szczepański,10000,Marek Szczepański,10000,Marek Szczepański,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:23Z,,2020-10-31T19:04:23Z,,,,,10005,,,,,
    10005,http://localhost:8090/jira,TP-6,http://localhost:8090/jira/browse/TP-6,TP,Test Project,Software,,Story,"As a scrum master, I'd like to break stories down into tasks we can track during the sprint >> Try creating a task by clicking the Sub-Tasks tab in the Detail View on the right",*Task Breakdown*\n\nMany teams choose to break down user stories into a set of tasks needed to implement the story. They then update the status of these tasks during a sprint to track progress. The completion of the last task signals the end of the story. \n\nYou can add sub-tasks to a story on the sub-task tab (folder icon) above.,,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:22Z,,2020-10-31T19:04:22Z,,,,,,,,,,
    10004,http://localhost:8090/jira,TP-5,http://localhost:8090/jira/browse/TP-5,TP,Test Project,Software,,Story,"As a team, I'd like to commit to a set of stories to be completed in a sprint (or iteration) >> Click ""Create Sprint"" then drag the footer down to select issues for a sprint (you can't start a sprint at the moment because one is already active)","*Starting a Sprint*\n\nDuring the Planning Meeting the team will examine the stories at the top of the backlog and determine which they can commit to completing during the coming sprint. Based on this information the Product Owner might break down stories into smaller stories, adjust story priorities or otherwise work with the team to define the ideal sprint outcome. When the sprint is started the stories are moved into the sprint backlog.",,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:21Z,,2020-10-31T19:04:21Z,,,,,,,,,,
    10003,http://localhost:8090/jira,TP-4,http://localhost:8090/jira/browse/TP-4,TP,Test Project,Software,,Story,"As a team, I'd like to estimate the effort of a story in Story Points so we can understand the work remaining >> Try setting the Story Points for this story in the ""Estimate"" field","This story is estimated at 5 Story Points (as shown in the ""Estimate"" field at the top right of the Detail View). \n\nTry updating the Story Point estimate to 4 by clicking on the ""Estimate"" then typing.\n\n*Estimating using Story Points*\n\nBecause the traditional process of estimating tasks in weeks or days is often wildly inaccurate, many Scrum teams estimate in Story Points instead. Story Points exist merely as a way to estimate a task's difficulty compared to some other task (for example, a 10-point story would probably take double the effort of a 5-point story). As teams mature with Scrum they tend to achieve a consistent number of Story Points from Sprint to Sprint -- this is termed the team's _velocity_. This allows the Product Owner to use the velocity to predict how many Sprints it will take to deliver parts of the backlog. \n\nMany teams use Planning Poker to achieve consensus on Story Point estimates.\n\n*Using Other Estimation Units*\n\nYou can configure JIRA Software to use time-based estimates if you wish. In the configuration for the board, on the ""Estimation"" tab, select ""Original Time Estimate"" as your Estimation Statistic. If you also wish to track the time used during the Sprint, select ""Remaining Estimate and Time Spent"" to enable Time Tracking in JIRA Software.",,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:21Z,,2020-10-31T19:04:21Z,,,,,,,,,,"[""Version 3.0""]"
    10002,http://localhost:8090/jira,TP-3,http://localhost:8090/jira/browse/TP-3,TP,Test Project,Software,,Story,"As a product owner, I'd like to rank stories in the backlog so I can communicate the proposed implementation order >> Try dragging this story up above the previous story",*About the Product Backlog*\n\nThe backlog is the source of truth for the order of work to be completed. It is expected that the Product Owner will work with the team to make sure that the backlog represents the current approach to delivering the product. JIRA Software makes it easy to prioritise (rank) Stories by dragging them up and down the backlog.,,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:21Z,,2020-10-31T19:04:21Z,,,,,,,,,,"[""Version 3.0""]"
    10001,http://localhost:8090/jira,TP-2,http://localhost:8090/jira/browse/TP-2,TP,Test Project,Software,,Story,"As a product owner, I'd like to express work in terms of actual user problems, aka User Stories, and place them in the backlog >> Try creating a new story with the ""+ Create Issue"" button (top right of screen)","When you click ""+ Create Issue"" you will be asked for the correct project (select ""Test Project"") and Issue Type (select ""Story"").\n\n*About User Stories*\n\nThe Scrum methodology drops traditional software requirement statements in favour of real world problems expressed as User Stories. Stories describe the task a particular user is trying to achieve and its value. They are typically of the form ""As a (role) I want (something) so that (benefit)"". This approach focuses the team on the core user need rather than on implementation details. \n\nStories are ""placeholders for a conversation"" -- they do not need to be especially detailed since it is expected that the team will work together to resolve ambiguity as the story is developed. \n\nStories to be implemented in the future are stored in the Product Backlog. The backlog is ranked by the Product Owner so that the next items to be completed are at the top.",,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:21Z,,2020-10-31T19:04:21Z,,,,,,,,,,"[""Version 2.0""]"
    10000,http://localhost:8090/jira,TP-1,http://localhost:8090/jira/browse/TP-1,TP,Test Project,Software,,Story,"As an Agile team, I'd like to learn about Scrum >> Click the ""TP-1"" link at the left of this row to see detail in the Description tab on the right","*About Scrum*\n\nScrum is an iterative approach to Agile software development. The methodology has been around since the 1980s but was popularised by Jeff Sutherland and Ken Schwaber. \n\nScrum breaks the development of a product down in to discrete iterations (termed Sprints) that each deliver functionality that could potentially be shipped to users.\n\nThe Scrum Alliance offers an excellent [introduction to Scrum|http://www.scrumalliance.org/resources/47] that provides an overview of key Scrum concepts, stakeholders, processes and artefacts.",,10000,Marek Szczepański,10000,Marek Szczepański,,,To Do,To Do,3,Medium,,0,0,2020-10-31T19:04:19Z,,2020-10-31T19:04:19Z,,,,,,,,,,"[""Version 2.0""]"
  • issue_fields_job<job_id>_<timestamp>.csv (for Jira Software and Jira Service Management fields)

    View sample content
    issue_id,field_id,field_name,field_value
    10022,story_points,Story Points,2.0
    10022,sprint,Sprint,"[{""id"":2,""name"":""Sample Sprint 1"",""goal"":null,""boardId"":1,""state"":""CLOSED"",""startDate"":""2020-10-10T08:44:26Z"",""endDate"":""2020-10-24T08:44:26Z"",""completeDate"":""2020-10-24T07:24:26Z""}]"
    10021,story_points,Story Points,2.0
    10021,sprint,Sprint,"[{""id"":2,""name"":""Sample Sprint 1"",""goal"":null,""boardId"":1,""state"":""CLOSED"",""startDate"":""2020-10-10T08:44:26Z"",""endDate"":""2020-10-24T08:44:26Z"",""completeDate"":""2020-10-24T07:24:26Z""}]"
    10020,story_points,Story Points,1.0
    10020,sprint,Sprint,"[{""id"":2,""name"":""Sample Sprint 1"",""goal"":null,""boardId"":1,""state"":""CLOSED"",""startDate"":""2020-10-10T08:44:26Z"",""endDate"":""2020-10-24T08:44:26Z"",""completeDate"":""2020-10-24T07:24:26Z""}]"
    10019,story_points,Story Points,3.0
    10019,sprint,Sprint,"[{""id"":2,""name"":""Sample Sprint 1"",""goal"":null,""boardId"":1,""state"":""CLOSED"",""startDate"":""2020-10-10T08:44:26Z"",""endDate"":""2020-10-24T08:44:26Z"",""completeDate"":""2020-10-24T07:24:26Z""}]"
    10018,story_points,Story Points,5.0
    10018,sprint,Sprint,"[{""id"":2,""name"":""Sample Sprint 1"",""goal"":null,""boardId"":1,""state"":""CLOSED"",""startDate"":""2020-10-10T08:44:26Z"",""endDate"":""2020-10-24T08:44:26Z"",""completeDate"":""2020-10-24T07:24:26Z""}]"
    10017,story_points,Story Points,3.0
    10017,sprint,Sprint,"[{""id"":2,""name"":""Sample Sprint 1"",""goal"":null,""boardId"":1,""state"":""CLOSED"",""startDate"":""2020-10-10T08:44:26Z"",""endDate"":""2020-10-24T08:44:26Z"",""completeDate"":""2020-10-24T07:24:26Z""}]"
    10016,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null}]"
    10015,story_points,Story Points,2.0
    10015,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null},{""id"":2,""name"":""Sample Sprint 1"",""goal"":null,""boardId"":1,""state"":""CLOSED"",""startDate"":""2020-10-10T08:44:26Z"",""endDate"":""2020-10-24T08:44:26Z"",""completeDate"":""2020-10-24T07:24:26Z""}]"
    10014,story_points,Story Points,4.0
    10014,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null}]"
    10013,story_points,Story Points,3.0
    10013,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null}]"
    10012,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null}]"
    10011,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null}]"
    10010,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null}]"
    10009,story_points,Story Points,5.0
    10009,sprint,Sprint,"[{""id"":1,""name"":""Sample Sprint 2"",""goal"":null,""boardId"":1,""state"":""ACTIVE"",""startDate"":""2020-10-24T09:54:23Z"",""endDate"":""2020-11-07T10:14:23Z"",""completeDate"":null}]"
    10008,story_points,Story Points,3.0
    10005,story_points,Story Points,1.0
    10004,story_points,Story Points,1.0
    10003,story_points,Story Points,5.0
    10002,story_points,Story Points,5.0
    10001,story_points,Story Points,2.0
    10000,story_points,Story Points,2.0
  • sla_cycles_job<job_id>_<timestamp>.csv (for SLA cycle information, if Jira Service Management is installed)

    View sample content
    issue_id, sla_id, sla_name, cycle_type, start_time, stop_time, paused, goal_duration, elapsed_time, remaining_time
    
    10000, 1, Time to first response, Ongoing, 2020-01-10T12:50:30Z, 2020-01-10T12:50:30Z, true, 14400000, 14400000, 14400000
    10000, 1, Time to first response, Completed, 2020-01-10T12:50:30Z, 2020-01-10T12:50:30Z,, 14400000, 14400000, 14400000
    10000, 1, Time to first response, Completed, 2020-01-10T12:50:30Z, 2020-01-10T12:50:30Z,, 14400000, 14400000, 14400000
    10000, 1, Time to first response, Completed, 2020-01-10T12:50:30Z, 2020-01-10T12:50:30Z,, 14400000, 14400000, 14400000
    10000, 2, Time to approve normal change, Completed, 2020-01-10T12:50:30Z, 2020-01-10T12:50:30Z,, 14400000, 14400000, 14400000
    10000, 2, Time to approve normal change, Completed, 2020-01-10T12:50:30Z, 2020-01-10T12:50:30Z,, 14400000, 14400000, 14400000

To load and transform the data in this export, you'll need to understand its schema. See Data pipeline export schema.
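Because fields can contain embedded commas, quotes, and (if preserved) line breaks, parse the files with a proper CSV reader rather than splitting on commas. A minimal sketch using Python's csv module, with a sample row drawn from the issue fields file above:

```python
import csv
import io

def read_export_rows(csv_text: str) -> list[dict]:
    """Parse exported CSV text (first row is the header) into a list of dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

sample = "issue_id,field_id,field_name,field_value\n10022,story_points,Story Points,2.0\n"
rows = read_export_rows(sample)
# rows[0]["field_value"] == "2.0"
```

In practice you would read the files from <shared-home>/data-pipeline/export/<job-id>, opening them with UTF-8 encoding.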

Sample Spark and Hadoop import configurations

If you have an existing Spark or Hadoop instance, use the following references to configure how to import your data for further transformation:

Spark/Databricks

Sample Notebook Configuration
%python
# File location
file_location = "/FileStore/**/export_2020_09_24T03_32_18Z.csv" 

# Automatically set data type for columns
infer_schema = "true"
# Skip first row as it's a header
first_row_is_header = "true"
# Ignore multiline within double quotes
multiline_support = "true"

# The applied options are for CSV files. For other file types, these will be ignored. Note the escape and quote options for RFC 4180 compliant files
df = spark.read.format("csv") \
  .option("inferSchema", infer_schema) \
  .option("header", first_row_is_header) \
  .option("multiLine", multiline_support) \
  .option("quote", "\"") \
  .option("escape", "\"") \
  .option("encoding", "UTF-8").load(file_location)

display(df)

Hadoop

Create table script
CREATE EXTERNAL TABLE IF NOT EXISTS some_db.datapipeline_export (
  `id` string,
  `instance_url` string,
  `key` string,
  `url` string,
  `project_key` string,
  `project_name` string,
  `project_type` string,
  `project_category` string,
  `issue_type` string,
  `summary` string,
  `description` string,
  `environment` string,
  `creator_id` string,
  `creator_name` string,
  `reporter_id` string,
  `reporter_name` string,
  `assignee_id` string,
  `assignee_name` string,
  `status` string,
  `status_category` string,
  `priority_sequence` string,
  `priority_name` string,
  `resolution` string,
  `watcher_count` string,
  `vote_count` string,
  `created_date` string,
  `resolution_date` string,
  `updated_date` string,
  `due_date` string,
  `estimate` string,
  `original_estimate` string,
  `time_spent` string,
  `parent_id` string,
  `security_level` string,
  `labels` string,
  `components` string,
  `affected_versions` string,
  `fix_versions` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'escapeChar' = '\\',
  'quoteChar' = '"',
  'separatorChar' = ','
) LOCATION 's3://my-data-pipeline-bucket/test-exports/'
TBLPROPERTIES ('has_encrypted_data'='false');

Troubleshooting failed exports

Exports can fail for a number of reasons, for example, if your search index isn't up to date. For guidance on common failures, and how to resolve them, see Data pipeline troubleshooting in our knowledge base.

Last modified on Feb 15, 2024
