Data pipeline

This feature is available with a Bitbucket Data Center license.

Data pipeline provides an easy way to export data from Jira, Confluence, or Bitbucket, and feed it into your existing data platform (like Tableau or Power BI). This allows you to:
  • generate richer reports and visualizations of site activity
  • better understand how your teams are using your application
  • make better decisions on optimizing the use of your application in your organization

You can trigger a data export in your application’s admin console or through the REST API. Data will be exported in CSV format. You can only perform one data export at a time.

For a detailed reference of the exported data's schema, see Data pipeline export schema.

Data pipeline is available in Data Center editions of:

  • Jira 8.14 and later
  • Confluence 7.12 and later
  • Bitbucket 7.13 and later

Requirements

To trigger data exports through the REST API, you’ll need a valid Bitbucket Data Center license and system administrator permissions.

Considerations

There are a number of security and performance impacts you’ll need to consider before getting started.

Security

The export will include all data, including PII (Personally Identifiable Information) and restricted content. This is to provide you with as much data as possible, so you can filter and transform it to generate the insights you’re after.

If you need to filter out data based on security and confidentiality, this must be done after the data is exported.

Exported files are saved in your shared home directory, so you’ll also want to check that this directory is secured appropriately.
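For example, on a Linux host you could restrict the export directory to the Bitbucket service account. This is a minimal sketch only; the atlbitbucket user and group and the shared home path are assumptions that will vary with your installation:

# Limit access to the data pipeline export directory (user, group, and path are assumptions)
chown -R atlbitbucket:atlbitbucket /var/atlassian/application-data/bitbucket/shared/data-pipeline
chmod -R 750 /var/atlassian/application-data/bitbucket/shared/data-pipeline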

Export performance

Exporting data can take a long time in large instances. We intentionally export data at a limited rate to keep any performance impact to your site under a 5% threshold. There is no performance impact unless an export is in progress.

When scheduling your exports, we recommend that you:

  • Limit the amount of data exported using the fromDate parameter. An earlier date exports more data, resulting in a longer export.
  • Schedule exports during hours of low activity, or on a node with no activity, if you do observe any performance degradation during the export.

Our test results showed the following approximate durations for the export:

Small data set (approximate export duration: 10 hours)

  • 27 million commits
  • 250,000 pull requests
  • 1.5 million pull request activity records
  • 6,500 repositories
  • 2,000 users

Large data set (approximate export duration: 35 hours)

  • 207 million commits
  • 1 million pull requests
  • 6.8 million pull request activity records
  • 52,000 repositories
  • 25,000 users

Test performance vs production

The data presented here is based on our own internal testing. The actual duration and impact of a data export on your own environment will likely differ depending on:

  • your infrastructure, configuration, and load
  • the amount of pull request activity to be exported

Our tests were conducted on Data Center instances in AWS:

  • Small - EC2 instance type m5d.4xlarge and RDS instance type db.m4.4xlarge
  • Large - EC2 instance type c5.2xlarge and RDS instance type db.m5.large

Access the data pipeline

To access the data pipeline, go to the administration area and select Data pipeline.

Schedule regular exports

The way to get the most value out of the data pipeline is to schedule regular exports. The data pipeline performs a full export every time, so if you have a large site, you may want to only export once a week.

To set the export schedule:

  1. From the Data pipeline screen, select Schedule settings.
  2. Select the Schedule regular exports checkbox.
  3. Select the date to include data from. Data from before this date won’t be included. This is usually set to 12 months or less.
  4. Choose how often to repeat the export.
  5. Select a time to start the export. You may want to schedule the export to happen outside working hours.
  6. Select the Schema version to use (if more than one schema is available).
  7. Save your schedule.

Timezones and recurring exports

We use your server timezone to schedule exports (or system timezone if you’ve overridden the server time in the application). The export schedule isn’t updated if you change your timezone. If you do need to change the timezone, you’ll need to edit the schedule and re-enter the export time.

You can schedule exports to happen as often as you need. If you choose to export on multiple days, the first export will occur on the nearest selected day after you save the schedule. For example, if you schedule exports for Saturday and Monday and save the schedule on a Thursday, the first export will occur on Saturday and the second on Monday. We don’t wait for the start of the week.

Export schema

The export schema defines the structure of the export. We version the schema so that you know your export will have the same structure as previous exports. This helps you avoid problems if you’ve built dashboards or reports based on this data.

We only introduce new schema versions for breaking changes, such as removing a field, or if the way the data is structured changes. New fields are simply added to the latest schema version.

Older schema versions will be marked as ‘deprecated’, and may be removed in future versions. You can still export using these versions; just be aware that we won’t update them with any new fields.

Check the status of an export

From the Data pipeline screen, you can check the status of an export and see when your last export ran.

The Export details table will show the most recent exports, and the current status.

To see the full details of an export in JSON format, select View details next to it. Details include the export parameters, status, and any errors returned if the export failed.

For help resolving failed or cancelled exports, see Data pipeline troubleshooting.

Cancel an export

To cancel an export while it is in progress:

  1. Go to the Data pipeline screen.
  2. Select the menu next to the export and choose Cancel export.
  3. Confirm you want to cancel the export.

It can take a few minutes for the processes to be terminated. Any files already written will remain in the export directory. You can delete these files if you don’t need them.

Automatic data export cancellations

If you shut down a node running a data export, the export will be cancelled. However, if the JVM is not notified after a crash or hardware-level failure, the export process may get locked. This means you'll need to manually mark the export as cancelled (through the UI, or via the REST API by making a DELETE request). This releases the process lock, allowing you to perform another data export.
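As an illustration, marking a locked export as cancelled through the REST API might look like the sketch below. It assumes the cancellation is a DELETE on the same export resource used to start an export; the token and site URL are placeholders, and the Data pipeline REST API reference is the authoritative source for the exact path.

curl -H "Authorization:Bearer ABCD1234" -H "X-Atlassian-Token: no-check" \
  -X DELETE https://myexamplesite.com/rest/datapipeline/latest/export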

Exclude projects from the export

You can also exclude projects from the export by adding them to an opt-out list. This is useful if you don’t need to report on that particular project, or if it contains sensitive content that you’d prefer not to export.

To add projects to the opt-out list, make a POST request to <base-url>/rest/datapipeline/1.0/config/optout and pass the project keys as follows:

{
  "type": "PROJECT",
  "keys": ["HR","TEST"]
}
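For example, using cURL and a personal access token for authentication (the token and site URL are placeholders):

curl -H "Authorization:Bearer ABCD1234" -H "X-Atlassian-Token: no-check" \
  -H "Content-Type: application/json" \
  -X POST https://myexamplesite.com/rest/datapipeline/1.0/config/optout \
  -d '{"type": "PROJECT", "keys": ["HR","TEST"]}'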

These projects will be excluded from all future exports. Note that the opt-out feature was introduced in Data Pipeline version 2.3.0.

For full details, including how to remove projects from the opt-out list, refer to the Data pipeline REST API reference.

Configuring the data export

You can configure the format of the export data through the following configuration properties.

plugin.data.pipeline.embedded.line.break.preserve

Default value: false

Specifies whether embedded line breaks should be preserved in the output files. Line breaks can be problematic for some tools, such as Hadoop. This property is set to false by default, which means that line breaks are escaped.

plugin.data.pipeline.embedded.line.break.escape.char

Default value: \\n

Escape character for embedded line breaks. By default, we'll print \n for every embedded line break.

plugin.data.pipeline.minimum.usable.disk.space.after.export

Default value: 5GB

To prevent you from running out of disk space, the data pipeline will check before and during an export that there is at least 5GB of free disk space. Set this property, in gigabytes, to increase or decrease the limit. To disable this check, set this property to -1 (not recommended).

The following additional properties only apply to Bitbucket. 

plugin.data.pipeline.bitbucket.export.personal.forked.repository.commits

Default value: false

Specifies whether commits from forked repositories in personal projects should be exported. Set this property to true to include commits from forked repositories in personal projects.

plugin.data.pipeline.bitbucket.export.build.statuses

Default value: false

Specifies whether build statuses should be included in the export. Exporting build statuses can take a significant amount of time if you have a lot of builds. Set this property to true to export build statuses.

plugin.data.pipeline.bitbucket.commit.queue.polling.timeout.seconds

Default value: 20

Time, in seconds, to wait for the first commit from the git process. You should only need to change this if you see a CommitStreamingException (this error is usually caused by another underlying problem).

plugin.data.pipeline.bitbucket.commit.git.execution.timeout.seconds

Default value: 3600

Sets the idle and execution timeout for the git ref-list command. You should only need to change this if you see the error "an error occurred while executing an external process: process timed out".

plugin.data.pipeline.bitbucket.export.pull.request.activities

Default value: true

Specifies whether historical pull request activity data should be included in the export. Exporting activity data will significantly increase your export duration. Set this property to false to exclude pull request activity from your export.
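For Bitbucket, these properties are typically set in the bitbucket.properties file in your shared home directory and take effect after a restart. A sketch of the relevant lines, using the property names above (the values shown are examples only):

plugin.data.pipeline.embedded.line.break.preserve=false
plugin.data.pipeline.bitbucket.export.build.statuses=true
plugin.data.pipeline.bitbucket.export.pull.request.activities=false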

Use the data pipeline REST API

You can use the data pipeline REST API to export data.

To start a data pipeline export, make a POST request to <base-url>/rest/datapipeline/latest/export.

Here is an example request, using cURL and a personal access token for authentication:

curl -H "Authorization:Bearer ABCD1234" -H "X-Atlassian-Token: no-check" 
-X POST https://myexamplesite.com/rest/datapipeline/latest/
export?fromDate=2020-10-22T01:30:11Z

You can also use the API to check the status, change the export location, and schedule or cancel an export. 
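For example, a status check might look like the following sketch. It assumes the status endpoint is a GET on the same export resource used to start an export; the token and site URL are placeholders, and the Data pipeline REST API reference is the authoritative source for the exact path.

curl -H "Authorization:Bearer ABCD1234" \
  https://myexamplesite.com/rest/datapipeline/latest/export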

For full details, refer to the Data pipeline REST API reference.

Output files 

Each time you perform a data export, we assign a numerical job ID to the task (starting with 1 for your first ever data export). This job ID is used in the file name, and location of the files containing your exported data. 

Location of exported files

Exported data is saved as separate CSV files. The files are saved to the following directory:

  • <shared-home>/data-pipeline/export/<job-id> if you run Bitbucket in a cluster

  • <local-home>/shared/data-pipeline/export/<job-id> if you are using non-clustered Bitbucket

Within the <job-id> directory you will see the following files:

  • build_statuses_<job_id>_<schema_version>_<timestamp>.csv 
  • commits_<job_id>_<schema_version>_<timestamp>.csv
  • pull_request_activities_<job_id>_<schema_version>_<timestamp>.csv
  • pull_requests_<job_id>_<schema_version>_<timestamp>.csv
  • repositories_<job_id>_<schema_version>_<timestamp>.csv
  • users_<job_id>_<schema_version>_<timestamp>.csv

To load and transform the data in these files, you'll need to understand the schema. See Data pipeline export schema.
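To quickly preview the column names in an exported file before loading it, you can print its header row. This is a sketch only; the job ID, schema version, and timestamp below are placeholders:

head -1 <shared-home>/data-pipeline/export/1/commits_1_<schema_version>_<timestamp>.csv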

Set a custom export path

By default, the data pipeline exports the files to the home directory, but you can use the REST API to set a custom export path.

To change the root export path, make a PUT request to <base-url>/rest/datapipeline/1.0/config/export-path.

In the body of the request, pass the absolute path to your preferred directory.
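For example, a cURL request might look like the sketch below. The token, site URL, and directory are placeholders, and the body format is an assumption; refer to the Data pipeline REST API reference for the exact request body.

curl -H "Authorization:Bearer ABCD1234" -H "X-Atlassian-Token: no-check" \
  -H "Content-Type: application/json" \
  -X PUT https://myexamplesite.com/rest/datapipeline/1.0/config/export-path \
  -d '{"path": "/opt/bitbucket/data-pipeline-exports"}'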

For full details, including how to revert back to the default path, refer to the Data pipeline REST API reference.

Sample Spark and Hadoop import configurations

If you have an existing Spark or Hadoop instance, use the following references to configure how to import your data for further transformation:

Spark/Databricks

%python
# File location
file_location = "/FileStore/**/export_2020_09_24T03_32_18Z.csv" 

# Automatically set data type for columns
infer_schema = "true"
# Skip first row as it's a header
first_row_is_header = "true"
# Ignore multiline within double quotes
multiline_support = "true"

# The applied options are for CSV files; for other file types, they are ignored.
# Note the escape and quote options for RFC 4180-compliant files.
df = spark.read.format("csv") \
  .option("inferSchema", infer_schema) \
  .option("header", first_row_is_header) \
  .option("multiLine", multiline_support) \
  .option("quote", "\"") \
  .option("escape", "\"") \
  .option("encoding", "UTF-8").load(file_location)

display(df)

Hadoop

CREATE EXTERNAL TABLE IF NOT EXISTS some_db.datapipeline_export (
  `repository_id` string, 
  `instance_url` string,
  `url` string,
  `repository_name` string,
  `description` string,
  `hierarchy_id` string,
  `origin` string,
  `project_id` string,
  `project_key` string,
  `project_name` string,
  `project_type` string,
  `forkable` string,
  `fork` string,
  `public` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  "escapeChar" = "\\",
  'quoteChar' = '"',
  'separatorChar' = ','
) LOCATION 's3://my-data-pipeline-bucket/test-exports/'
TBLPROPERTIES ('has_encrypted_data'='false');

Troubleshooting failed exports

Exports can fail for a number of reasons, for example if your search index isn’t up to date. For guidance on common failures, and how to resolve them, see Data pipeline troubleshooting in our knowledge base. 
