Data pipeline
Requirements
To trigger data exports through the REST API, you’ll need:
- A valid Confluence Data Center license
- System Administrator global permission
Considerations
There are a number of security and performance impacts you’ll need to consider before getting started.
Security
If you need to filter out data based on security and confidentiality, this must be done after the data is exported.
Exported files are saved in your shared home directory, so you’ll also want to check this is secured appropriately.
Performance impact
When scheduling your exports, we recommend that you:
- Limit the amount of data exported using the `fromDate` parameter, as a date further in the past will export more data, resulting in a longer data export (see the sketch after this list for one way to compute this value).
- Schedule exports during hours of low activity, or on a node with no activity, if you do observe any performance degradation during the export.
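For example, a scheduled export job could compute a `fromDate` value covering only a recent window. Here is a minimal Python sketch, assuming a 30-day window and the date format described under Performing the data export below:

```python
from datetime import datetime, timedelta, timezone

# Assumption: export only entities created or updated in the last 30 days.
from_date = datetime.now(timezone.utc) - timedelta(days=30)

# Format as yyyy-MM-ddTHH:mmTZD, using "Z" as the time zone designator for UTC.
from_date_param = from_date.strftime("%Y-%m-%dT%H:%MZ")
print(from_date_param)  # e.g. 2024-04-01T09:30Z
```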
To give you an idea of how long an export might take, here is the data from our internal testing:

| Data | Number | Approximate export duration |
|---|---|---|
| Users | 100,000 | 8 minutes |
| Spaces | 15,000 | 12 minutes |
| Pages | 25 million | 12 hours |
| Comments | 15 million | 1 hour |
| Analytics events | 20 million | 2 hours |
The total export time was around 16 hours.
Test performance vs production
The data presented here is based on our own internal testing. The actual duration and impact of data export on your own environment will likely differ depending on your infrastructure, configuration, and load.
Our tests were conducted on a single node Data Center instance in AWS:
- EC2 instance type: `c5.4xlarge`
- RDS instance type: `db.m5.4xlarge`
Performing the data export
To trigger a data export, use the `/export` REST API endpoint:

`https://<base-url>/rest/datapipeline/latest/export?fromDate=<yyyy-MM-ddTHH:mmTZD>`

The `fromDate` parameter limits the amount of data exported. That is, only data on entities created or updated after the `fromDate` value will be exported.

If you trigger the export without the `fromDate` parameter, all data from the last 365 days will be exported.

If your application is configured to use a context path, such as `/jira` or `/confluence`, remember to include this in the `<base-url>`.
The `/export` REST API endpoint has three methods:

- POST - starts an export
- GET - checks the status of the current or most recent export
- DELETE - cancels an export
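For example, a data export can be triggered from a script. This is a minimal sketch, assuming the instance accepts basic authentication with System Administrator credentials (a personal access token would work similarly); the base URL, credentials, and date are placeholders:

```python
import requests

BASE_URL = "https://confluence.example.com"  # include your context path if you use one
AUTH = ("admin", "admin-password")           # placeholder System Administrator credentials

# Trigger an export of data created or updated since the given date.
response = requests.post(
    f"{BASE_URL}/rest/datapipeline/latest/export",
    params={"fromDate": "2024-01-01T00:00Z"},
    auth=AUTH,
)
response.raise_for_status()
print(response.status_code, response.text)
```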
Automatic cancellation
If a node running an export is gracefully shut down, the export will be automatically marked as CANCELLED. However, if the JVM is not notified after a crash or hardware-level failure occurs, the export process may get locked. This means you'll need to manually mark the export as CANCELLED by making a DELETE request. This releases the process lock, allowing you to perform another data export.
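If an export does get locked, the status check and cancellation can also be scripted. Here is a minimal sketch, using the same placeholder base URL and credentials as above, and assuming GET reports the status while DELETE releases the lock:

```python
import requests

BASE_URL = "https://confluence.example.com"
AUTH = ("admin", "admin-password")  # placeholder System Administrator credentials

# Check the status of the current or most recent export.
status = requests.get(f"{BASE_URL}/rest/datapipeline/latest/export", auth=AUTH)
print(status.status_code, status.text)

# If the export is locked, mark it as CANCELLED to release the process lock.
cancel = requests.delete(f"{BASE_URL}/rest/datapipeline/latest/export", auth=AUTH)
cancel.raise_for_status()
```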
Configuring the data export
You can configure the format of the export data using the following system properties.
| Property | Default value | Description |
|---|---|---|
| plugin.data.pipeline.embedded.line.break.preserve | false | Specifies whether embedded line breaks should be preserved in the output files. Line breaks can be problematic for some tools such as Hadoop. This property is set to false by default, which means embedded line breaks are escaped. |
| plugin.data.pipeline.embedded.line.break.escape.char | \\n | Escaping character for embedded line breaks. By default, we'll print \n for every embedded line break. |
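To illustrate what the default escaping means for downstream processing, here is a hedged pandas sketch that turns the escaped `\n` sequences back into real line breaks after loading an exported file; the file name is a placeholder, and the default escape character is assumed:

```python
import pandas as pd

# Placeholder: one of the exported CSV files.
df = pd.read_csv("pages_job1_<timestamp>.csv")

# With the default settings, embedded line breaks are written as the literal
# two-character sequence "\n"; restore real newlines in string columns if needed.
text_columns = df.select_dtypes(include="object").columns
df[text_columns] = df[text_columns].apply(
    lambda col: col.str.replace("\\n", "\n", regex=False)
)
```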
Check the status of an export
You can check the status of an export and view when your last export ran from within your application’s admin console. To view the data export status, go to Administration > General Configuration > Data pipeline. The status will be one of the following:
- Not started - no export is currently running
- Started - the export is currently running
- Completed - the export has completed
- Cancellation requested - a cancellation request has been sent
- Cancelled - the export was cancelled
- Failed - the export failed
For help resolving failed or cancelled exports, see Data pipeline troubleshooting.
Output files
Each time you perform a data export, we assign a numerical job ID to the task (starting with 1 for your first ever data export). This job ID is used in the file name and location of the files containing your exported data.
Location of exported files
Exported data is saved as separate CSV files. The files are saved to the following directory:
- `<shared-home>/data-pipeline/export/<job-id>` if you run Confluence in a cluster
- `<local-home>/data-pipeline/export/<job-id>` if you are using non-clustered Confluence
Within the `<job-id>` directory you will see the following files (the sketch after this list shows one way to collect them):

- `users_job<job_id>_<timestamp>.csv`
- `spaces_job<job_id>_<timestamp>.csv`
- `pages_job<job_id>_<timestamp>.csv`
- `comments_job<job_id>_<timestamp>.csv`
- `analytics_events_job<job_id>_<timestamp>.csv`
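As a small sketch (the export directory path and job ID are placeholders), the exported files for a job can be collected like this:

```python
from pathlib import Path

# Placeholder: adjust to your shared (or local) home directory and job ID.
export_dir = Path("/var/atlassian/application-data/confluence/shared-home/data-pipeline/export/1")

# Each entity type is written to its own CSV file, e.g. pages_job1_<timestamp>.csv.
csv_files = sorted(export_dir.glob("*_job*_*.csv"))
for path in csv_files:
    print(path.name)
```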
To load and transform the data in these files, you'll need to understand the schema. See Data pipeline export schema.
Sample Spark and Hadoop import configurations
If you have an existing Spark or Hadoop instance, use the following references to configure how to import your data for further transformation.
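As a starting point, here is a minimal PySpark sketch for reading one of the exported CSV files; the path is a placeholder, and the CSV options are assumptions you should adjust to match the line-break settings described above:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("confluence-data-pipeline-import").getOrCreate()

# Placeholder path; see "Location of exported files" above for where the CSVs are written.
pages = (
    spark.read
    .option("header", "true")        # assumption: exported files include a header row
    .option("quote", '"')            # assumption: fields are quoted with double quotes
    .option("escape", '"')           # assumption: embedded quotes are escaped by doubling
    # .option("multiLine", "true")   # enable only if you preserve embedded line breaks (see above)
    .csv("/path/to/shared-home/data-pipeline/export/1/pages_job1_*.csv")
)

pages.printSchema()
pages.show(5, truncate=False)
```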