Deploy the DevOps dashboard in PowerBI

The data pipeline allows you to export data from your Jira instance for analysis in your favorite business intelligence tool.

To get you started, we’ve developed a DevOps template for Microsoft PowerBI that provides useful insights into the health of your engineering teams and should provide a great jumping-off point for creating your own dashboards and reports.

Learn how to make the most of the data pipeline with the DevOps dashboard

This page guides you through deploying our sample DevOps template in Microsoft PowerBI and connecting it to your data source.

Download the DevOps dashboard template for PowerBI

Import data pipeline CSVs

Before you can use the template with your own data, you need to import the CSV files exported by the data pipeline in Jira Data Center into a database or blob storage.

The sample DevOps template uses the following files:

  • issues_job<job_id>_<timestamp>.csv
  • issue_history_job<job_id>_<timestamp>.csv

How you import this data depends on where it will be stored.

Azure blob storage

Upload the two CSV files to a container in your Azure blob storage account.

You will need to rename the files as follows:

  • issues.csv
  • issue_history.csv

Azure SQL and PostgreSQL

In your database, create the issues and issue_history tables as follows.

CREATE TABLE issues (
    id varchar(50),
    instance_url varchar(1000),
    "key" varchar(1000),
    url varchar(1000),
    project_key varchar(1000),
    project_name varchar(1000),
    project_type varchar(1000),
    project_category varchar(1000),
    issue_type varchar(1000),
    summary varchar(1000),
    description varchar(2000),
    environment varchar(2000),
    creator_id varchar(50),
    creator_name varchar(1000),
    reporter_id varchar(50),
    reporter_name varchar(1000),
    assignee_id varchar(50),
    assignee_name varchar(1000),
    status varchar(1000),
    status_category varchar(1000),
    priority_sequence varchar(1000),
    priority_name varchar(1000),
    resolution varchar(1000),
    watcher_count varchar(50),
    vote_count varchar(50),
    created_date varchar(50),
    resolution_date varchar(50),
    updated_date varchar(50),
    due_date varchar(50),
    estimate varchar(50),
    original_estimate varchar(50),
    time_spent varchar(50),
    parent_id varchar(50),
    security_level varchar(1000),
    labels varchar(1000),
    components varchar(1000),
    affected_versions varchar(1000),
    fix_versions varchar(1000)
);

CREATE TABLE issue_history (
    issue_id varchar(50),
    changelog_id varchar(50),
    author_id varchar(50),
    author_key varchar(1000),
    created_date varchar(50),
    field_type varchar(1000),
    field varchar(1000),
    "from" varchar(1000),
    from_string varchar(1000),
    "to" varchar(1000),
    to_string varchar(1000),
    additional_information varchar(2000)
);

Import the appropriate CSV file into each table.

For Azure SQL, see Load data from CSV into Azure SQL Database or SQL Managed Instance (flat files) in the Microsoft documentation.
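
For example, if you have uploaded the CSVs to Azure blob storage and created an external data source for that container (as described in the linked Microsoft article), a BULK INSERT along these lines can load each file. This is a minimal sketch; the data source name JiraExportStorage is a hypothetical example.

-- Assumes an external data source named JiraExportStorage already points at
-- the blob container holding the renamed CSV files, and each file has a header row.
BULK INSERT issues
FROM 'issues.csv'
WITH (DATA_SOURCE = 'JiraExportStorage', FORMAT = 'CSV', FIRSTROW = 2);

BULK INSERT issue_history
FROM 'issue_history.csv'
WITH (DATA_SOURCE = 'JiraExportStorage', FORMAT = 'CSV', FIRSTROW = 2);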

For PostgreSQL, there are several methods you can use. See Import CSV File Into PostgreSQL Table for some suggested methods.
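
For example, if the renamed CSV files are available on the machine where you run psql, the \copy meta-command is one straightforward option. This is a minimal sketch; the file paths are hypothetical.

-- Assumes the CSVs sit on the psql client machine and include a header row.
\copy issues FROM 'issues.csv' WITH (FORMAT csv, HEADER true)
\copy issue_history FROM 'issue_history.csv' WITH (FORMAT csv, HEADER true)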

Launch the template and connect to your data

Now that you have imported your data, you can launch the .pbix template in PowerBI and connect it to your data source.

To connect to your data:

  1. Open PowerBI Desktop and open the .pbix template file.
  2. Select Get data.
  3. Choose your data source and follow the prompts to enter your connection details and credentials.
  4. Choose the issues and issue_history tables, then select Load.
  5. Select Refresh to update the dashboard data.
  6. The dashboard should now display your data.

For more information on connecting or replacing a data source, refer to the Microsoft PowerBI documentation.
