Deploy the DevOps dashboard in Tableau


The data pipeline allows you to export data from your Jira instance for analysis in your favorite business intelligence tool.

To get you started, we’ve developed a DevOps template in Tableau that provides useful insights into the health of your engineering teams, and should be a great jumping-off point for creating your own dashboards and reports.

Learn how to make the most of the data pipeline with the DevOps dashboard


This page guides you through deploying our sample DevOps template in Tableau Desktop and connecting it to your data source.

Download the DevOps dashboard template for Tableau

The template has been tested with PostgreSQL and Microsoft SQL Server. It requires Tableau Desktop 2021.1 or later.

Import data pipeline CSVs into your database

Before you can use the template, you need to import the CSV files exported by the data pipeline in Jira Data Center into a database. You can also import the files directly into Tableau, but this guide assumes you’re using an external database.

The sample DevOps template expects your database to contain the following tables:

  • issues table containing data from the issues_job<job_id>_<timestamp>.csv file.
  • issue_history table containing data from the issue_history_job<job_id>_<timestamp>.csv file.
  • users table containing data from the users_job<job_id>_<timestamp>.csv file.

PostgreSQL example

In your database, create the issues, issue_history, and users tables as follows.

CREATE TABLE issues (
    id varchar(50),
    instance_url varchar(1000),
    "key" varchar(1000),
    url varchar(1000),
    project_key varchar(1000),
    project_name varchar(1000),
    project_type varchar(1000),
    project_category varchar(1000),
    issue_type varchar(1000),
    summary varchar(1000),
    description varchar(2000),
    environment varchar(2000),
    creator_id varchar(50),
    creator_name varchar(1000),
    reporter_id varchar(50),
    reporter_name varchar(1000),
    assignee_id varchar(50),
    assignee_name varchar(1000),
    status varchar(1000),
    status_category varchar(1000),
    priority_sequence varchar(1000),
    priority_name varchar(1000),
    resolution varchar(1000),
    watcher_count varchar(50),
    vote_count varchar(50),
    created_date timestamp,
    resolution_date varchar(50),
    updated_date varchar(50),
    due_date varchar(50),
    estimate varchar(50),
    original_estimate varchar(50),
    time_spent varchar(50),
    parent_id varchar(50),
    security_level varchar(1000),
    labels varchar(1000),
    components varchar(1000),
    affected_versions varchar(1000),
    fix_versions varchar(100)
);

CREATE TABLE issue_history (
    issue_id varchar(50),
    changelog_id varchar(50),
    author_id varchar(50),
    author_key varchar(1000),
    created_date timestamp,
    field_type varchar(1000),
    field varchar(1000),
    "from" varchar(1000),
    from_string varchar(1000),
    "to" varchar(1000),
    to_string varchar(1000),
    additional_information varchar(2000)
);

CREATE TABLE users (
    user_id varchar(50),
    instance_url varchar(1000),
    user_name varchar(1000),
    user_fullname varchar(1000),
    user_email varchar(1000)
);

Import the appropriate CSV file into each table. You can adapt the script above and your import method as needed for other databases.

For PostgreSQL, there are several methods you can use. See Import CSV File Into PostgreSQL Table for some suggested methods.
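For example, PostgreSQL’s COPY command can load each file directly. This is a minimal sketch assuming the exported CSVs include a header row; the file paths are placeholders, and your job ID and timestamp will differ:

```sql
-- Minimal PostgreSQL import sketch; file paths are placeholders.
-- COPY reads files from the database server's filesystem. If the CSVs
-- are on a client machine, run the same commands in psql as \copy instead.
COPY issues FROM '/path/to/issues_job<job_id>_<timestamp>.csv'
    WITH (FORMAT csv, HEADER true);
COPY issue_history FROM '/path/to/issue_history_job<job_id>_<timestamp>.csv'
    WITH (FORMAT csv, HEADER true);
COPY users FROM '/path/to/users_job<job_id>_<timestamp>.csv'
    WITH (FORMAT csv, HEADER true);
```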

Launch the template and connect to your database

Now that you have imported your data, you can launch the template in Tableau Desktop, and connect it to your data.

To launch the template and connect it to your database:

  1. Download the DevOps dashboard template for Tableau.
  2. In Tableau Desktop 2021.1 or later, open the Atlassian DevOps Tableau packaged workbook. The template will be populated with some sample data.
  3. Select New Data Source, and follow the prompts to connect to your database.
  4. Drag the New Custom SQL placeholder to the canvas. If your database connection doesn’t have a Custom SQL option, create a view in your database using the custom SQL provided below, then drag that view to the canvas instead.
  5. Connect with the following SQL query. Note that you will need to escape any reserved words using the appropriate escape characters for your database.

    SELECT h.issue_id as h_issue_id
      ,h.changelog_id as h_changelog_id
      ,h.author_id as h_author_id
      ,h.author_key as h_author_key
      ,h.created_date as h_created_date
      ,h.field as h_field
      ,h.field_type as h_field_type
      ,h.from as h_from
      ,h.from_string as h_from_string
      ,h.to as h_to
      ,h.to_string as h_to_string
      ,h.additional_information as h_additional_information
      ,i.id as id
      ,i.instance_url
      ,i.key as key
      ,i.url
      ,i.project_key
      ,i.project_name
      ,i.project_type
      ,i.project_category
      ,i.issue_type
      ,i.summary
      ,i.description
      ,i.environment
      ,i.creator_id
      ,u1.user_name as creator_name
      ,i.reporter_id
      ,u2.user_name as reporter_name
      ,i.assignee_id
      ,u3.user_name as assignee_name
      ,i.status
      ,i.status_category
      ,i.priority_sequence
      ,i.priority_name
      ,i.resolution
      ,i.watcher_count
      ,i.vote_count
      ,i.created_date as created_date
      ,i.resolution_date
      ,i.updated_date 
      ,i.due_date
      ,i.estimate
      ,i.original_estimate
      ,i.time_spent
      ,i.parent_id
      ,i.security_level
      ,i.labels
      ,i.components
      ,i.affected_versions
      ,i.fix_versions
    FROM issue_history h
    JOIN issues i ON h.issue_id = i.id
    JOIN users u1 ON u1.user_id = i.creator_id
    JOIN users u2 ON u2.user_id = i.reporter_id
    JOIN users u3 ON u3.user_id = i.assignee_id
    WHERE h.field_type = 'jira' AND h.field = 'status'
  6. Once the connection is made, select Dashboard to start the extract process.
  7. Once the extract has been created, go to any sheet and select Data > Replace Data Source.
  8. Select your new data source name.
  9. The new data source will now be used by all sheets.

To learn more about how to configure the dashboard, and the metrics used, go to Make the most of the data pipeline with the DevOps dashboard.


Good to know

  • Don’t change the connection type to ‘Live’. The dashboard is designed to be database agnostic, and some functions aren’t supported by all databases. Keeping the connection as ‘Extract’ keeps the calculations intact. 
  • Dashboard formatting will default to Tableau default colors if the connection type is changed.
  • There are some reserved words in the Table names. The reserved words are “from”, “to”, “key”, “status”. If you change the connection type you will need to escape these reserved words using the appropriate escape characters for your database. A good approach would be to run the custom SQL query directly in the database interface to ensure the syntax of the SQL is correct.
  • Not all database connections have a Custom SQL option. If your Database doesn’t support Custom SQL you will need to create a view with the SQL provided. Creating a view ensures the columns are renamed, and we filter records as required by the dashboard.
  • The dashboard was built to use an ODBC connection to make it database agnostic. Learn more about ODBC connections in Tableau. If you choose to use the ODBC connection option, you will need to install the appropriate ODBC driver for your database, and then set up a data source name(DSN). 
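The view-based approach and reserved-word escaping described above can be sketched for PostgreSQL, where identifiers are escaped with double quotes (SQL Server uses square brackets, e.g. [from]). The view name below is an example; in practice, substitute the full custom SQL query for the abbreviated SELECT:

```sql
-- Sketch of wrapping the custom SQL in a view, for connections without a
-- Custom SQL option. Only a few columns are shown; use the full query from
-- the steps above, with "from", "to", and "key" escaped as shown here.
CREATE VIEW devops_dashboard_source AS
SELECT h.issue_id AS h_issue_id,
       h."from"   AS h_from,
       h."to"     AS h_to,
       i."key"    AS issue_key,
       i.status   AS status
FROM issue_history h
JOIN issues i ON h.issue_id = i.id
WHERE h.field_type = 'jira' AND h.field = 'status';
```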

Known issues and limitations

ODBC driver does not support all the capabilities used by Tableau warning

When connecting to the database, Tableau may warn you that not all capabilities will be available with that driver. See Tableau and ODBC in the Tableau documentation for more information on these warnings.

Last modified on Mar 14, 2022
