JIRA Software 7.6 Long Term Support release performance report


This page compares the performance of JIRA 7.2 and the JIRA 7.6 Long Term Support release.

About Long Term Support releases
We recommend upgrading JIRA regularly. However, if your organisation's processes mean you only upgrade about once a year, upgrading to a Long Term Support release may be a good option: it provides continued access to critical fixes for security, stability, data integrity, and performance issues until the version reaches end of life.


JIRA 7.6 was not focused solely on performance; however, we aim to provide the same, if not better, performance with each release. In this section, we compare JIRA 7.2 with the JIRA 7.6 Long Term Support release, for both Server and Data Center. We ran the same extensive test scenario on both JIRA versions.

The following chart presents mean response times of individual actions performed in JIRA. To check the details of these actions and the JIRA instance they were performed in, see Testing methodology.

Response times for JIRA actions

Testing methodology

The following sections detail the testing environment, including hardware specification, and methodology we used in our performance tests.

How we tested

Before we started the test, we needed to determine what size and shape of data set represents a typical large JIRA instance. To achieve that, we used our Analytics data to form a picture of our customers' environments and what difficulties they face when scaling JIRA in a large organization.

The following table presents rounded values at the 99.9th percentile of each data dimension. We used these values to generate a sample data set with random test data in JIRA Data Generator.

Baseline JIRA data set

  • Custom Fields: 1,400
  • Agile Boards: 1,450
  • Security Levels: 170
Actions performed

We chose a mix of actions that would represent a sample of the most common user actions. An action in this context is a complete user operation like opening of an Issue in the browser window. The following table details the actions that we included in the script for our testing persona, indicating how many times each action is repeated during a single test run.

Each action is listed with its description and, in parentheses, the number of times it is performed during a single test run.

  • View Dashboard: opening the Dashboard page (10)
  • Create Issue: submitting the Create Issue dialog (5)
  • View Issue: opening an individual issue in a separate browser window (55)
  • Edit Issue: editing the Summary, Description, and other fields of an existing issue (5)
  • Add Comment: adding a comment to an issue (2)
  • Search with JQL: performing a search query using JQL in the Issue Navigator interface.

    The following JQL queries were used:

      description ~ 'totally' or summary ~ 'hobos' and comment !~ 'propel' ORDER BY key
      reporter was in (admin) and status = Closed order by createdDate
      comment ~ 'contest* saw' and reporter was admin order by assignee desc
      text ~ 'a* ba*' ORDER BY Status, summary
      priority was in (Low, Lowest) or (status = 'In Progress' and assignee changed) or createdDate > '2016/07/02 00:00'
      resolution = Unresolved and priority = Low
      text ~ 'witch* doctrine' and status was not Closed order by priority
      project = novemcinctus and assignee = admin ORDER BY assignee
      assignee in membersOf('users_1') order by project
      not (status = closed and resolution = Done and priority = High)

    Half of these queries are very heavyweight, which explains the high average response time.

  • View Board: opening an agile board (10)
  • Browse Projects: opening the list of projects, available under Projects > View All Projects (5)
  • Browse Boards: opening the list of agile boards, available under Agile > Manage Boards (2)
  • All Actions: the mean of all actions performed during a single test run
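The tests issued these queries through the Issue Navigator UI. For reference, an equivalent search can also be sent to JIRA's REST search endpoint (/rest/api/2/search). The sketch below only builds such a request URL; the base URL is a placeholder, and this is not how the tests themselves ran the queries.

```python
from urllib.parse import urlencode

BASE_URL = "https://jira.example.com"  # hypothetical JIRA instance

def search_url(jql, max_results=50):
    """Build a JIRA REST API search URL for the given JQL query,
    percent-encoding the query string."""
    query = urlencode({"jql": jql, "maxResults": max_results})
    return f"{BASE_URL}/rest/api/2/search?{query}"

print(search_url("resolution = Unresolved and priority = Low"))
```

Percent-encoding matters here because JQL is full of spaces, quotes, and operators that are not URL-safe.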
Test environment

The performance tests were all run on a set of AWS EC2 instances. For each test, the entire environment was reset and rebuilt, and then each test started with some idle cycles to warm up instance caches. Below, you can check the details of the environments used for JIRA Server and JIRA Data Center, as well as the specifications of the EC2 instances.

To run the tests, we used 10 scripted browsers and measured the time taken to perform each action. Each browser was scripted to perform a random action from a predefined list and then immediately move on to the next one (i.e., zero think time). Note that this means each browser performed substantially more actions than a real user could, so the number of browsers should not be equated with the number of real-world concurrent users.

Each test was run for 20 minutes, after which statistics were collected. 
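The load-generation loop described above can be sketched as follows. This is a simplified illustration: the real harness drove Chrome through WebDriver, while here the browser actions are stand-in functions and the durations are artificial.

```python
import random
import time
from collections import defaultdict

# Stand-ins for the real WebDriver-scripted browser actions.
def view_dashboard(): time.sleep(0.001)
def view_issue(): time.sleep(0.001)
def search_with_jql(): time.sleep(0.002)

ACTIONS = [view_dashboard, view_issue, search_with_jql]

def run_load(duration_seconds):
    """Perform random actions back to back (zero think time) until the
    deadline, then return the mean response time per action."""
    timings = defaultdict(list)
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        action = random.choice(ACTIONS)   # random action, no pause between
        start = time.monotonic()
        action()
        timings[action.__name__].append(time.monotonic() - start)
    return {name: sum(t) / len(t) for name, t in timings.items()}

means = run_load(duration_seconds=0.1)  # the real tests ran for 20 minutes
for name, mean in sorted(means.items()):
    print(f"{name}: {mean * 1000:.1f} ms")
```

In the real setup, ten such loops ran in parallel, one per scripted browser, and the per-action means across all browsers produced the chart above.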

Here are the details of our test environment:

The environment with JIRA Server consisted of:

  • 1 JIRA node
  • Database on a separate node
  • Load generator on a separate node

The environment with JIRA Data Center consisted of:

  • 2 JIRA nodes
  • Database on a separate node
  • Load generator on a separate node
  • Shared home directory on a separate node
  • Load balancer (AWS ELB HTTP load balancer)

JIRA node (JIRA Server: 1 node; JIRA Data Center: 2 nodes)

  • EC2 type: c4.8xlarge (see EC2 types)
  • Operating system: Ubuntu 16.04 LTS
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 100 GB gp2
  • Java platform: Java 1.8.0
  • Java options: 8 GB heap

Database node

  • EC2 type: c4.8xlarge (see EC2 types)
  • Operating system: Ubuntu 16.04 LTS
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Database: MySQL 5.5
  • Disk: JIRA Server: AWS EBS 100 GB gp2; JIRA Data Center: AWS EBS 60 GB gp2

Load generator

  • EC2 type: c4.8xlarge (see EC2 types)
  • Operating system: Ubuntu 16.04 LTS
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 30 GB gp2
  • Browser: Google Chrome 62
  • Automation: Chromedriver 2.33, WebDriver 3.4.0, Java JDK 8u131
Last modified on Jun 26, 2020
