Performance and scale testing


With every Jira release, we publish a performance and scaling report that compares the performance of the current Jira version with the previous one. The report also shows how various data dimensions (number of custom fields, issues, projects, and so on) affect Jira, so you can check which of these dimensions should be limited for the best results when scaling Jira.

This report is for Jira 8.5. If you’re looking for other reports, select your version at the top-right.

Introduction

When some Jira administrators think about how to scale Jira, they often focus on the number of issues a single Jira instance can hold. However, the number of issues is not the only factor that determines the scale of a Jira instance. To understand how a large instance may perform, you need to consider multiple factors.

This page explains how Jira performs across different versions and configurations. Whether you're a new Jira evaluator who wants to understand how Jira can scale to your growing needs, or a seasoned Jira administrator interested in taking Jira to the next level, this page is here to help.

There are two main approaches, which can be used in combination to scale Jira across your entire organization:  

  1. Scale a single Jira instance. 
  2. Use Jira Data Center, which provides Jira clustering.

Here we'll explore techniques to get the most out of Jira that are common to both approaches. For additional information on Jira Data Center and how it can improve performance under concurrent load, please refer to our Jira Data Center page.

Determining the scale of a single Jira instance

There are multiple factors that may affect Jira's performance in your organization. These factors fall into the following categories (in no particular order):

  • Data size
    • The number of issues, comments, and attachments.
    • The number of projects.
    • The number of Jira project attributes, such as custom fields, issue types, and schemes.
    • The number of users and groups registered in Jira.
    • The number of boards, and the number of issues on the board (when you're using Jira Software).
  • Usage patterns
    • The number of users concurrently using Jira.
    • The number of concurrent operations.
    • The volume of email notifications.
  • Configuration
    • The number of plugins (some of which may have their own memory requirements).
    • The number of workflow step executions (such as Transitions and Post Functions).
    • The number of jobs and scheduled services.
  • Deployment environment
    • Jira version used.
    • The server Jira runs on.
    • The database used and connectivity to the database.
    • The operating system, including its file system.
    • JVM configuration.
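Of these, the JVM configuration is one of the easiest to verify directly. As a minimal, hypothetical illustration (not an Atlassian tool), you can confirm the heap a JVM was actually given; Jira Server reads its JVM arguments from bin/setenv.sh:

    // HeapCheck.java - prints the maximum heap the running JVM received.
    // Run it with the same flag you give Jira, e.g.: java -Xmx8g HeapCheck
    public class HeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %.1f GiB%n",
                    maxBytes / (1024.0 * 1024 * 1024));
        }
    }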

This page will show how the speed of Jira can be influenced by the size and characteristics of data stored in the database.

Jira 8.5 performance

Jira 8.5 was not focused solely on performance; however, we aim to provide the same, if not better, performance with each release. In this section, we'll compare Jira 8.5 with Jira 7.13 and Jira 7.6. We ran the same extensive test scenario for each of these versions. The only difference between the scenarios was the Jira version.

The following chart presents mean response times of individual actions performed in Jira. To check the details of these actions and the Jira instance they were performed in, see Testing methodology.
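In the summaries below, each entry pairs a relative change with what we read as the corresponding absolute difference in mean response time. As a worked example, with hypothetical times chosen to be consistent with the Server "all actions" entry for the 1m-issues run:

    \frac{t_{7.13} - t_{8.5}}{t_{7.13}} = \frac{1.271\,\mathrm{s} - 1.055\,\mathrm{s}}{1.271\,\mathrm{s}} \approx 17\%,
    \qquad t_{7.13} - t_{8.5} = 0.216\,\mathrm{s}.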

Jira 8.5 vs Jira 7.13

Response times for Jira actions

1m issues

All actions (mean of all actions) improved by: Server 17%, DC 16%

Summary:
  • Viewing boards improved by:
    • Server: 34% | 0.397s
    • DC: 34% | 0.407s
  • Viewing backlogs improved by:
    • Server: 72% | 2.062s
    • DC: 70% | 1.929s
  • Viewing issues declined by:
    • Server: 5% | 0.044s
    • DC: 5% | 0.040s
      Note: reduced from 7%. We've been reducing these declines since Jira 8.0.
  • Viewing dashboards declined by:
    • Server: 8% | 0.044s
    • DC: 7% | 0.036s
  • Searching with JQL improved by:
    • Server: 16% | 0.444s
    • DC: 14% | 0.372s
  • Adding comments declined by:
    • Server: 7% | 0.072s
    • DC: 6% | 0.055s
  • Creating issues:
    • Server: improved by 2% | 0.013s
    • DC: declined by 2% | 0.017s
  • Editing issues declined by:
    • Server: 8% | 0.082s
    • DC: 9% | 0.085s
  • Viewing project summary declined by:
    • Server: 8% | 0.022s
    • DC: 11% | 0.031s
  • Browsing projects improved by:
    • Server 7% | 0.051s
    • DC: 7% | 0.050s
  • Browsing boards improved by:
    • Server: 9% | 0.062s
    • DC: 10% | 0.075s
  • All Actions (mean of all actions) improved by:
    • Server: 17% | 0.216s
    • DC: 16% | 0.128s

The declines are a result of updating Lucene (Jira's search engine) and Jira's front-end. We've been steadily reducing them since Jira 8.0.

2m issues

All actions (mean of all actions) improved by: Server 43%, DC 47%

Summary:
  • Viewing boards improved by:
    • Server: 62% | 1.630s
    • DC: 58% | 1.413s
  • Viewing backlogs improved by:
    • Server: 85% | 4.523s
    • DC: 86% | 4.776s
  • Viewing issues:
    • Server: didn't change
    • DC: declined by 4% | 0.035s
      Note: reduced from 7%. We've been reducing these declines since Jira 8.0.
  • Viewing dashboards improved by:
    • Server: 4% | 0.024s
    • DC: didn't change
  • Searching with JQL improved by:
    • Server: 51% | 3.281s
    • DC: 56% | 3.941s
  • Adding comments declined by:
    • Server: 9% | 0.090s
    • DC: 7% | 0.070s
  • Creating issues improved by:
    • Server: 3% | 0.025s
    • DC: 6% | 0.050s
  • Editing issues declined by:
    • Server: 6% | 0.057s
    • DC: 8% | 0.076s
  • Viewing project summary declined by:
    • Server: 1% | 0.002s
    • DC: 8% | 0.022s
  • Browsing projects improved by:
    • Server: 14% | 0.101s
    • DC: 11% | 0.081s
  • Browsing boards improved by:
    • Server: 17% | 0.120s
    • DC: 11% | 0.079s
  • All Actions (mean of all actions) improved by:
    • Server: 43% | 0.880s
    • DC: 47% | 1.014s


Jira 8.5 vs Jira 7.6

Response times for Jira actions

1m issues

All actions (mean of all actions) improved by: Server 18%, DC 18%

Summary:
  • Viewing boards improved by:
    • Server: 52% | 0.826s
    • DC: 53% | 0.878s
  • Viewing backlogs improved by:
    • Server: 74% | 2.193s
    • DC: 74% | 2.254s
  • Viewing issues declined by:
    • Server: 7% | 0.060s
    • DC: 8% | 0.067s
  • Viewing dashboards declined by:
    • Server: 12% | 0.061s
    • DC: 11% | 0.058s
  • Searching with JQL improved by:
    • Server: 13% | 0.338s
    • DC: 13% | 0.319s
  • Adding comments declined by:
    • Server: 7% | 0.069s
    • DC: 6% | 0.061s
  • Creating issues:
    • Server: declined by 1% | 0.006s
    • DC: didn't change
  • Editing issues declined by:
    • Server: 8% | 0.075s
    • DC: 9% | 0.083s
  • Viewing project summary improved by:
    • Server: 34% | 0.145s
    • DC: 29% | 0.127s
  • Browsing projects improved by:
    • Server: 5% | 0.034s
    • DC: 6% | 0.039s
  • Browsing boards improved by:
    • Server: 5% | 0.032s
    • DC: 13% | 0.088s
  • All Actions (mean of all actions) improved by:
    • Server: 18% | 0.223s
    • DC: 18% | 0.225s

The declines are a result of updating Lucene (Jira's search engine) and Jira's front-end. We've been steadily reducing them since Jira 8.0.

2m issues

All actions (mean of all actions) improved by: Server 50%, DC 50%

Summary:
  • Viewing boards improved by:
    • Server: 76% | 3.104s
    • DC: 74% | 2.858s
  • Viewing backlogs improved by:
    • Server: 87% | 5.044s
    • DC: 86% | 4.722s
  • Viewing issues declined by:
    • Server: 5% | 0.041s
    • DC: 7% | 0.057s
  • Viewing dashboards: 
    • Server: didn't change
    • DC: declined by 3% | 0.017s
  • Searching with JQL improved by:
    • Server: 56% | 4.119s
    • DC: 58% | 4.353s
  • Adding comments declined by:
    • Server: 7% | 0.068s
    • DC: 8% | 0.074s
  • Creating issues improved by:
    • Server: 7% | 0.059s
    • DC: 1% | 0.010s
  • Editing issues declined by:
    • Server: 3% | 0.025s
    • DC: 6% | 0.060s
  • Viewing project summary improved by:
    • Server: 41% | 0.199s
    • DC: 38% | 0.184s
  • Browsing projects improved by:
    • Server: 15% | 0.111s
    • DC: 10% | 0.075s
  • Browsing boards improved by:
    • Server: 16% | 0.118s
    • DC: 12% | 0.085s
  • All Actions (mean of all actions) improved by:
    • Server: 50% | 1.155s
    • DC: 50% | 1.143s


Testing methodology

The following sections detail the testing environment, including hardware specification, and methodology we used in our performance tests.

How we tested

Before we started the test, we needed to determine what size and shape of data set represents a typical large Jira instance.

To achieve that, we used our Analytics data to form a picture of our customers' environments and the difficulties they face when scaling Jira in a large organization.

The following table presents rounded values of the 99.9th percentile of each data dimension. We used these values to generate a sample dataset with random test data in Jira Data Generator.

Baseline Jira data set

Jira data dimension    Value
Issues                 1,000,000
Projects               1,500
Custom Fields          1,400
Workflows              450
Attachments            660,000
Comments               2,900,000
Agile Boards           1,450
Users                  100,000
Groups                 22,500
Security Levels        170
Permissions            200
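
If you want to compare your own instance against this baseline, one low-effort option is Jira Server's REST search resource, which reports the total number of matching issues without returning their bodies. Below is a minimal sketch, assuming a hypothetical host and basic-auth credentials; it is not part of Atlassian's test tooling.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    // Queries the total issue count of a Jira Server instance so it can be
    // compared with the "Issues" row of the baseline table above.
    public class IssueCount {
        public static void main(String[] args) throws Exception {
            String base = "https://jira.example.com";           // hypothetical host
            String auth = Base64.getEncoder()
                    .encodeToString("admin:secret".getBytes()); // hypothetical credentials

            // maxResults=0 asks for no issue bodies; the JSON "total" field
            // still reports how many issues match the (empty) JQL query.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(base + "/rest/api/2/search?jql=&maxResults=0"))
                    .header("Authorization", "Basic " + auth)
                    .header("Accept", "application/json")
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // e.g. {...,"total":1000000,...}
        }
    }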
Actions performed

We chose a mix of actions to represent a sample of the most common user operations. An "action" in this context is a complete user operation, like opening an issue in a browser window. The following table details the actions we included in the script for our testing persona, indicating how many times each action is repeated during a single test run.

Action name       Times per run   Description
View Dashboard    10              Opening the Dashboard page.
Create Issue      5               Submitting the Create Issue dialog.
View Issue        55              Opening an individual issue in a separate browser window.
Edit Issue        5               Editing an issue, submitting the changes, and waiting until they appear in the issue.
Add Comment       2               Adding a comment, and waiting until it appears in the issue.
Search with JQL   10              Performing a search query using JQL in the Issue Navigator interface (queries listed below).
View Board        10              Opening an agile board.
Browse Projects   5               Opening the list of projects (Projects > View All Projects).
Browse Boards     2               Opening the list of agile boards (Agile > Manage Boards).
All Actions       -               A mean of all actions performed during a single test run.

The following JQL queries were used for the Search with JQL action:

  • resolved is not empty order by description
  • text ~ "a*" order by summary
  • priority in [priorityname] order by reporter
  • project = [projectname] order by status
  • assignee = [username] order by project
  • reporter was [username] order by description
  • project = [projectname] and assignee = [username] order by reporter
  • text ~ "[text]" order by key
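
To make the Search with JQL action concrete: the harness drove a real browser through the Issue Navigator, but you can get a rough (and strictly smaller) timing for the same queries through the REST search resource. A minimal sketch, again assuming a hypothetical host and credentials:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Times one JQL query from the list above via REST. The real tests
    // measured the full Issue Navigator page in a browser, so REST timings
    // are only a lower bound, not a reproduction of the published numbers.
    public class JqlTiming {
        public static void main(String[] args) throws Exception {
            String jql = "resolved is not empty order by description";
            String url = "https://jira.example.com/rest/api/2/search?jql="
                    + URLEncoder.encode(jql, StandardCharsets.UTF_8);
            String auth = Base64.getEncoder()
                    .encodeToString("admin:secret".getBytes(StandardCharsets.UTF_8));

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("Authorization", "Basic " + auth)
                    .build();

            long start = System.nanoTime();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("HTTP " + response.statusCode() + " in " + elapsedMs + " ms");
        }
    }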
Test environment

The performance tests were all run on a set of AWS EC2 instances, deployed in the eu-central-1 region. For each test, the entire environment was reset and rebuilt, and then each test started with some idle cycles to warm up instance caches. Below, you can check the details of the environments used for Jira Server and Jira Data Center, as well as the specifications of the EC2 instances.

To run the tests, we used 20 scripted browsers and measured the time taken to perform the actions. Each browser was scripted to perform a random action from a predefined list and immediately move on to the next one (i.e., zero think time). Note that this resulted in each browser performing substantially more tasks than would be possible for a real user, so you shouldn't equate the number of browsers with the number of real-world concurrent users.

Each test was run for 20 minutes, after which statistics were collected. 
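
Atlassian hasn't published the actual automation scripts, but the description above (random action, zero think time, 20-minute runs, Chrome via WebDriver) maps onto a skeleton like the one below. All URLs, the issue key, and the trimmed-down action list are assumptions for illustration; the real action mix and weights are in the table above.

    import java.time.Duration;
    import java.util.List;
    import java.util.Random;
    import java.util.function.Supplier;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // One zero-think-time scripted browser: pick a random action, perform it,
    // record the elapsed time, and loop until the 20-minute budget runs out.
    public class ScriptedBrowser {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            Random random = new Random();
            String base = "https://jira.example.com"; // hypothetical instance

            // driver.get() blocks until the page loads, so timing it
            // approximates a complete user action.
            List<Supplier<String>> actions = List.of(
                    () -> { driver.get(base + "/secure/Dashboard.jspa"); return "View Dashboard"; },
                    () -> { driver.get(base + "/browse/PROJ-1"); return "View Issue"; },
                    () -> { driver.get(base + "/secure/BrowseProjects.jspa"); return "Browse Projects"; });

            long deadline = System.nanoTime() + Duration.ofMinutes(20).toNanos();
            while (System.nanoTime() < deadline) {
                long start = System.nanoTime();
                String name = actions.get(random.nextInt(actions.size())).get();
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(name + ": " + elapsedMs + " ms"); // zero think time: loop immediately
            }
            driver.quit();
        }
    }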



The environment with Jira Server consisted of:

  • 1 Jira node
  • Database on a separate node
  • Load generator on a separate node

The environment with Jira Data Center consisted of:

  • 2 Jira nodes
  • Database on a separate node
  • Load generator on a separate node
  • Shared home directory on a separate node
  • Load balancer (AWS ELB HTTP load balancer)
Jira (Jira Server: 1 node; Jira Data Center: 2 nodes)

Hardware:
  • EC2 type: c5d.9xlarge (see EC2 types)
  • CPU: 3.0 GHz Intel Xeon Platinum 8000-series
  • CPU cores: 36
  • Memory: 72 GB
  • Disk: 900 GB NVMe SSD

Software:
  • Operating system: Ubuntu 16.04 LTS
  • Java platform: Java 1.8.0
  • Java options: 8 GB heap
Database

Hardware:
  • EC2 type: c4.8xlarge (see EC2 types)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 100 GB gp2 (Jira Server); AWS EBS 60 GB gp2 (Jira Data Center)

Software:
  • Database: MySQL 5.5
  • Operating system: Ubuntu 16.04 LTS
Load generator

Hardware:
  • EC2 type: c4.8xlarge (see EC2 types)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 30 GB gp2

Software:
  • Operating system: Ubuntu 16.04 LTS
  • Browser: Google Chrome 62
  • Automation script: Chromedriver 2.33, WebDriver 3.4.0, Java JDK 8u131

Jira 8.5 scalability

Jira's flexibility creates tremendous diversity in our customers' configurations. Analytics data shows that nearly every customer dataset has unique characteristics. Different Jira instances grow in different proportions along each data dimension; frequently, a few dimensions become significantly bigger than the others. In one case, the issue count may grow rapidly while the project count remains constant. In another, the custom field count may be huge while the issue count is small.

Many organizations have their own unique processes and needs. Jira's ability to support these various use cases explains the dataset diversity. However, each data dimension can influence Jira's speed, and this influence is often neither constant nor linear.

To provide users of an individual Jira instance with an optimal experience and avoid performance degradation, it's important to understand how specific Jira data dimensions influence the speed of the application. In this section, we present the results of the Jira 8.5 scalability tests, which investigated the relative impact of various configuration values.

How we tested

  1. As a reference for the test, we used a Jira 8.5 instance with the baseline test data set specified in Testing methodology, and ran the full performance test cycle on it.
  2. To focus on data dimensions and their effect on performance, we didn't test individual actions, but instead used a mean of all actions from the performance tests.
  3. Next, in the baseline data set we doubled each attribute and ran an independent performance test for each doubled value (for example, one run with a doubled number of issues, another with a doubled number of custom fields), while leaving all the other attributes unchanged.
  4. Then, we compared the response times from the doubled data set test cycles with the reference results (see the sketch after this list). With this approach we could isolate and observe how the growing size of individual Jira configuration items affects the speed of an already large Jira instance.
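
Step 4 above reduces to a simple relative comparison. The sketch below shows the arithmetic as we understand it; the mean response times are entirely hypothetical, and only the formula (doubled-run mean versus baseline mean) reflects the methodology:

    import java.util.Map;

    // Compares the "all actions" mean of each doubled-dimension run against
    // the baseline run, printing the relative change per data dimension.
    public class ScalabilityDelta {
        public static void main(String[] args) {
            double baselineMean = 1.20; // seconds; hypothetical baseline mean
            Map<String, Double> doubledRuns = Map.of(
                    "2x issues", 1.38,
                    "2x custom fields", 1.25,
                    "2x projects", 1.21);

            doubledRuns.forEach((dimension, mean) -> {
                double change = (mean - baselineMean) / baselineMean * 100.0;
                System.out.printf("%-17s mean %.2fs (%+.1f%% vs baseline)%n",
                        dimension, mean, change);
            });
        }
    }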

Jira 8.5 vs Jira 7.13

Response times for Jira data sets

All doubled data set tests improved by 30% or more

Summary:
  • Baseline data set improved by 17% (DC: 16%)
  • Issues data set improved by 43% (DC: 47%)
  • Custom fields data set improved by 40% (DC: 40%)
  • Attachments data set improved by 33% (DC: 33%)
  • Agile boards data set improved by 43% (DC: 42%)
  • Users & groups data set improved by 45% (DC: 44%)
  • Workflows data set improved by 44% (DC: 42%)
  • Permissions & security levels data set improved by 39% (DC: 38%)
  • Comments data set improved by 40% (DC: 40%)
  • Projects data set improved by 45% (DC: 44%)
  • All Datasets (mean of all data sets) improved by 40% (DC: 40%)

Jira 8.5 vs Jira 7.6

Response times for Jira data sets

All doubled data set tests improved by 30% or more

Summary:
  • Baseline data set improved by 18% (DC: 18%)
  • Issues data set improved by 50% (DC: 50%)
  • Custom fields data set improved by 41% (DC: 40%)
  • Attachments data set improved by 36% (DC: 35%)
  • Agile boards data set improved by 45% (DC: 43%)
  • Users & groups data set improved by 47% (DC: 46%)
  • Workflows data set improved by 43% (DC: 43%)
  • Permissions & security levels data set improved by 43% (DC: 49%)
  • Comments data set improved by 44% (DC: 43%)
  • Projects data set improved by 44% (DC: 45%)
  • All Datasets (mean of all data sets) improved by 42% (DC: 41%)

Further resources

Archiving issues

The number of issues affects Jira's performance, so you might want to archive issues that are no longer needed. You may also find that a massive number of issues clutters the view in Jira, which is another reason to archive outdated issues from your instance. See Archiving projects.

User Management

As your Jira user base grows, you may want to take a closer look at how you manage users and groups.

Jira Knowledge Base

For detailed guidelines on specific performance-related topics refer to the Troubleshoot performance issues in Jira server article in the Jira Knowledge Base.

Jira Enterprise Services

For help with scaling Jira in your organization directly from experienced Atlassians, reach out to our Premier Support and Technical Account Management services.

Atlassian Experts

The Atlassian Experts in your local area can also help you scale Jira in your own environment. 
