Jira Software 7.13 Long Term Support release performance report
This page compares the performance of Jira 7.6 and Jira 7.13 Long Term Support release.
About Long Term Support releases
We recommend upgrading Jira regularly. However, if your organization's process means you only upgrade about once a year, upgrading to a Long Term Support release may be a good option, as it provides continued access to fixes for critical security, stability, data integrity, and performance issues until the version reaches end of life.
This is an excerpt from the Jira 7.13 performance and scaling report, focusing on performance results for Jira 7.13. You can see the full report here.
Although Jira 7.13 was not focused solely on performance, we aim to provide the same, if not better, performance with each release. In this section, we'll compare Jira 7.6 to the Jira 7.13 Long Term Support release, both Server and Data Center. We ran the same extensive test scenario for both Jira versions.
The following chart presents mean response times of individual actions performed in Jira. To check the details of these actions and the Jira instance they were performed in, see Testing methodology.
Response times for Jira actions
- Jira 7.13.2 shows a significant improvement in response times when viewing boards (-30% Server, -30% Data Center), viewing backlogs (-10% Server, -12% Data Center), and viewing the project summary (-40% Server, -37% Data Center).
- Most actions perform similarly in the two versions. We observed small performance degradations when browsing boards (+3% Server, -9% Data Center), viewing dashboards (+5% Server, +1% Data Center), browsing projects (+6% Server, +1% Data Center), and adding comments (0% Server, +1% Data Center).
- The mean of all actions has improved in Jira 7.13.2 (-5% Server, -5% Data Center).
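As a quick sanity check on how percentage deltas like those above are derived, here is a minimal sketch; the sample values are hypothetical, not measurements from this report:

```python
def pct_change(baseline_ms: float, current_ms: float) -> int:
    """Rounded percentage change in mean response time relative to
    the baseline version (negative = faster in the newer version)."""
    return round((current_ms - baseline_ms) / baseline_ms * 100)

# Hypothetical values, not figures from this report:
assert pct_change(1000.0, 950.0) == -5   # a 5% improvement
assert pct_change(1000.0, 1030.0) == 3   # a 3% degradation
```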
The following sections detail the testing environment, including hardware specification, and methodology we used in our performance tests.
Before we started the test, we needed to determine what size and shape of data set represents a typical large Jira instance. To achieve that, we used our Analytics data to form a picture of our customers' environments and what difficulties they face when scaling Jira in a large organization.
Baseline Jira data set
| Jira data dimension | Value |
|---|---|
We chose a mix of actions that represents a sample of the most common user operations. An action in this context is a complete user operation, like opening an issue in a browser window. The following table details the actions we included in the script for our testing persona, indicating how many times each action is repeated during a single test run.
| Action name | Description | Number of times an action is performed during a single test run |
|---|---|---|
| View Dashboard | Opening the Dashboard page. | 10 |
| Create Issue | Submitting the Create Issue dialog. | 5 |
| View Issue | Opening an individual issue in a separate browser window. | 55 |
| Edit Issue | Editing the Summary, Description, and other fields of an existing issue. | 5 |
| Add Comment | Adding a comment to an issue. | 2 |
| Search with JQL | Performing a search query using JQL in the Issue Navigator. The following JQL queries were used... Half of these queries are very heavyweight, which explains the high average response time. | |
| View Board | Opening an Agile board. | 10 |
| Browse Projects | Opening the list of projects (available under Projects > View All Projects). | 5 |
| Browse Boards | Opening the list of Agile boards (available under Agile > Manage Boards). | 2 |
| All Actions | The mean of all actions performed during a single test run. | - |
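To illustrate how a fixed action mix like this can be expanded into one randomized test run, here is a sketch. The counts come from the table above; the Search with JQL repetition count is not given here, so it is omitted:

```python
import random

# Per-run action counts from the table above (Search with JQL omitted,
# since its repetition count is not specified in this excerpt).
ACTION_MIX = {
    "View Dashboard": 10,
    "Create Issue": 5,
    "View Issue": 55,
    "Edit Issue": 5,
    "Add Comment": 2,
    "View Board": 10,
    "Browse Projects": 5,
    "Browse Boards": 2,
}

def build_schedule(mix, rng=random):
    """Expand the mix into a flat, shuffled list of actions for one run."""
    schedule = [name for name, count in mix.items() for _ in range(count)]
    rng.shuffle(schedule)
    return schedule

schedule = build_schedule(ACTION_MIX)
assert len(schedule) == sum(ACTION_MIX.values())
```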
The performance tests were all run on a set of AWS EC2 instances. For each test, the entire environment was reset and rebuilt, and then each test started with some idle cycles to warm up instance caches. Below, you can check the details of the environments used for Jira Server and Jira Data Center, as well as the specifications of the EC2 instances.
To run the tests, we used 10 scripted browsers and measured the time taken to perform the actions. Each browser was scripted to perform a random action from a predefined list of actions and immediately move on to the next action (i.e., zero think time). Note that this results in each browser performing substantially more actions than would be possible for a real user, so you should not equate the number of browsers with the number of real-world concurrent users.
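A zero-think-time scripted browser can be sketched roughly as follows; the action callables here are placeholders, whereas the real tests drove actual browsers against Jira:

```python
import random
import time

def scripted_browser(actions, duration_s, rng=random):
    """Run one scripted 'browser': repeatedly pick a random action,
    time it, and record the result. There is no sleep between actions,
    i.e., zero think time. `actions` maps names to zero-arg callables."""
    samples = []
    deadline = time.monotonic() + duration_s
    names = list(actions)
    while time.monotonic() < deadline:
        name = rng.choice(names)
        start = time.monotonic()
        actions[name]()  # perform the action (placeholder callable here)
        samples.append((name, (time.monotonic() - start) * 1000.0))
    return samples
```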
Each test was run for 20 minutes, after which statistics were collected.
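Aggregating the collected samples into per-action mean response times, including the overall "All Actions" mean, could look like this sketch (the sample data is hypothetical):

```python
from collections import defaultdict
from statistics import mean

def summarize(samples):
    """Compute the mean response time per action, plus the overall
    'All Actions' mean, from (action_name, response_time_ms) samples."""
    by_action = defaultdict(list)
    for action, ms in samples:
        by_action[action].append(ms)
    report = {action: mean(times) for action, times in by_action.items()}
    report["All Actions"] = mean(ms for _, ms in samples)
    return report

# Hypothetical samples, not data from this report:
report = summarize([("View Issue", 100.0), ("View Issue", 300.0),
                    ("View Board", 400.0)])
assert report["View Issue"] == 200.0
assert abs(report["All Actions"] - 800.0 / 3) < 1e-9
```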
Here are the details of our test environment:
|  | Jira Server | Jira Data Center |
|---|---|---|
| Nodes | 1 node | 2 nodes |
| EC2 type |  |  |
| CPU |  |  |
| CPU cores |  |  |
| Operating system |  |  |
| Java options |  |  |
| Automation script |  |  |