Jira Service Desk 3.16.x Long Term Support release performance report


This page compares the performance of Jira Service Desk 3.9.10 and the Jira Service Desk 3.16 Long Term Support release.


About Long Term Support releases
We recommend upgrading Jira Service Desk regularly. That said, if your organization's process means you only upgrade about once a year, upgrading to a Long Term Support release may be a good option, as it provides continued access to fixes for critical security, stability, data integrity, and performance issues until the version reaches end of life.

Performance

Jira Service Desk 3.16 was not focused solely on performance, but we aim to provide the same, if not better, performance with each release. In this section, we’ll compare Jira Service Desk 3.9.10 to the Jira Service Desk 3.16 Long Term Support release, for both Server and Data Center. We ran the same extensive test scenario for both Jira versions.

The following table presents mean response times of individual actions performed in Jira Service Desk. To check the details of these actions and the Jira instance they were performed in, see Testing methodology.

The performance was measured under a user load we estimate to be peak traffic, on a 5,000 agent instance.

Response times for Jira Service Desk actions (in seconds)

Action                           | 3.9.10 Server | 3.16.1 Server | 3.9.10 Data Center | 3.16.1 Data Center
---------------------------------|---------------|---------------|--------------------|-------------------
View workload report (medium)    | 39.39         | 3.85          | 22.74              | 3.54
View workload report (small)     | 1.054         | 0.908         | 1.094              | 0.793
View requests                    | 1.691         | 1.289         | 1.140              | 1.099
View portals page                | 1.650         | 1.594         | 1.442              | 1.564
View welcome guide               | 0.693         | 0.620         | 0.683              | 0.562
View created vs. resolved report | 1.440         | 1.335         | 1.320              | 1.141
View time to resolution report   | 3.060         | 2.939         | 2.769              | 2.619
Invite team                      | 2.671         | 2.698         | 2.623              | 2.636
View customers page              | 1.649         | 1.688         | 1.603              | 1.609
View queues with SLAs            | 19.12         | 20.15         | 11.42              | 11.86
View queue: All open issues      | 37.85         | 38.90         | 33.75              | 34.44
Create customer request          | 23.61         | 24.97         | 17.72              | 18.85
View service desk issue          | 1.780         | 2.133         | 1.352              | 1.469

In summary

It's great news for reports, with some showing huge improvements. Other key actions have also improved substantially. Highlights (see the arithmetic sketch after this list):

  • Viewing the workload report (medium) improved by 80-90%
  • Viewing the created vs. resolved report improved by 5-15%
  • Viewing the time to resolution report improved by 5-15%
  • Viewing requests improved by 20-30%
  • Viewing the portals page improved by 5-15%
  • Viewing the welcome guide improved by 5-15%
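
As a hedged illustration, here is how such percentages follow from the mean response times in the table above. The class and method names are ours, invented for this sketch, and are not part of any test harness:

```java
// Illustrative arithmetic only: derives the relative improvements quoted
// above from the mean response times in the table. Names are hypothetical.
public class ImprovementMath {

    // Relative improvement, in percent, going from 'before' to 'after'.
    static double percentImprovement(double before, double after) {
        return (before - after) / before * 100.0;
    }

    public static void main(String[] args) {
        // View workload report (medium), Server: 39.39 s -> 3.85 s
        System.out.printf("Workload report (medium), Server: %.1f%%%n",
                percentImprovement(39.39, 3.85));   // ~90.2%, the 80-90% band
        // View requests, Server: 1.691 s -> 1.289 s
        System.out.printf("View requests, Server: %.1f%%%n",
                percentImprovement(1.691, 1.289));  // ~23.8%, the 20-30% band
    }
}
```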

For all remaining actions, performance looks similar between the two versions, with slight degradations of around one second or less observed when viewing queues, issues, and customers, and when creating a customer request.

We'll continue to invest in performance improvements in future releases, so that service desk teams can move with ease through their workspace, and our largest customers can scale with confidence.



Testing methodology

The following sections detail the testing environment, including hardware specifications, and the methodology we used in our performance tests.


How we tested

Before we started the test, we needed to determine what size and shape of dataset represents a typical large Jira Service Desk instance. To achieve that, we used our analytics data to form a picture of our customers' environments and the difficulties they face when scaling Jira Service Desk in a large organization.

The following table presents the rounded values of the 99th percentile of each data dimension. We used these values to generate a sample dataset with random test data.

Baseline data set

Jira Service Desk data dimension | Value
---------------------------------|--------
Issues                           | 300,000
Projects                         | 1,000
Agents                           | 1,000
Customers                        | 100,000
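
As a rough sketch of what computing such a percentile looks like (this is not Atlassian's actual analytics pipeline; the sample numbers and names are invented for illustration):

```java
// A minimal, hypothetical sketch of deriving a 99th-percentile value for one
// data dimension (issues per instance) from analytics samples. The input
// numbers are invented; the report then rounds such values (e.g. ~300,000).
import java.util.Arrays;

public class DimensionPercentile {

    // Nearest-rank 99th percentile over a sorted copy of the samples.
    static long percentile99(long[] samples) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(0.99 * sorted.length) - 1;
        return sorted[rank];
    }

    public static void main(String[] args) {
        long[] issuesPerInstance = {1_200, 45_000, 7_500, 298_764, 310_000};
        System.out.println(percentile99(issuesPerInstance)); // 310000 here
    }
}
```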

Actions performed

We chose a mix of actions representing the most common user operations. An action in this context is a complete user operation, like opening an issue in a browser window. The following table details the actions included in the script for our testing persona, indicating how many times each action is repeated during a single test run.

Action name | Description | Times performed per test run
------------|-------------|------------------------------
Create customer request | Open a customer portal, type in the issue summary and description, then submit the request. | ~850
Invite team | Select Invite team in the left-hand menu, search for an agent on a 1,000 agent instance, choose an agent, click the Invite button, and wait for success confirmation. | ~350
View workload report (small) | Display the workload report for a project with no open issues. | ~85
View workload report (medium) | Display the workload report for a project with 1,000 assigned issues. | ~85
View queue: all open issues | Display the default service desk queue in a project with over 10,000 open issues. | ~340
View queue: with SLAs | Display a custom service desk queue in a project with over 10,000 open issues and 6 SLA values for each issue. | ~170
View customers page | Display the Customers page in a project that has 100,000 customers. | ~350
View portals page | Display the help center, with all customer portals, by selecting the unique help center link. | ~340
View report: created vs resolved | Display the Created vs Resolved report (for the past year), with over 10,000 issues in the timeline. | ~330
View report: time to resolution | Display the Time to resolution report (for the past year), with over 10,000 issues in the timeline. | ~340
View requests | Display the My requests screen from the customer portal. | ~340
View service desk issue | Display a service desk issue with 6 SLA values. | ~520
View welcome guide | Display the Welcome guide from the left-hand menu. | ~340

Test environment

The performance tests were all run on a set of AWS EC2 instances. For each test, the entire environment was reset and rebuilt, and then each test started with some idle cycles to warm up instance caches. Below, you can check the details of the environments used for Jira Service Desk Server and Data Center, as well as the specifications of the EC2 instances.

To run the tests, we used 20 scripted browsers and measured the time taken to perform the actions. Each browser was scripted to perform a random action from a predefined list and immediately move on to the next action, as sketched below. Note that this means each browser performed substantially more actions than a real user could, so the number of browsers should not be equated with the number of real-world concurrent users.

Each test was run for 40 minutes, after which statistics were collected.
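
For illustration, here is a minimal sketch of one such scripted-browser loop, assuming the Selenium WebDriver and ChromeDriver stack listed under Load generator below. The base URLs, action list, and class name are hypothetical, not the actual test script:

```java
// Hypothetical sketch of one scripted browser: pick a random action from a
// predefined list, time it, repeat for 40 minutes. Assumes Selenium WebDriver
// with ChromeDriver; the URLs and action set are invented for illustration.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.function.Consumer;

public class ScriptedBrowser {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Random random = new Random();

        // Each entry is a complete user operation, e.g. opening a portal page.
        List<Consumer<WebDriver>> actions = Arrays.asList(
                d -> d.get("https://jira.example.com/servicedesk/customer/portals"),
                d -> d.get("https://jira.example.com/browse/SD-1")
        );

        long endTime = System.currentTimeMillis() + 40L * 60 * 1000; // 40-minute run
        while (System.currentTimeMillis() < endTime) {
            Consumer<WebDriver> action = actions.get(random.nextInt(actions.size()));
            long start = System.nanoTime();
            action.accept(driver);                         // perform the action
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Action completed in " + elapsedMs + " ms");
        }
        driver.quit();
    }
}
```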

Here are the details of our test environment:

Jira Server

The environment consisted of:

  • 1 Jira node
  • Database on a separate node
  • Load generator on a separate node

Jira Data Center

The environment consisted of:

  • 2 Jira nodes
  • Database on a separate node
  • Load generator on a separate node
  • Shared home directory on a separate node
  • Load balancer (AWS ELB HTTP load balancer)
Jira

Hardware:

  • EC2 type: c4.8xlarge (see EC2 types); Jira Service Desk Server: 1 node, Jira Service Desk Data Center: 2 nodes
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 100 GB gp2

Software:

  • Operating system: Ubuntu 16.04 LTS
  • Java platform: Java 1.8.0
  • Java options: 8 GB heap
Database

Hardware:

  • EC2 type: c4.8xlarge (see EC2 types)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: Jira Service Desk Server: AWS EBS 100 GB gp2; Jira Service Desk Data Center: AWS EBS 60 GB gp2

Software:

  • Database: MySQL 5.5
  • Operating system: Ubuntu 16.04 LTS

Load generator

Hardware:

  • EC2 type: c4.8xlarge (see EC2 types)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 30 GB gp2

Software:

  • Operating system: Ubuntu 16.04 LTS
  • Browser: Google Chrome 62
  • Automation script: Chromedriver 2.33, WebDriver 3.4.0, Java JDK 8u131