Jira Service Desk 4.5.x Long Term Support release performance report

This page compares the performance of Jira Service Desk 3.16.6 and the Jira Service Desk 4.5 Long Term Support release.

About Long Term Support releases

We recommend upgrading Jira Service Desk regularly, but if your organization's processes mean you only upgrade about once a year, upgrading to a Long Term Support release is a good option. It provides continued access to fixes for critical security, stability, data integrity, and performance issues until the version reaches end of life.

Performance

Jira Service Desk 4.5 was not focused solely on performance, but we aim to provide the same, if not better, performance with each release. In this section, we’ll compare Jira Service Desk 3.16.6 to Jira Service Desk 4.5, for both Server and Data Center. We ran the same extensive test scenario for both versions.

The following graphs present the difference in mean response times of individual actions performed in Jira Service Desk, separated into three categories: heavier actions (take longer to run), medium actions, and lighter actions (faster to run). The same mean response times are listed in the table below the graphs.

Performance was measured under a user load we estimate to represent peak traffic, on an instance with 5,000 users.

To check the details of these actions and the Jira instance they were performed in, see Testing methodology.

The following table presents mean response times of individual actions performed in Jira Service Desk.


Average response times are given in seconds (lower is better). (+) marks an improvement over 3.16.6, (−) marks a regression.

Action | 3.16.6 Server | 4.5.0 Server | 3.16.6 Data Center | 4.5.0 Data Center
View welcome guide | 0.723 | 0.511 (+) | 0.675 | 0.464 (+)
View workload report (small) | 1.020 | 0.632 (+) | 1.174 | 0.555 (+)
View requests | 1.064 | 0.599 (+) | 0.852 | 0.403 (+)
View requests: with filter | 0.662 | 0.431 (+) | 0.482 | 0.279 (+)
View a customer request on the customer portal | 0.340 | 0.402 (−) | 0.355 | 0.403 (−)
Search for an organization to share a request with | 0.458 | 0.379 (+) | 0.465 | 0.334 (+)
Search for a customer to share a request with | 0.546 | 0.338 (+) | 0.472 | 0.339 (+)
View queue: all open issues | 69.617 | 23.33 (+) | 45.344 | 16.306 (+)
View queue: with SLAs | 22.088 | 1.963 (+) | 16.887 | 1.585 (+)
Share a request with a customer on the customer portal | 8.911 | 2.053 (+) | 6.932 | 1.870 (+)
Share a request with an organization on the customer portal | 8.934 | 1.963 (+) | 5.690 | 1.736 (+)
Remove an organization from a request | 6.664 | 1.644 (+) | 4.281 | 1.194 (+)
Remove a customer from a request | 9.928 | 1.957 (+) | 8.541 | 1.612 (+)
Create customer request | 4.262 | 1.787 (+) | 4.092 | 1.363 (+)
Add a comment to a request on the customer portal | 3.977 | 1.145 (+) | 3.383 | 1.069 (+)
View report: time to resolution | 4.758 | 3.989 (+) | 3.549 | 3.046 (+)
View organizations page | 8.893 | 3.419 (+) | 9.522 | 2.551 (+)
View workload report (medium) | 5.720 | 4.526 (+) | 5.179 | 3.506 (+)
Invite team | 2.752 | 2.643 (+) | 2.714 | 2.55 (+)
View queue: small | 2.355 | 0.998 (+) | 2.492 | 0.835 (+)
View service desk issue | 2.621 | 1.131 (+) | 2.204 | 0.972 (+)
View report: created vs resolved | 0.993 | 0.702 (+) | 1.034 | 0.640 (+)
View portals page | 1.332 | 1.067 (+) | 1.017 | 0.723 (+)
View customers page | 1.649 | 0.709 (+) | 1.547 | 0.606 (+)

In summary

We have performance improvements in almost all scenarios across the product, under high load. This is our fastest version ever! Highlights:

  • Viewing queues with SLAs is now 10x faster
  • Adding a comment in the customer portal is now 3x faster 
  • Viewing queue: all open issues is 3x faster
  • Creating a customer request is 2.5x faster 
  • Viewing customers/organizations page is now 2.5x faster
  • Viewing a service desk issue in the agent view is 2x faster

Overall, we cut response times by 20% to 90% across the scenarios, with two exceptions: "Invite team" showed no measurable improvement, and "View a customer request on the customer portal" regressed slightly, by about 50 ms.
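To see how those multipliers and the 20-90% range follow from the table above, here is a minimal worked example (ours, not part of the test harness): the speedup factor is simply the 3.16.6 mean divided by the 4.5.0 mean, and the reduction is one minus the inverse. The class and method names are illustrative only, and the values are the Server means copied from the table.

```java
// Illustrative only: derive speedup factors and percentage reductions
// from a few of the Server means quoted in the table above.
public class SpeedupCheck {

    public static void main(String[] args) {
        printChange("View queue: with SLAs", 22.088, 1.963);                              // ~11x faster, ~91% lower
        printChange("Add a comment to a request on the customer portal", 3.977, 1.145);   // ~3.5x faster, ~71% lower
        printChange("Invite team", 2.752, 2.643);                                         // ~4% lower: no real change
    }

    static void printChange(String action, double before, double after) {
        double speedup = before / after;                 // how many times faster 4.5.0 is than 3.16.6
        double reduction = (1 - after / before) * 100;   // response time cut, in percent
        System.out.printf("%s: %.1fx faster (%.0f%% lower)%n", action, speedup, reduction);
    }
}
```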

We'll continue to invest in performance in future releases, so that service desk teams can move with ease through their workspace and our largest customers can scale confidently.

Testing methodology

The following sections detail the testing environment, including hardware specifications, and the methodology we used in our performance tests.

How we tested

Before we started the test, we needed to determine what size and shape of dataset represents a typical large Jira Service Desk instance. To achieve that, we used our Analytics data to form a picture of our customers' environments and what difficulties they face when scaling Jira Service Desk in a large organization.

The following table presents the rounded values of the 99th percentile of each data dimension. We used these values to generate a sample dataset with random test data.

Baseline data set

Data | Value
Comments | 609,570
Components | 7,195
Custom fields | 42
Groups | 3
Issue types | 13
Issues | 302,109
Priorities | 5
Projects | 1,001
Resolutions | 8
Screen schemes | 2,395
Screens | 14,934
Statuses | 23
Users | 101,003
Versions | 3
Workflows | 3,717

Actions performed

We chose a mix of actions representing the most common user operations. An action in this context is a complete user operation, like opening an issue in the browser window. The following table details the actions included in the script for our testing persona, and indicates how many times each action is repeated during a single test run.

Action | Description | Number of times the action is performed in a single test run
Add a comment to a request on the customer portal | Open a random customer request in the portal and, as an agent, add a random comment to it. | ~200
Create customer request | Open a customer portal, type in the issue summary and description, then submit the request. | ~200
Invite team | Select Invite team in the left-hand-side menu, search for an agent on a 1,000 agent instance, choose an agent, click the Invite button, and wait for success confirmation. | ~300
Remove a customer from a request | Open a random customer request in the portal, and remove a random customer on the "shared with" column. | ~100
Remove an organization from a request | Open a random customer request in the portal, and remove a random organization on the "shared with" column. | ~100
Search for an organization to share a request with | Open a random customer request in the portal, and search for a random organization to share the request with. | ~100
Search for a customer to share a request with | Open a random customer request in the portal, and search for a random customer to share the request with. | ~100
Share a request with an organization on the customer portal | Open a random customer request in the portal, and share the request with a random organization. | ~100
Share a request with a customer on the customer portal | Open a random customer request in the portal, and share the request with a random customer. | ~100
View workload report (small) | Display the workload report for a project with no open issues. | ~1000
View workload report (medium) | Display the workload report for a project with 1,000 assigned issues and 700 agents. | ~1500
View queue: all open issues | Display the default service desk queue, in a project with over 10,000 open issues. | ~930
View queue: small | Display a custom service desk queue that will filter out most of the issues, in a project with over 10,000 open issues. | ~2500
View queue: with SLAs | Display a custom service desk queue, in a project with over 10,000 open issues, with 6 SLA values for each issue. | ~2500
View customers page | Display the Customers page, in a project that has 100,000 customers. | ~1000
View organizations page | Display the Customers page, in a project that has 50 organizations and 300 customers. | ~1000
View portals page | Display the help center, with all customer portals, by selecting the unique help center link. | ~2000
View report: created vs resolved | Display the Created vs Resolved report (for the past year), with over 10,000 issues in the timeline. | ~2000
View report: time to resolution | Display the Time to resolution report (for the past year), with over 10,000 issues in the timeline. | ~2000
View requests | Display the My requests screen from the customer portal. | ~3000
View requests: with filter | Display the My requests screen from the customer portal, filtering the results with a single word in the summary. | ~3000
View service desk issue | Display a service desk issue with 6 SLA values in the Agent view. | ~3000
View a customer request on the customer portal | Display a random issue in the customer portal. | ~400
View welcome guide | Display the Welcome guide from the left-hand-side menu. | ~1000

Test environment for user actions

The performance tests were all run on a set of AWS EC2 instances. For each test, the entire environment was reset and rebuilt, and then each test started with some idle cycles to warm up instance caches. Below, you can check the details of the environments used for Jira Service Desk Server and Data Center, as well as the specifications of the EC2 instances.

To run the tests, we used 21 scripted browsers and measured the time taken to perform the actions. Each browser was scripted to perform a random action from a predefined list of actions and then immediately move on to the next action (i.e., zero think time). Note that this resulted in each browser performing substantially more actions than a real user could, so the number of browsers should not be equated with the number of real-world concurrent users.

Each test was run for 40 minutes, after which statistics were collected.
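As an illustration of what one of those scripted browsers does, here is a minimal sketch using Selenium WebDriver and headless Chrome. It is not the actual Atlassian test harness: the base URL, action list, and class name are hypothetical, and it assumes selenium-java and a matching chromedriver are available. The real scripts cover the full action mix listed above and feed their timings into the aggregated statistics.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class LoadWorker {

    // One scripted action: a name for reporting plus the WebDriver steps that perform it.
    static final class NamedAction {
        final String name;
        final Consumer<WebDriver> steps;
        NamedAction(String name, Consumer<WebDriver> steps) {
            this.name = name;
            this.steps = steps;
        }
    }

    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");            // headless Chrome, as in the test rig
        WebDriver driver = new ChromeDriver(options);

        // Trimmed-down, hypothetical action list; the real scripts drive the full
        // Jira Service Desk UI (view queues, create requests, add comments, and so on).
        List<NamedAction> actions = Arrays.asList(
                new NamedAction("View portals page",
                        d -> d.get("https://jira.example.com/servicedesk/customer/portals")),
                new NamedAction("View requests",
                        d -> d.get("https://jira.example.com/servicedesk/customer/user/requests"))
        );

        Random random = new Random();
        long endTime = System.nanoTime() + TimeUnit.MINUTES.toNanos(40);   // 40-minute test run

        while (System.nanoTime() < endTime) {
            NamedAction action = actions.get(random.nextInt(actions.size()));
            long start = System.nanoTime();
            action.steps.accept(driver);
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            System.out.printf("%s,%d%n", action.name, elapsedMs);          // raw sample for later aggregation
            // No pause here: zero think time between actions.
        }

        driver.quit();
    }
}
```

Because there is no pause between actions, a single scripted browser generates far more load than a single real user, which is why the browser count cannot be read as a concurrent-user count.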

Here are the details of our test environment:

Jira Service Desk Server

The environment consisted of:

  • 1 Jira node
  • Database on a separate node
  • Load generator on a separate node

Jira Service Desk Data Center

The environment consisted of:

  • 3 Jira nodes
  • Database on a separate node
  • Load generator on a separate node
  • Shared home directory on a separate node
  • Load balancer (AWS ELB HTTP load balancer)
Jira Service Desk Server node

Hardware:
  • EC2 type: c4.8xlarge (1 node)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 100 GB gp2

Software:
  • Operating system: Ubuntu 16.04 LTS
  • Java platform: Java 1.8.0
  • Java options: 16 GB heap

Jira Service Desk Data Center node

Hardware:
  • EC2 type: c5.4xlarge (1 node)
  • CPU: Intel Xeon Platinum 8000 series (Skylake-SP)
  • CPU cores: 16
  • Memory: 32 GB
  • Disk: AWS EBS 100 GB gp2

Software:
  • Operating system: Ubuntu 16.04 LTS
  • Java platform: Java 1.8.0
  • Java options: 16 GB heap

Database node

Hardware:
  • EC2 type: m4.2xlarge (see EC2 types)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 8
  • Memory: 32 GB
  • Disk: AWS EBS 100 GB gp2 (Server tests), AWS EBS 60 GB gp2 (Data Center tests)

Software:
  • Database: MySQL 5.5
  • Operating system: Ubuntu 16.04 LTS

Load generator node

Hardware:
  • EC2 type: c4.8xlarge (see EC2 types)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 30 GB gp2

Software:
  • Operating system: Ubuntu 16.04 LTS
  • Browser: headless Chrome
  • Automation script: Chromedriver 3.11.0, WebDriver 3.4.0, Java JDK 8u131

Test environment for indexing measures

Jira Service Desk Server node

Hardware:
  • EC2 type: c4.8xlarge (1 node)
  • CPU: Intel Xeon E5-2666 v3 (Haswell)
  • CPU cores: 36
  • Memory: 60 GB
  • Disk: AWS EBS 100 GB gp2

Software:
  • Operating system: Ubuntu 16.04 LTS
  • Java platform: Java 1.8.0
  • Java options: 16 GB heap
  • Indexing threads: default (10 on 3.16 and 20 on 4.5)

Database node

Hardware:
  • EC2 type: m4.2xlarge (see EC2 types)
  • CPU: 2.4 GHz Intel Xeon E5-2676 v3
  • CPU cores: 8
  • Memory: 32 GB
  • Disk: AWS EBS 100 GB gp2 (Server tests), AWS EBS 60 GB gp2 (Data Center tests)

Software:
  • Database: MySQL 5.5
  • Operating system: Ubuntu 16.04 LTS
