JIRA Performance Testing with Grinder

Do not do this on production data

The Grinder tests are destructive as they modify data in random ways; make sure you only run these instructions on test instances.

Please note that Atlassian provides these Grinder test scripts and instructions at no cost for anyone to use; however, they are not supported by Atlassian. For support you might like to visit Grinder Support or seek professional assistance.

Two main reasons to run these performance tests:

1. You are evaluating JIRA and want to size up your hardware requirements. In this case, you can use the JIRA Data Generator plugin to create a JIRA instance of any size on your available hardware, then run these Grinder tests to see if your hardware is performant.

2. You have a current JIRA instance and want to know if your current hardware will meet your future needs. In this case, create a replica JIRA instance and then run these Grinder tests. Use the JIRA Data Generator plugin to create your projected JIRA instance and again run these Grinder scripts and compare the results.

Our Test Architecture

As part of our JIRA 5.1 efforts we have closely examined the performance of JIRA to try to eliminate bottlenecks. The main tool we have used is The Grinder, an agent-based load testing and measurement tool that lets us generate load and then gives us a set of results we can analyze. In addition, we have defined "personas" that we use to drive the load. Each persona represents a particular type of real-world use case that we expect; we based these use cases on analysis of our public-facing JIRA instances (jira.atlassian.com and support.atlassian.com) as well as several of our own internal instances.

There are four personas which we will briefly explain below:

  • Issue Browser - predominantly views issues and comments on them.
  • Issue Creator - mainly creates issues, and also performs a lot of searching.
  • Project Manager - mainly searching, transitioning issues, and viewing reports.
  • Administrator - makes changes to projects, screen configurations, adds and removes users.

In our architecture we can define the number of personas, the mix of personas, and how many agents will run. All agents run remotely and send their results back over ssh to a Bamboo agent.



In true Atlassian fashion we want to share these scripts with you so that you can test your own JIRA instance in your own environment. However, there are some points you should bear in mind:

  1. The configuration for local tests runs the agents and the server under test on the same machine - this can skew results if the machine is not powerful enough or lacks memory. You should carefully monitor your JVM heaps for signs of agent swapping; the results obtained may vary significantly from run to run if you do not provide enough memory and CPU resources to the agents.
  2. Grinder tests tend to show much worse results than browser-based timing tests. This is because Grinder uses HttpClient in single-threaded mode, so requests are submitted sequentially and none of the parallelism a browser would use is exploited. This is offset a little by not having to render pages, but you should consider any timings from Grinder to be very pessimistic - the value is in the relative change in these timings.
  3. Although you will use your own data for the tests, you must make sure there is some commonality with our test assumptions, which may involve some configuration of your test instance - for instance, our tests assume a default dashboard with certain gadgets present, and they rely on the group jira-developers having assign permissions to all projects.

Preparation for Running the Grinder Tests

Prerequisites

(warning) You should have a JIRA test server set up for these performance tests. Do not perform these tests on your production server!

Hardware

A fairly powerful computer to run the Grinder agents - as a minimum we suggest a dual core with 4GB of RAM. It is a good idea to run the agents on a separate machine from the JIRA server; if you have plenty of memory and CPU resources you can run the agents on the same machine, but you will be more prone to test timing jitter.

Software

  • Java (Java 5 minimum, 6 preferred).
  • Maven 2 (this is optional, it is the easiest way to run the tests, but instructions are provided to manually run the tests).
  • Linux like OS (If running on Windows you will probably have to install Cygwin as all the scripts are bash shell scripts).
  • If you install the Grinder yourself rather than using the embedded version provided, make sure it is Grinder 3.4.
  • Similarly we embed Jython, but if you wish to use your own it must be Jython 2.2.1.

Get the Test Code

Check out the grinder tests:

#Using SSH
$ git clone git@bitbucket.org:atlassian/jira-performance-tests.git
#Using HTTPS
$ git clone https://bitbucket.org/atlassian/jira-performance-tests.git


If you examine the checkout you'll find a POM that controls the build and dependency management, a set of Python scripts, and a properties file that controls the test run. There is a directory for each supported version of the tests, so you should make any modifications in the appropriate folder.

(info) All the dependencies are included in the POM, so you shouldn't have to install Grinder yourself.

The Grinder.properties file

For now you shouldn't have to modify anything, but you may want to examine grinder.properties. As you can see, it is pretty straightforward: it specifies a log folder and the number of test runs each agent should perform.

It includes the following properties:

grinder.logDirectory=../log                                # location of the log folder
grinder.dcrinstrumentation=true                            # whether to use DCR instrumentation; requires Java 6, set to false if you only have Java 5
grinder.runs = 10                                          # each 'persona' will run through its tests this many times
grinder.jira.config.issues=jira_issues.txt                 # the file listing issues to be tested
grinder.jira.config.projects=jira_projects.txt             # the file listing projects to be tested
grinder.jira.config.users=jira_users.txt                   # the file listing users to use; they must have assign and create rights to the issues and projects listed above
grinder.jira.http.record=true                              # whether to measure individual HTTP requests; best left true as it gives the best fidelity
grinder.jira.cache.record=true                             # whether to measure individual cache request times
grinder.jira.thinktime=100                                 # the pause time between requests; Grinder randomizes the pauses around this central value

To verify everything is working, you may want to set the property grinder.runs to a low value (such as 1) before running your first test, as this will allow it to complete more quickly.
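Before a run, it can be useful to sanity-check the properties file and confirm the data files it references actually exist. The following is an illustrative sketch, not part of the distribution; the `load_properties` and `check_data_files` helpers are hypothetical names, and the script assumes the simple `key=value` format with `#` comments shown above.

```python
import os

def load_properties(path):
    """Parse a simple key=value properties file, ignoring '#' comments."""
    props = {}
    with open(path) as fh:
        for line in fh:
            line = line.split('#', 1)[0].strip()  # drop trailing comments
            if '=' in line:
                key, value = line.split('=', 1)
                props[key.strip()] = value.strip()
    return props

def check_data_files(props, base_dir='.'):
    """Return the data-file properties whose files cannot be found."""
    keys = ('grinder.jira.config.issues',
            'grinder.jira.config.projects',
            'grinder.jira.config.users')
    return [k for k in keys
            if not props.get(k)
            or not os.path.exists(os.path.join(base_dir, props[k]))]
```

Any key reported by `check_data_files` points at a data file you still need to generate or copy into place before starting the test.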

Prepare Your JIRA Instance

You can run the tests against an existing JIRA test instance, but you must make sure the data is in the shape the tests expect, as detailed below.

Users

You will need to specify the users for a test by providing a file called jira_users.txt in the test_scripts folder (you can change the name and location of the file by editing grinder.properties). This file lists the users you want to use for the test run. These users must have full rights to all the projects you wish to use during the test run - in the tests we run internally we simply use all the users in the jira-developers group. The jira_users.txt file is a comma-separated value file in the form username,password. It looks like this:

jira_users.txt

admin, admin
dev,dev
dev1,dev1

The tests expect the admin account to exist with username admin and password admin. You should have at least 10 users for the tests.

Projects

Any projects you wish to use in the test must be specified in the file jira_projects.txt in the form project name, project id, project key. It looks like this:

jira_projects.txt

derbyanus,10111,DBYS
desertor,10238,DERR
desmarestianus,10223,DESS


Issues

The tests browse to the issues listed in the file jira_issues.txt. This file has the form Issue Key, Issue Id and looks like this:

jira_issues.txt 

HANI-1540,176339
HANI-1541,176340
HANI-1542,176341
HANI-1543,176342
HANI-1544,176343
HANI-1545,176344


The downloaded resources include a plugin FixGrinderData in the plugins folder. Install the appropriate plugin for your JIRA version (the plugins are of the form FixGrinderData-JIRAXX.jar, where XX indicates the JIRA version).

You must run this plugin as an administrative user; it will be available in the administration menu under the Import section. When you run it, it will check for the admin/admin account and create it if not present, and will then change the passwords of all users in the jira-developers group. Finally it will create the files jira_users.txt, jira_projects.txt and jira_issues.txt in your JIRA home folder.
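If you want to confirm the generated files match the test assumptions (an admin/admin account present, at least 10 users), a small check like the one below can help. This is a sketch, not part of the distribution; the helper names are hypothetical and the rules encoded are the ones described above.

```python
def read_csv_lines(path):
    """Read a Grinder data file into a list of stripped field tuples."""
    rows = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line:
                rows.append(tuple(field.strip() for field in line.split(',')))
    return rows

def check_users(rows):
    """The tests assume an admin/admin account and at least 10 users."""
    problems = []
    if ('admin', 'admin') not in rows:
        problems.append('admin/admin account missing')
    if len(rows) < 10:
        problems.append('fewer than 10 users (%d found)' % len(rows))
    return problems
```

The same `read_csv_lines` helper can be used to eyeball jira_projects.txt and jira_issues.txt, since all three files share the simple comma-separated layout.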

Dashboards

You must have a default system dashboard, and this should include the following gadgets:

  • Login gadget
  • Assigned to me gadget
  • Favorite Filters gadget
  • Activity Streams gadget

Filters

The tests use filters with ids 10000-10004 inclusive. If you wish to use a different range of filters, open the interactions.py file and find the useSavedFilter method; you can then edit the line:

filterId = 10000 + randint(0, 4)

to something like:

filterId = 10000 + randint(2, 6)

This example will use filters 10002-10006 for testing. You are free to use any other mechanism to pick a filter ID; for example, you may wish to specify a list to select from.
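A list-based pick could look like the following sketch; the ids in SAVED_FILTER_IDS are placeholders, so substitute the ids of your own saved filters.

```python
from random import choice

# Hypothetical replacement for the randint-based line in useSavedFilter:
# choose from an explicit list of saved-filter ids instead of a range.
SAVED_FILTER_IDS = [10002, 10005, 10147]  # placeholder ids - use your own

def pick_filter_id():
    return choice(SAVED_FILTER_IDS)
```

Inside useSavedFilter you would then assign filterId = pick_filter_id() in place of the original line.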

Running the tests

Make sure you have started your JIRA instance, and that you have copied the generated issues, users and projects files to the location you specified in your grinder.properties file.

You can then launch the tests by providing the version information and the host address as shown below:

mvn clean verify -Djira.version=5.1 -Djira.host=localhost -Dhttp.port=8080 -Djira.context.path=/

The allowable versions are: 4.0, 4.4, 5.0 and 5.1.

Alternatively you can run the tests without maven. To do this:

  • Copy the script files that match your JIRA version - for instance, if you are running 4.4.5 then copy the scripts from jira_4.4 into the folder /src/main/resources/test_scripts
  • Run the script file harness.sh jira.url, where jira.url is the base URL of the server under test.

What You Should Get

The scripts run ten persona instances drawn from the four 'personas' described further up the page. By default they are:

  • 6 issue browsers.
  • 2 issue creators.
  • 1 project manager.
  • 1 administrator.

(info) You can change this mix by editing the PERSONAS_COUNT array in common.sh

You will find the output logs in target/classes/logs - each of the personas puts its logs in a folder named after itself, with the agent number. For example: issue_creator_6

There are usually three logs in these folders; if there are any errors, a fourth error log will also be present:

  1. agent_host.log - the grinder agent output log.  The main item of interest is the garbage collection stats
  2. data_host.log - the raw timing data for each request.
  3. out_host.log - the aggregate timing data.
  4. error_host.log - any errors are reported here.

(info) The most important log is out_host.log, as this includes the timing information from the test. You can tie the tests reported in out_xxxx.log to those in data_xxxx.log, but it is a slow and laborious task and doesn't add a great deal of value.

agent_host.log mainly lists the garbage collection, but if there is an error log, then it provides some extra helpful information.

A sample log file may look like this:

output log for a single run for a single persona
5/30/12 8:26:05 PM (process scmods-0): elapsed time is 47430 ms
5/30/12 8:26:05 PM (process scmods-0): Final statistics for this process:
             Tests        Errors       Mean Test    Test Time    TPS          Mean         Response     Response     Mean time to Mean time to Mean time to 
                                       Time (ms)    Standard                  response     bytes per    errors       resolve host establish    first byte   
                                                    Deviation                 length       second                                 connection                
                                                    (ms)                                                                                                    
(Test 1      1            0            249.00       0.00         0.02         369.00       7.78         0            0.00         6.00         51.00)        "view dashboard - not logged in"
(Test 2      1            0            555.00       0.00         0.02         7225.00      152.33       0            0.00         10.00        49.00)        "login & view dashboard"
(Test 4      1            0            2439.00      0.00         0.02         0.00         0.00         0            0.00         0.00         0.00)         "edit 5 issues"
(Test 5      1            0            347.00       0.00         0.02         0.00         0.00         0            0.00         0.00         0.00)         "add comment to 2 issues"
(Test 6      1            0            3212.00      0.00         0.02         165160.00    3482.18      0            1.00         4.00         373.00)       "do 10 workflow transitions"
(Test 7      1            0            9313.00      0.00         0.02         0.00         0.00         0            0.00         0.00         0.00)         "perform 20 searches in the issue navigator"
(Test 8      1            0            1515.00      0.00         0.02         0.00         0.00         0            0.00         0.00         0.00)         "do 10 searches via saved filters"
(Test 9      1            0            5360.00      0.00         0.02         0.00         0.00         0            0.00         0.00         0.00)         "view 15 projects with random tabs"
(Test 11     1            0            9984.00      0.00         0.02         0.00         0.00         0            0.00         0.00         0.00)         "create 20 issues"
(Test 12     1            0            1991.00      0.00         0.02         72250.00     1523.30      0            0.00         25.00        404.00)       "view dashboard 10 times"
Test 100     1            0            28.00        0.00         0.02         42093.00     887.48       0            0.00         2.00         27.00         "HTTP-REQ : home not logged in"
Test 101     1            0            115.00       0.00         0.02         295.00       6.22         0            0.00         2.00         115.00        "HTTP-REQ : perform login"
Test 102     11           0            26.64        2.87         0.23         55845.64     12951.76     0            0.00         0.64         25.18         "HTTP-REQ : home logged in"
Test 103     11           0            82.64        6.58         0.23         15981.45     3706.43      0            0.00         0.64         82.45         "HTTP_REQ : activity streams"
Test 104     58           0            9.72         14.12        1.22         13793.10     16866.96     0            0.00         0.71         9.34          "HTTP_REQ : gadgets"
Test 200     20           0            178.25       6.73         0.42         399493.45    168456.02    0            0.00         0.00         175.55        "HTTP-REQ : issue navigator"
Test 201     2            0            351.00       47.00        0.04         840564.00    35444.40     0            0.00         0.00         344.00        "HTTP-REQ : issue navigator simple search"
Test 202     18           0            269.67       53.09        0.38         369656.28    140287.01    0            0.00         0.00         267.06        "HTTP-REQ : issue navigator advanced search"
Test 300     10           0            11.90        0.54         0.21         22911.00     4830.49      0            0.00         0.00         10.80         "HTTP-REQ : manage saved filters"
Test 301     10           0            123.30       62.33        0.21         198584.40    41868.94     0            0.00         0.00         121.70        "HTTP-REQ : use a saved filter"
Test 500     15           0            351.60       23.53        0.32         70633.07     22338.10     0            0.00         0.00         350.13        "HTTP-REQ : view project"
Test 600     33           0            82.36        17.17        0.70         74435.33     51789.29     0            0.06         0.48         81.21         "HTTP-REQ : view issue"
Test 601     2            0            102.50       1.50         0.04         0.00         0.00         0            0.00         0.00         102.50        "HTTP-REQ : add comment to issue"
Test 602     5            0            133.60       23.04        0.11         409735.00    43193.65     0            0.00         0.80         132.60        "HTTP-REQ : edit issue dialog"
Test 603     5            0            22.40        19.31        0.11         0.00         0.00         0            0.00         0.80         22.40         "HTTP-REQ : edit issue populate dropdowns"
Test 604     5            0            174.40       37.39        0.11         0.00         0.00         0            0.00         0.80         174.20        "HTTP-REQ : edit issue"
Test 605     10           0            6.70         0.46         0.21         0.00         0.00         0            0.10         0.40         6.70          "HTTP-REQ : workflow transition"
Test 606     10           0            89.70        16.92        0.21         0.00         0.00         0            0.10         0.40         89.40         "HTTP-REQ : workflow completion"
Test 700     20           0            128.10       9.32         0.42         489373.75    206356.21    0            0.00         0.00         126.05        "HTTP-REQ : create issue dialog"
Test 701     20           0            9.50         9.08         0.42         0.00         0.00         0            0.00         0.00         9.30          "HTTP-REQ : create issue populate dropdowns"
Test 702     20           0            333.75       55.41        0.42         488311.30    205908.20    0            0.00         0.00         331.65        "HTTP-REQ : create issue"
Test 10000   116          0            2.81         3.97         2.45         35395.91     86568.10     0            0.00         1.38         2.18          "CACHE : dashboard"
Test 20000   56           0            3.05         5.05         1.18         47400.52     55965.19     0            0.00         0.00         2.46          "CACHE : issue navigator"
Test 30000   51           0            3.02         5.28         1.08         51285.49     55145.69     1            0.00         0.00         2.67          "CACHE : filters"
Test 50000   22           0            3.45         5.47         0.46         58836.59     27290.85     0            0.00         0.00         3.05          "CACHE : view projects"
Test 60000   26           0            2.92         5.41         0.55         52977.77     29041.16     0            0.00         0.77         2.46          "CACHE : view issue"
Test 60002   17           0            1.06         0.24         0.36         661.82       237.21       0            0.00         0.00         1.00          "CACHE : edit issue"
Test 70000   286          0            1.02         0.21         6.03         727.66       4387.73      0            0.00         0.35         0.85          "CACHE : create issues"
Totals       861          0            39.18        85.67        0.65         66849.70     43340.25     1            0.00         0.44         38.56        
             (10)         (0)                                                                                                                               

The tests in brackets are high-level groupings of many requests, which is why most of them don't have response lengths. The interesting data is in the HTTP-REQ section, where individual HTTP requests are broken down. Finally, we also display the number of requests for cached resources - you can disable this by setting grinder.jira.cache.record=false in the grinder.properties file.

Further Analysis

You can export the data to a CSV file to create local graphs. There is also a plethora of third-party tools for analyzing Grinder logs, such as The Grinder Analyzer.
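As a sketch of the CSV export, the script below pulls the per-test rows out of an out_*.log file. The regular expression and column names are assumptions based on the sample log layout shown above, not part of the distribution.

```python
import csv
import re

# Matches both plain "Test N ..." rows and the bracketed grouping rows,
# capturing the test number, the numeric columns, and the quoted name.
ROW = re.compile(r'^\(?Test\s+(\d+)\s+(.*?)\)?\s+"(.*)"\s*$')

# Column names follow the header block in the sample log above.
COLUMNS = ['test', 'tests', 'errors', 'mean_ms', 'stddev_ms', 'tps',
           'mean_resp_len', 'resp_bytes_per_sec', 'resp_errors',
           'resolve_host_ms', 'establish_conn_ms', 'first_byte_ms', 'name']

def parse_out_log(lines):
    """Extract per-test rows from the lines of an out_*.log file."""
    rows = []
    for line in lines:
        m = ROW.match(line)
        if m:
            rows.append([m.group(1)] + m.group(2).split() + [m.group(3)])
    return rows

def write_csv(rows, path):
    """Write the parsed rows to a CSV file for graphing elsewhere."""
    with open(path, 'w', newline='') as fh:
        writer = csv.writer(fh)
        writer.writerow(COLUMNS)
        writer.writerows(rows)
```

Run `write_csv(parse_out_log(open('out_host.log')), 'results.csv')` against a log and the resulting CSV can be graphed in a spreadsheet or plotting tool.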

Future Resources

  1. Apart from exporting the data to a CSV file, you could run a utility that converts the data to create local graphs. We hope to have this available to customers in the near future.
  2. At present you need to attach a profiler manually - we may possibly add a profiling profile to allow automatic attachment.

Last modified on May 27, 2016
