Troubleshooting Jira performance with Thread dumps

Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

This page provides a way of collecting thread dumps (a breakdown of what every thread is doing for a Java process) and the output of top on Linux/Unix operating systems (which shows the resources each native OS thread is consuming). This breakdown could normally be collected with a tool such as jProfiler, as per Use jProfiler to analyse Jira application performance - in this example we're using native (free) tools to collect the information.

This requires jstack, which ships with the JDK, to be installed. Another option for a high-level overview is Using jvmtop to analyze JIRA performance.
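
Before a slowdown hits, it's worth confirming jstack is actually available on the server - a minimal check, assuming a Bourne-compatible shell:

# jstack ships with the JDK (not the JRE); use one matching Jira's Java version
command -v jstack >/dev/null || echo "jstack not found - install a matching JDK"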


You may capture Thread dumps whenever Jira is taking longer than expected to process requests or taking up more machine resources than expected.


Environment

Any version of Jira Software or Jira Service Management, Server or Data Center.


Solution

If running Jira on a container, skip to the Steps for Atlassian Docker containers below.

  1. Optional: Enable Thread diagnostics in Administration > System > Troubleshooting and Support Tools. This will provide additional details on the Tomcat threads (namely the URI and username) and will allow for faster troubleshooting.

    1. This is preferred, but if your system is currently in a bad state, skip this step and proceed to generate the thread dumps.
    2. In earlier versions, you might not see the Thread diagnostics tab. This feature was introduced with Atlassian Troubleshooting and Support Tools version 1.32.3 (bundled plugin). You can upgrade this plugin or use the Thready plugin instead.
  2. Download and install the scripts located at https://bitbucket.org/atlassianlabs/atlassian-support/downloads/ (see the sketch after this list) or run the commands in the Workaround section below.

  3. Execute the scripts during periods of slowness or unresponsiveness and provide the resulting tar.gz file to support.
  4. Select 'Y' for thread dumps. The 'N' option is for a heap dump.
  5. Optional: Run the disk speed tests with the same scripts, which are covered in more detail in our Testing Disk Access Speed KB.
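
As a rough sketch of step 2, extract the downloaded archive and run its collection script from the command line. The archive and script names below are placeholders - use the actual names from the downloads page for the version you fetch:

# Names are illustrative; substitute the real archive and script names
tar -xzf atlassian-support.tar.gz
cd atlassian-support
./atlassian-support.sh    # answer 'Y' when prompted for thread dumps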

To analyze them:

  • Look in the resulting CPU usage files to identify which threads are consistently using a lot of CPU time.

  • Use a tool such as TDA and check the native ID to correlate it with the thread dump.
  • Review the stack traces and search for known bugs on jira.atlassian.com.

Workaround

How to run the tools manually (requires the Java Development Kit, JDK)

  • Identify the JIRA application PID by using a command such as:
    Linux/Unix:

    JIRA_PID=`ps aux | grep -i jira | grep -i java | grep -v grep | awk  -F '[ ]*' '{print $2}'`;

    Windows:

    $JIRA_PID = (Get-Process -name "tomcat8.exe.x64").id

    Alternatively, you can use the following command to identify the process ID of JIRA when it is currently running:


    JIRA_PID=$(cat <JIRA_INSTALL>/work/catalina.pid)
  • Then run the following command - it generates 6 snapshots of CPU usage and thread dumps at 10-second intervals over one minute.

    Linux (tested with Ubuntu / Debian):

    for i in $(seq 6); do top -b -H -p $JIRA_PID -n 1 > jira_cpu_usage.`hostname`.`date +%s`.txt; jstack -l $JIRA_PID > jira_threads.`hostname`.`date +%s`.txt; sleep 10; done

    Unix (tested with Solaris):

    for i in $(seq 6); do prstat -L -p $JIRA_PID -n 500 1 1 > jira_cpu_usage.`hostname`.`date +%Y-%m-%d_%H%M%S`.txt; jstack -l $JIRA_PID > jira_threads.`hostname`.`date +%Y-%m-%d_%H%M%S`.txt; sleep 10; done

    Windows (tested with PowerShell 5.1):

    1..6|foreach{jstack -l $JIRA_PID|Out-File -FilePath "app_threads.$(Get-Date -uformat %s).txt";sleep 10}

    Note: Windows has no equivalent to the top command, so we can't get the thread CPU usage programmatically. For thread CPU usage, check Use Windows Process Explorer to troubleshoot Jira server Performance.

  • Look in the resulting CPU usage files to identify which threads are consistently using a lot of CPU time.

  • Use a tool such as TDA and check the native ID to correlate it with the thread dump.
  • Alternatively, take the PIDs of the top 10 threads using the most CPU time and convert them to hexadecimal, e.g. 11159 becomes 0x2b97.
  • Search for those hex values in the thread dumps to figure out which threads are using up all the CPU, as sketched below.
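
A minimal sketch of that hex lookup, assuming the files were produced by the Linux loop above:

# Take a CPU-heavy native thread ID from the top output (e.g. 11159),
# convert it to hexadecimal, then find the matching "nid=" entry in the dumps.
printf 'nid=0x%x\n' 11159                  # prints nid=0x2b97
grep -n 'nid=0x2b97' jira_threads.*.txt    # locate that thread's stack trace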


Parsing Thread dumps from the catalina.out file (kill -3)

If you capture thread dumps by issuing the kill -3 command, the JVM writes them to stdout (standard output), which is usually the catalina.out file located in the logs folder of Jira's installation directory.
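
For reference, a dump can be triggered this way by sending SIGQUIT to the Jira JVM (assuming $JIRA_PID holds the process ID identified earlier in this article):

# SIGQUIT (-3) makes the JVM print a full thread dump to stdout, which Tomcat
# normally redirects to catalina.out; Jira keeps running.
kill -3 "$JIRA_PID"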

To extract the thread dumps from the file, you can run the example command below or a similar one (change the path at the end to the location of your copy of catalina.out):

awk '/^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}$/{n++;td=1;lastLine=$0;outFile=("jira_threads." sprintf("%06d", n) ".txt")}; {if (td) {print $0 >> outFile; close(outFile)}}; /object space/{if (lastLine ~ /PSPermGen/) {td=0}}' tomcat-logs/catalina.out

It will generate a .txt file for every thread dump identified in the logs:

$ ls -1 | head -20
jira_threads.000001.txt
jira_threads.000002.txt
jira_threads.000003.txt
jira_threads.000004.txt
jira_threads.000005.txt
jira_threads.000006.txt
jira_threads.000007.txt
jira_threads.000008.txt
jira_threads.000009.txt
jira_threads.000010.txt

If your catalina.out is too large, you should set up log rotation for it (refer to Configure log rotation for the catalina log in Jira Server for more on this).

You may also need to split the file before running the awk parsing command above so it doesn't take too long, for example as in the sketch below.
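
A rough sketch of such a split by line count (the chunk size is arbitrary - pick one large enough to keep most thread dumps within a single piece):

# Split catalina.out into 2-million-line chunks before parsing; a dump that
# straddles a chunk boundary will be cut in two, so re-check the boundaries.
split -l 2000000 tomcat-logs/catalina.out catalina_part_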


Steps for Atlassian Docker containers

/opt/atlassian/support/thread-dumps.sh can be run via docker exec to easily trigger the collection of thread dumps from the containerized application. For example:

docker exec my_container /opt/atlassian/support/thread-dumps.sh

By default, this script collects 10 thread dumps at 5-second intervals. This can be overridden by passing custom values for the count and interval, using -c / --count and -i / --interval respectively. For example, to collect 20 thread dumps at 3-second intervals:

docker exec my_container /opt/atlassian/support/thread-dumps.sh --count 20 --interval 3

If you're running the Docker container in a Kubernetes environment, you can execute the command as below:

kubectl exec -it jira-1 -n jira -- bash -c "/opt/atlassian/support/thread-dumps.sh --count 20 --interval 3"

Replace jira-1 with your pod name, and jira (the -n argument) with the namespace where the Jira pods are running.

Thread dumps will be written to $APP_HOME/thread_dumps/<date>.

Note: by default, this script also captures output from top running in thread mode. This can be disabled by passing -n / --no-top.
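
Once the script finishes, the dumps can be copied from the container to the host, for example (the home path below is the usual default for the official image - verify it for your deployment):

# Docker: copy the thread_dumps directory out of the container
docker cp my_container:/var/atlassian/application-data/jira/thread_dumps ./thread_dumps

# Kubernetes: copy from the pod instead (namespace/pod syntax)
kubectl cp jira/jira-1:/var/atlassian/application-data/jira/thread_dumps ./thread_dumps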

The Troubleshooting section on https://hub.docker.com/r/atlassian/jira-software has additional information. 

