Troubleshoot Jira Server performance with thread dumps

Platform Notice: Server and Data Center Only. This article only applies to Atlassian products on the server and data center platforms.

Purpose

This page provides a way of collecting thread dumps (a breakdown of what every thread in a Java process is doing) and the output of top on Linux/Unix operating systems (which shows the resources each native OS thread is consuming). This breakdown can also be collected with a profiler such as jProfiler, as described in Use jProfiler to analyse Jira application performance - in this example we're using native (free) tools to collect the information.

This requires jstack, which ships with the Java Development Kit (JDK). Another option for a high-level overview is Using jvmtop to analyze JIRA performance.
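
For example, a quick way to confirm jstack is available on the host before you start (a generic shell check, not an Atlassian tool):

    # jstack is part of the JDK, not the JRE - confirm it is on the PATH before collecting dumps
    command -v jstack || echo "jstack not found - install a JDK or use <JDK_HOME>/bin/jstack"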


Symptoms

The Jira application is responding slowly, and you need more information about which part of it is causing the slowness.

Solution

If you are running Jira in a container, skip to the Steps for Atlassian Docker containers below.

  1. Optional: Enable Thread diagnostics in Administration > System > Troubleshooting and Support Tools. This will provide additional details on the Tomcat threads (namely the URI and username) and will allow for faster troubleshooting.

    1. This is preferred, but if your system is already in a bad state, skip step 1 and proceed directly to generating the thread dumps.
    2. In earlier versions, you might not see the Thread diagnostics tab. In this case, install the Thready plugin instead.
  2. Download and install the scripts located in https://bitbucket.org/atlassianlabs/atlassian-support/ or run the commands in the Workaround section below (a minimal checkout sketch follows this list).

  3. Execute the scripts during periods of slowness or unresponsiveness and provide the resulting tar.gz file to support.
  4. Select 'Y' when prompted for thread dumps.
    1. Select 'N' when prompted for a heap dump.
  5. Optional: Run the disk speed tests with the same scripts, which are covered in more detail in our Testing Disk Access Speed KB.
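
A minimal way to fetch the scripts mentioned in step 2, assuming the standard Bitbucket clone URL (check the repository page for the exact script name and options):

    git clone https://bitbucket.org/atlassianlabs/atlassian-support.git
    cd atlassian-support
    # Follow the repository README to invoke the collection script during the period of slowness.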

To analyze them:

  • Look in the resulting CPU usage files to identify which threads are consistently using a lot of CPU time.

  • Use a tool such as TDA (Thread Dump Analyzer) and check the native thread ID to match high-CPU threads from the top output to entries in the thread dump.
  • Review the stack traces and search for known bugs on jira.atlassian.com.
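
As a rough first pass before loading the dumps into TDA, standard text tools can highlight lock contention; the filenames below are illustrative:

    # Count BLOCKED threads in each dump - a consistently high count suggests lock contention
    grep -c 'java.lang.Thread.State: BLOCKED' jira_threads.*.txt

    # Show which monitors the blocked threads are waiting on
    grep -A 2 'java.lang.Thread.State: BLOCKED' jira_threads.*.txt | grep 'waiting to lock'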

Workaround

How to run the tools manually. This requires the Java Development Kit (JDK).

  • Identify the JIRA application PID by using a command such as:
    Linux/Unix:

    JIRA_PID=`ps aux | grep -i jira | grep -i java | awk  -F '[ ]*' '{print $2}'`;

    Windows:

    $JIRA_PID = (Get-Process -name "tomcat8.exe.x64").id

    Alternatively, you can use the following command to identify the process ID of JIRA when it is currently running:


    JIRA_PID=$(cat <JIRA_INSTALL>/work/catalina.pid)
  • Then run the following command - it generates 6 snapshots of CPU usage and thread dumps at 10-second intervals over one minute.

    Linux (tested with Ubuntu / Debian):

    for i in $(seq 6); do top -b -H -p $JIRA_PID -n 1 > jira_cpu_usage.`hostname`.`date +%s`.txt; jstack -l $JIRA_PID > jira_threads.`hostname`.`date +%s`.txt; sleep 10; done

    Unix (tested with Solaris):

    for i in $(seq 6); do prstat -L -p $JIRA_PID -n 500 1 1 > jira_cpu_usage.`hostname`.`date +%Y-%m-%d_%H%M%S`.txt; jstack -l $JIRA_PID > jira_threads.`hostname`.`date +%Y-%m-%d_%H%M%S`.txt; sleep 10; done

    Windows (tested with PowerShell 5.1):

    1..6|foreach{jstack -l $JIRA_PID|Out-File -FilePath "app_threads.$(Get-Date -uformat %s).txt";sleep 10}

    Note: Windows has no equivalent to the top command, so we can't get the thread CPU usage programmatically. For thread CPU usage, check Use Windows Process Explorer to troubleshoot Jira server Performance.

  • Look in the resulting CPU usage files to identify which threads are consistently using a lot of CPU time.

  • Use a tool such as TDA (Thread Dump Analyzer) and check the native thread ID to match high-CPU threads from the top output to entries in the thread dump.
  • Alternatively, take the PIDs of the top 10 threads that are using the most CPU time and convert them to hexadecimal, e.g. 11159 becomes 0x2b97.
  • Search for those hex values in the thread dumps (they appear in the nid= field of each thread header) to identify which threads are consuming the CPU.
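
On Linux, the conversion and search can be done with standard shell tools; the thread ID and filenames below are illustrative:

    # Convert a high-CPU thread ID from the top output to hexadecimal
    printf '0x%x\n' 11159        # prints 0x2b97

    # The hex value appears in the nid= field of each thread header in the dump
    grep -i 'nid=0x2b97' jira_threads.*.txt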

Steps for Atlassian Docker containers

/opt/atlassian/support/thread-dumps.sh can be run via docker exec to easily trigger the collection of thread dumps from the containerized application. For example:

docker exec my_container /opt/atlassian/support/thread-dumps.sh

By default this script will collect 10 thread dumps at 5-second intervals. This can be overridden by passing custom values with -c / --count and -i / --interval respectively. For example, to collect 20 thread dumps at 3-second intervals:

docker exec my_container /opt/atlassian/support/thread-dumps.sh --count 20 --interval 3

If you're running the Docker container in a Kubernetes environment, you can execute the command as below:

kubectl exec -it jira-1 -n jira -- bash -c "/opt/atlassian/support/thread-dumps.sh --count 20 --interval 3"

Replace jira-1 with the name of your pod, and jira (after -n) with the namespace where the Jira pods are running.

Thread dumps will be written to $APP_HOME/thread_dumps/<date>.

Note: By default this script will also capture output from top run in 'Thread-mode'. This can be disabled by passing -n / --no-top.
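
To copy the collected dumps out of the container for analysis, docker cp can be used; the path below assumes the default JIRA_HOME of the official image:

docker cp my_container:/var/atlassian/application-data/jira/thread_dumps ./thread_dumps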

The Troubleshooting section on https://hub.docker.com/r/atlassian/jira-software has additional information. 

