JIRA is Unresponsive, Threads Stuck
Platform Notice: Server, Data Center, and Cloud By Request - This article was written for the Atlassian server and data center platforms but may also be useful for Atlassian Cloud customers. If completing instructions in this article would help you, please contact Atlassian Support and mention it.
Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Problem
Any request made to JIRA spins indefinitely while waiting to load. After a while, users may receive a timeout message from Tomcat or a proxy.
The application logs do not contain any relevant information. The access logs may indicate 500 errors.
Diagnosis
Diagnostic Steps
- Running netstat -t indicates that all of Tomcat's connections are in the SYN_RECV state.
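The connection states can be summarized with a pipeline such as the one below. This is a sketch that assumes Tomcat listens on port 8080; the sample input is fabricated to show the expected shape, and in practice you would feed it the real netstat output.

```shell
# In practice, run against live output:
#   netstat -tan | grep ':8080' | awk '{print $6}' | sort | uniq -c | sort -rn
# The sample below is fabricated for illustration only.
sample='tcp  0  0 10.0.0.5:8080 10.0.0.9:51000 SYN_RECV
tcp  0  0 10.0.0.5:8080 10.0.0.9:51001 SYN_RECV
tcp  0  0 10.0.0.5:8080 10.0.0.9:51002 ESTABLISHED'

# Field 6 of netstat -tan output is the TCP state; count occurrences of each.
printf '%s\n' "$sample" | awk '{print $6}' | sort | uniq -c | sort -rn
```

If SYN_RECV dominates the counts, the symptoms described in this article likely apply.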
Cause
When all connections are stuck in SYN_RECV,
it indicates that the server has received the requests but is unable to respond. This is a sign that the server is overloaded, or simply has no spare threads. Possibilities include:
- Denial of service attack (DOS)
- Something is holding onto all of Tomcat's threads
Resolution
If your server is the victim of a DoS attack, check with your network team or ISP. You may want to implement aggressive timeouts or rate-limit incoming connections.
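One place to apply aggressive timeouts is the Tomcat HTTP connector in conf/server.xml. The fragment below is a sketch only; the attribute values are illustrative, not recommendations, and should be tuned for your environment:

```xml
<!-- conf/server.xml (illustrative values):
     connectionTimeout drops connections that send no request within 20s,
     maxThreads caps the request-processing thread pool, and
     acceptCount limits the backlog of queued connections when all
     threads are busy. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="200"
           acceptCount="100"
           redirectPort="8443" />
```

Restart Tomcat after changing these values for them to take effect.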
Otherwise, you'll want to examine thread dumps to determine what is holding onto all of Tomcat's threads:
- Generate at least 5 thread dumps, waiting 10 seconds in between: Troubleshooting Jira performance with Thread dumps
- Open a Support ticket with Atlassian for help on examining thread dumps.
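The dump-collection step above can be scripted. The following is a minimal sketch using jstack; take_dumps is a hypothetical helper, and you would pass it the Jira JVM's process id (find it with jps -l or ps aux | grep java):

```shell
# Hypothetical helper: capture N thread dumps, DELAY seconds apart.
take_dumps() {
  pid=$1
  n=${2:-5}      # number of dumps, default 5
  delay=${3:-10} # seconds between dumps, default 10
  i=1
  while [ "$i" -le "$n" ]; do
    if [ "$i" -gt 1 ]; then sleep "$delay"; fi
    # Write each dump to its own numbered file for later comparison.
    jstack "$pid" > "jira_threaddump_${pid}_${i}.txt"
    i=$((i + 1))
  done
}

# Usage (pid is hypothetical): take_dumps <jira_pid> 5 10
```

Comparing the numbered dump files side by side shows which threads stay blocked across the whole interval, which is what indicates where Tomcat's threads are being held.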