Full CodeCache causes Jira to crash or perform slowly
Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.
Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Summary
The instance may crash, become slow, or become unresponsive.
The following entries can be found in the atlassian-jira.log:
Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
Java HotSpot(TM) 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=
Note: An instance might still be impacted by this issue even when the above error is not present in the logs.
The following appears in thread dumps, where the compiler threads are in the RUNNABLE state and consuming CPU:
"C2 CompilerThread0" #6 daemon prio=5 tid=0x000000001ab9c000 nid=0xcfc runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
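To confirm whether the code cache is under pressure, you can inspect its usage directly. A minimal sketch, assuming a Linux host and a JDK whose jcmd supports the Compiler.codecache diagnostic command (available in newer JDKs; on older JDKs you can instead start the JVM with -XX:+PrintCodeCache to report usage on shutdown):
# Print code cache usage for a running JVM (replace <jira_pid> with the Jira process ID)
jcmd <jira_pid> Compiler.codecache
# Alternative: add this JVM flag to print code cache statistics when the JVM exits
-XX:+PrintCodeCache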
Environment
All versions of Jira Server and Data Center
Cause
This issue can occasionally happen when the Java CodeCache becomes full. The code cache holds natively compiled code produced by the JIT compiler; once it fills up, the JVM disables further compilation, so methods that have not yet been compiled keep running in the much slower interpreted mode.
Resolution
Starting from Jira 7.13, the following properties are included in the setenv.sh / setenv.bat file by default, with the reserved code cache size set to 512m. This should resolve code cache problems in most environments. If you are still getting this error, try increasing the size further.
Add the following arguments to the Java startup options by following the instructions on Setting Properties and Options on Startup:
-XX:ReservedCodeCacheSize=512m
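For illustration, on a typical Linux installation this flag ends up in <JIRA_INSTALL>/bin/setenv.sh (setenv.bat on Windows). A minimal sketch, assuming the default JVM_SUPPORT_RECOMMENDED_ARGS variable is present and currently empty:
# <JIRA_INSTALL>/bin/setenv.sh
# Reserve 512m for the JIT code cache
JVM_SUPPORT_RECOMMENDED_ARGS="-XX:ReservedCodeCacheSize=512m"
If the variable already carries other flags, append the new one separated by a space.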
Clear the catalina.out log file under the <JIRA_INSTALL>/logs directory, because the health check may otherwise respond to outdated messages in catalina.out.
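One way to do this on Linux, assuming the default log location, is to truncate the file in place rather than delete it, so any process holding the file open keeps a valid handle:
# Truncate catalina.out to zero bytes without removing it
truncate -s 0 <JIRA_INSTALL>/logs/catalina.out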
Restart the application for the new settings to take effect.
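Assuming a default Linux installation started with the bundled scripts (service managers and paths vary):
# Restart Jira so the new JVM options are picked up
<JIRA_INSTALL>/bin/stop-jira.sh
<JIRA_INSTALL>/bin/start-jira.sh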
If you still get full CodeCache messages after the steps above, you may increase the reserved size even further; some large Jira instances require more than 512m.
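For example, to double the reservation (a value to tune per instance, not a universal recommendation):
-XX:ReservedCodeCacheSize=1024m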
The only consequence we have observed from a larger code cache is higher system memory usage. If the error still occurs at 1024m, please contact Atlassian Support.