Jira server: IO exception "Too many open files" then crashes
Platform Notice: Server and Data Center Only - This article only applies to Atlassian products on the server and data center platforms.
Jira runs out of open file descriptors in a Linux environment and eventually fails to function entirely, leading to a crash.
The following plugin is installed on the Jira instance: Exporter - Jira Issues to Excel & CSV, versions before v2.1.5.
Exceptions similar to the following appear in the logs:
2017-05-14 14:43:25,674 http-nio-8080-exec-10 ERROR anonymous 883x135283x1 1lds68b 192.168.0.121,192.168.0.242 /rest/usermanagement/1/search [c.a.p.r.c.error.jersey.ThrowableExceptionMapper] Uncaught exception thrown by REST service: java.io.FileNotFoundException: /opt/atlassian/jira/atlassian-jira/WEB-INF/lib/spring-security-crypto-3.1.0.RELEASE.jar (Too many open files)
java.lang.IllegalStateException: java.io.FileNotFoundException: /opt/atlassian/jira/atlassian-jira/WEB-INF/lib/spring-security-crypto-3.1.0.RELEASE.jar (Too many open files)
    at org.apache.catalina.webresources.AbstractSingleArchiveResourceSet.getArchiveEntry(AbstractSingleArchiveResourceSet.java:97)
    at org.apache.catalina.webresources.AbstractArchiveResourceSet.getResource(AbstractArchiveResourceSet.java:260)
    at org.apache.catalina.webresources.StandardRoot.getResourceInternal(StandardRoot.java:280)
    at org.apache.catalina.webresources.CachedResource.validateResource(CachedResource.java:95)
    at org.apache.catalina.webresources.Cache.getResource(Cache.java:69)
    ... 4 filtered
- OS: Linux / Unix-like environment.
- Exporter - Jira Issues to Excel & CSV, versions before v2.1.5.
Confirm the open files limits for the Jira process and for the OS using the commands below:
cat /proc/<Jira_PID>/limits
cat /proc/sys/fs/file-max
The per-process "Max open files" value should be higher than the default of 1024; for Jira, 8192 or more is recommended.
The OS-wide limit reported by file-max is set at the kernel level and should be much larger; it is typically over 500,000 and can exceed 1 million files.
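As a sketch, the two checks above can be combined into one script. The JIRA_PID variable is an assumption (substitute the real PID of the Jira java process); for a runnable illustration it falls back to the current shell's own PID:

```shell
#!/bin/sh
# Check the per-process and kernel-wide open-file limits.
# JIRA_PID is assumed; substitute the PID of the Jira java process.
# Falls back to this shell's PID purely for illustration.
JIRA_PID=${JIRA_PID:-$$}

# Per-process limit -- the "Max open files" row (soft and hard values):
grep "Max open files" /proc/"${JIRA_PID}"/limits

# Kernel-wide limit, set at the kernel level:
cat /proc/sys/fs/file-max
```

If the "Max open files" soft limit is at or near 1024, the process is running with the distribution default rather than a Jira-appropriate value.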
Check which files are held open by Jira using the command:
lsof -p <Jira_PID> > Jira_open_files.txt
In this case, entries like the following appear repeatedly in the list:
java 14317 xpkjir1p 78r DIR 253,30 4096 7212034 /usr/local/pr/Jira_Software/atlassian/application-data/jira/com.deiser.jira.exporter
java 14317 xpkjir1p 79r DIR 253,30 4096 7212034 /usr/local/pr/Jira_Software/atlassian/application-data/jira/com.deiser.jira.exporter
java 14317 xpkjir1p 80r DIR 253,30 4096 7212034 /usr/local/pr/Jira_Software/atlassian/application-data/jira/com.deiser.jira.exporter
java 14317 xpkjir1p 81r DIR 253,30 4096 7212034 /usr/local/pr/Jira_Software/atlassian/application-data/jira/com.deiser.jira.exporter
The root cause is the older version of the Exporter plugin: it causes Jira to keep a large number of temporary files open, eventually consuming the entire open files allowance for the Jira process.
As a workaround, increase the open files limit for Jira so that it can run longer without crashing.
See this Red Hat article for checking and setting global ulimit values on Red Hat and CentOS systems: https://access.redhat.com/solutions/61334
Restart Jira after applying the open files change.
Upgrade the Exporter plugin to version 2.1.5.
The vendor has fixed this issue in that version, and upgrading should be the permanent solution.
Below are the release notes and steps provided by the plugin vendor:
Version 2.1.5 • Released 2017-07-01 • Supported By DEISER • Paid via Atlassian • Commercial
Resolved a problem that affected only Jira instances installed on CentOS/RedHat: over time, many file handles remained open, impacting the instance.
IMPORTANT! If you have a CentOS/RedHat distribution, we strongly recommend following these instructions to prevent malfunctions:
1) Uninstall Exporter
2) Restart Jira (this closes the previously opened file handles)
3) Install Exporter again, the latest version (2.1.5)
Thank you so much for your patience and reporting. If you have further problems, don't hesitate to contact us at our Service Management: