Health Check: Open Files Limit
Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.
Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Problem
The Open Files Limit Health Check queries the operating system for the current and maximum number of open file descriptors for the running JIRA process. It then calculates the percentage of the maximum that is in use and, based on that percentage, raises either a warning or a major alert.
File descriptors are used by *nix (Unix-like) operating systems to handle access to files. There are limits put in place by the operating system, and if processes try to exceed those limits they will be refused access to additional descriptors. This will impact JIRA's operation.
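As an illustration (not part of the health check itself), the same values can be inspected manually on Linux. This is a minimal sketch that assumes the JIRA JVM runs as a user named jira; adjust the user to match your environment and run it as root or as that user:

# Find the PID of the JIRA JVM (the "jira" user is an assumption)
JIRA_PID=$(pgrep -u jira -f java | head -n 1)

# Maximum open file descriptors allowed for that process
grep "Max open files" /proc/$JIRA_PID/limits

# Number of file descriptors the process currently has open
ls /proc/$JIRA_PID/fd | wc -l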
Impact
If the running JIRA process attempts to exceed the maximum allowable number of file descriptors, it will fail critically and major operations will be unable to continue. The only fix for this is to restart the JIRA instance.
Understanding the Results
| Results | What this means |
| --- | --- |
| There are '<num open files>' open files out of the maximum '<max open files>'. This is within an acceptable limit. | The JIRA process is using less than 70% of the maximum open file descriptors. |
| There are '<num open files>' open files out of the maximum '<max open files>'. This is getting close to the limit and will cause critical failures if it exceeds the limit. | The JIRA process is using 70% or higher of the maximum open file descriptors. |
| There are '<num open files>' open files out of the maximum '<max open files>'. This is critically close to the limit and should be fixed immediately. | The JIRA instance is using 90% or higher of the maximum open file descriptors. |
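For example, if the maximum is 16384 open file descriptors, the warning threshold (70%) is reached at roughly 11,469 open files and the critical threshold (90%) at roughly 14,746.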
Resolution
Increasing the ulimit for the current JIRA application session will temporarily resolve the issue:
These changes will only work on installations that use the built-in init.d script to start Jira. Installations that use a custom-built systemd service (common on recent Linux distributions) will need to apply the change directly in that systemd service configuration, in the form of:
[Service]
LimitNOFILE=20000
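For systemd-managed installations, one way to apply this is with a drop-in override. This is a sketch only; the unit name jira.service is an assumption and may differ in your environment:

# Open an editor for a drop-in override of the unit (the unit name is an assumption)
sudo systemctl edit jira.service

# In the editor, add:
#   [Service]
#   LimitNOFILE=20000

# Reload systemd and restart the service so the new limit takes effect
sudo systemctl daemon-reload
sudo systemctl restart jira.service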
1. If the $JIRA_HOME/caches/indexes folder is mounted over NFS, move it to a local mount (i.e. storage on the same server as the JIRA instance). NFS is not supported, as per our JIRA application Supported Platforms, and will cause this problem to occur at a much higher frequency.
2. Stop the JIRA application.
3. Edit $JIRA_INSTALL/bin/setenv.sh to include the following at the top of the file:
/usr/bin/ulimit -n 16384
This will set that value each time JIRA applications are started; however, it will need to be manually reapplied when upgrading JIRA applications.
4. Start your JIRA application.
The changes can be verified by checking /proc/<pid>/limits, where <pid> is the application process ID.
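For example (a sketch; the jira user and the pgrep pattern are assumptions that may need adjusting for your environment):

# Confirm that the "Max open files" row now reflects the new value
cat /proc/$(pgrep -u jira -f java | head -n 1)/limits | grep "Max open files"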
To permanently apply the resolution, the limit needs to be configured on a per-OS basis; consult your operating system documentation on how to do so. If you are using Ubuntu, we have instructions in Too many open files error in Jira server.
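As a sketch of what a permanent, OS-level configuration might look like on distributions that use PAM limits (the jira username is an assumption; note that systemd-managed services do not read /etc/security/limits.conf, which is why the LimitNOFILE setting above is used for those installations):

# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
# <domain>  <type>  <item>   <value>
jira        soft    nofile   16384
jira        hard    nofile   16384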