Health Check: Open Files Limit


Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Problem

The Open Files Limit Health Check queries the operating system for the maximum and current number of open file descriptors for the running JIRA process. It then calculates current usage as a percentage of the maximum and, depending on that percentage, raises either a warning or a major alert.

File descriptors are used by *nix operating systems to handle access to files. The operating system puts limits in place, and a process that tries to exceed them is refused additional descriptors, which disrupts JIRA's operation.
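You can inspect the same numbers the health check uses directly from the operating system. The commands below are a minimal sketch only; they assume the Jira JVM can be found with pgrep -f jira (adjust the pattern to match your installation) and that you run them as the Jira user or root:

# Find the Jira process ID (assumption: the command line matches "jira")
JIRA_PID=$(pgrep -f jira | head -n 1)

# Soft and hard limits on open file descriptors for that process
grep "Max open files" /proc/${JIRA_PID}/limits

# Number of file descriptors the process currently has open
ls /proc/${JIRA_PID}/fd | wc -l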

Impact

If the running JIRA process attempts to exceed the maximum allowable number of file descriptors, it will fail critically and major operations will be unable to continue. The only fix is to restart the JIRA instance.

Understanding the Results

Result: There are '<num open files>' open files out of the maximum '<max open files>'. This is within an acceptable limit.
What this means: The JIRA process is using less than 70% of the maximum open file descriptors.

Result: There are '<num open files>' open files out of the maximum '<max open files>'. This is getting close to the limit and will cause critical failures if it exceeds the limit.
What this means: The JIRA process is using 70% or more of the maximum open file descriptors.

Result: There are '<num open files>' open files out of the maximum '<max open files>'. This is critically close to the limit and should be fixed immediately.
What this means: The JIRA process is using 90% or more of the maximum open file descriptors.
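As a rough sketch of the arithmetic behind these thresholds (not Jira's actual implementation), the percentage can be reproduced from the same /proc data, assuming JIRA_PID is set as in the earlier example:

# Approximate the health check's percentage from /proc
OPEN=$(ls /proc/${JIRA_PID}/fd | wc -l)
MAX=$(awk '/Max open files/ {print $4}' /proc/${JIRA_PID}/limits)   # field 4 is the soft limit
echo "Using ${OPEN} of ${MAX} file descriptors ($(( OPEN * 100 / MAX ))%)"

A result of 70% or more corresponds to the warning row above, and 90% or more to the critical row.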


Resolution

Increasing the ulimit for the current JIRA application session will temporarily resolve the issue: 


These changes only work on installations that use the bundled init.d script to start Jira. For installations that use a custom systemd service (the default on recent Linux distributions), the change must be applied directly in that systemd service configuration, in the form:

[Service]
LimitNOFILE=20000
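If you prefer not to edit the packaged unit file directly, a drop-in override achieves the same result. The commands below are a sketch that assumes the unit is named jira.service; substitute your actual unit name:

sudo systemctl edit jira        # opens an override file; add the [Service] / LimitNOFILE lines shown above
sudo systemctl daemon-reload
sudo systemctl restart jira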


  1. If the $JIRA_HOME/caches/indexes folder is mounted over NFS, move it to a local mount (i.e. storage on the same server as the JIRA instance); the sketch after this list shows one way to check the mount type. NFS is not supported, as per our JIRA application Supported Platforms, and will cause this problem to occur at a much higher frequency.
  2. Stop the JIRA application.
  3. Edit $JIRA_INSTALL/bin/setenv.sh to include the following at the top of the file:

    ulimit -n 16384

    This sets the value each time the JIRA application is started; however, the change will need to be manually reapplied when upgrading JIRA applications.

  4. Start your JIRA application.
  5. The changes can be verified by reading /proc/<pid>/limits, where <pid> is the application process ID (see the sketch below).
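The following sketch covers the checks in steps 1 and 5. It assumes JIRA_HOME points at your Jira home directory and that the Jira JVM can be found with pgrep -f jira; adjust both for your installation:

# Step 1: the indexes folder should not be on an nfs/nfs4 filesystem
df -T "${JIRA_HOME}/caches/indexes"

# Step 5: the soft limit should now show the raised value (16384 in this example)
JIRA_PID=$(pgrep -f jira | head -n 1)
grep "Max open files" /proc/${JIRA_PID}/limits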

To apply the resolution permanently, the limit needs to be configured per operating system; consult your operating system's documentation on how to do so. If you are using Ubuntu, we have instructions in Too many open files error in Jira server.
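One common mechanism on Linux is pam_limits via /etc/security/limits.conf. The entries below are a sketch only; they assume Jira runs as a dedicated user named jira, and note that systemd-managed services typically ignore these PAM limits and need LimitNOFILE in the unit file instead (see above):

# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
# Assumes Jira runs as the "jira" user
jira  soft  nofile  16384
jira  hard  nofile  16384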

