Too many open files error in Jira server

Symptoms

A JIRA application experiences a general loss of functionality in several areas.

The following error appears in the atlassian-jira.log:

java.io.IOException: java.io.IOException: Too many open files
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:451)
	at java.lang.Runtime.exec(Runtime.java:591)
	at java.lang.Runtime.exec(Runtime.java:429)
	at java.lang.Runtime.exec(Runtime.java:326)
	at org.netbeans.lib.cvsclient.connection.LocalConnection.openConnection(LocalConnection.java:57)
	at org.netbeans.lib.cvsclient.connection.LocalConnection.open(LocalConnection.java:110)
	at com.atlassian.jira.vcs.cvsimpl.CvsRepositoryUtilImpl.openConnectionToRepository(CvsRepositoryUtilImpl.java:443)
	at com.atlassian.jira.vcs.cvsimpl.CvsRepositoryUtilImpl.updateCvs(CvsRepositoryUtilImpl.java:244)
	at com.atlassian.jira.vcs.cvsimpl.CvsRepository.updateCvs(CvsRepository.java:241)
	at com.atlassian.jira.vcs.cvsimpl.CvsRepository.updateRepository(CvsRepository.java:298)
	at com.atlassian.jira.vcs.DefaultRepositoryManager.updateRepository(DefaultRepositoryManager.java:657)
	at com.atlassian.jira.vcs.DefaultRepositoryManager.updateRepositories(DefaultRepositoryManager.java:608)
	at com.atlassian.jira.service.services.vcs.VcsService.run(VcsService.java:55)
	at com.atlassian.jira.service.JiraServiceContainerImpl.run(JiraServiceContainerImpl.java:67)
	at com.atlassian.jira.service.ServiceRunner.execute(ServiceRunner.java:61)
	at org.quartz.core.JobRunShell.run(JobRunShell.java:191)
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:516)

To identify the current open file descriptor limit, run:

ulimit -aS | grep open

and look for the row that resembles:

open files                      (-n) 2560    
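The soft and hard limits can also be checked separately. Note that an already-running process keeps the limits it started with, which may differ from your shell's; a quick sketch (the /proc path is Linux-specific, and $$ stands in for the JIRA process ID):

```shell
# Soft limit: enforced now. Hard limit: the ceiling an unprivileged
# user may raise the soft limit to.
ulimit -Sn
ulimit -Hn
# A running process keeps the limits it started with; on Linux, read
# them directly from /proc ($$ is this shell -- substitute the JIRA PID):
grep 'Max open files' /proc/$$/limits
```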

Cause

Lucene, the indexing library used by JIRA applications, does not support NFS mounts. Using an NFS mount is known to cause this behaviour; further information can be found in the Lucene documentation.

UNIX systems have a limit on the number of files that can be concurrently open by any one process. The default for most distributions is only 1024 files, which is too small for certain JIRA application configurations. When that limit is hit, the above exception is thrown and the JIRA application can fail to function, as it cannot open the files required to complete its current operation.

This usually happens in JIRA application instances with particularly large heap space allocations. On each search, a lock file is placed on the file system and deleted; however, the handle is not released by Java until the garbage collector runs a Full Garbage Collection, removes these dereferenced objects, and clears each file handle through the object's finalize() method. If there has not been a collection for some time, this can trigger the error.
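To see how close a process is to its limit, count the descriptors it currently holds. A Linux-only sketch using /proc ($$ is the current shell; substitute the JIRA JVM's PID):

```shell
# Number of file descriptors this process currently has open ($$ = this
# shell; use the JIRA JVM's PID instead). Compare the count against the
# 'Max open files' value reported in /proc/<pid>/limits.
ls /proc/$$/fd | wc -l
```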

Additionally, the following bugs are reported that are known to cause this behaviour:

  • JRA-29587
  • JRA-35726
  • JRA-39114 (affects JIRA 6.2 and higher)


Resolution

If you are using a JIRA application version affected by the bug in JRA-29587, upgrade to the latest version. If using NFS, migrate to a local storage mount.

For a more permanent solution of increasing the number of open files, see your operating system's manual. For example, Ubuntu users would follow the steps below.

  1. Modify the limits.conf file with the following command:
    sudo vim /etc/security/limits.conf 
  2. Add the following for the user that runs JIRA applications. If you have used the bundled installer, this will be jira.

    limits.conf
    #<domain>      <type>  <item>         <value>
    #
    #*               soft    core            0
    #root            hard    core            100000
    #*               hard    rss             10000
    #@student        hard    nproc           20
    #@faculty        soft    nproc           20
    #@faculty        hard    nproc           50
    #ftp             hard    nproc           0
    #ftp             -       chroot          /ftp
    #@student        -       maxlogins       4
    jira           soft    nofile          16384
    jira           hard    nofile          32768 
  3. Modify the common-session file with the following:

    sudo vim /etc/pam.d/common-session
    (info) common-session is a file only available on Debian/Ubuntu

  4. Add the following line:

    common-session
    # The following changes were made from the JIRA KB (https://confluence.atlassian.com/display/JIRAKB/Loss+of+Functionality+due+to+Too+Many+Open+Files+Error):
    session required pam_limits.so
  5. Restart the JIRA server application.
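To confirm the change took effect, start a fresh session as the user that runs JIRA (the new limit only applies to sessions opened after the edit) and check the soft limit, which should now report the configured value (16384 in the example above):

```shell
# Run in a fresh login session as the JIRA user; prints the soft
# open-file limit that new processes will inherit.
ulimit -n
```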

For exceedingly large instances, we recommend consulting with our partners for scaling JIRA applications. See Is Clustering or Load Balancing JIRA Possible.

These changes will only work on installations that use the built-in init.d script for starting Jira. For installations that use a custom systemd service (the default on recent Linux distributions), the limit must be set directly in that systemd service configuration, in the form:

[Service]
LimitNOFILE=20000
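A sketch of applying this under systemd, assuming the unit is named jira.service (check the actual name with systemctl list-units):

```shell
# Assumption: the service unit is called jira.service -- adjust as needed.
sudo systemctl edit jira.service    # creates an override drop-in; add the
                                    # [Service] / LimitNOFILE lines shown above
sudo systemctl daemon-reload
sudo systemctl restart jira.service
# Verify the limit systemd will apply to the service:
systemctl show jira.service -p LimitNOFILE
```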

Diagnosis

To identify which files are being held open, use the lsof +L1 command, which lists open files with a link count of less than one (that is, files that have been deleted but are still held open). For example:

COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NLINK     NODE NAME
java     2565  dos  534r   REG   8,17    11219     0 57809485 /home/dos/deploy/applinks-jira/temp/jar_cache3983695525155383469.tmp (deleted)
java     2565  dos  536r   REG   8,17    29732     0 57809486 /home/dos/deploy/applinks-jira/temp/jar_cache5041452221772032513.tmp (deleted)
java     2565  dos  537r   REG   8,17   197860     0 57809487 /home/dos/deploy/applinks-jira/temp/jar_cache6047396568660382237.tmp (deleted)
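If lsof is not available, the same deleted-but-still-open files can be found through /proc on Linux ($$ is the current shell; substitute the JIRA PID):

```shell
# List descriptors pointing at deleted files, then count them.
# '|| true' keeps the pipeline succeeding when the count is zero.
ls -l /proc/$$/fd 2>/dev/null | grep '(deleted)' || true
ls -l /proc/$$/fd 2>/dev/null | grep -c '(deleted)' || true
```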

Atlassian Support can investigate further if the resolution does not work. To do so, please provide the following to Atlassian Support:

  1. The file generated by lsof +L1 > open_files.txt.
    (warning) This command must be executed by the JIRA user, or by a user who can view the files. For example, if JIRA is running as root (which is not at all recommended), executing this command as jira will not show the open files.
  2. A heap dump taken at the time of the exception being thrown, as per Generating a Heap Dump.
  3. A JIRA application Support ZIP.
Last modified on Sep 25, 2019
