Too many open files error


Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Problem

The following appears in the atlassian-confluence.log:

Caused by: net.sf.hibernate.HibernateException: I/O errors during LOB access
	at org.springframework.orm.hibernate.support.AbstractLobType.nullSafeSet(AbstractLobType.java:163)
	at net.sf.hibernate.type.CustomType.nullSafeSet(CustomType.java:118)
	at net.sf.hibernate.persister.EntityPersister.dehydrate(EntityPersister.java:387)
	at net.sf.hibernate.persister.EntityPersister.insert(EntityPersister.java:460)
	at net.sf.hibernate.persister.EntityPersister.insert(EntityPersister.java:436)
	at net.sf.hibernate.impl.ScheduledInsertion.execute(ScheduledInsertion.java:37)
	at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2464)
	at net.sf.hibernate.impl.SessionImpl.executeAll(SessionImpl.java:2450)
	at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2407)
	at net.sf.hibernate.impl.SessionImpl.flush(SessionImpl.java:2276)
	at com.atlassian.confluence.pages.persistence.dao.hibernate.HibernateAttachmentDataDao.save(HibernateAttachmentDataDao.java:63)
	... 17 more
Caused by: java.io.IOException: Too many open files

Cause

Lucene, the indexing system used by Confluence, does not support NFS mounts. Using an NFS mount is known to cause this behaviour; further information can be found in the Lucene documentation.
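If you suspect NFS is involved, one quick check is the filesystem type of the Confluence home directory, where the Lucene index lives. The path below is the default home directory on Linux installs and is an assumption; substitute your own path.

    # Print the filesystem type of the Confluence home directory.
    # An NFS mount shows "nfs" or "nfs4" in the Type column.
    df -PT /var/atlassian/application-data/confluence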

Confluence has too many open files and has reached the maximum limit set in the system. UNIX systems limit the number of files that any one process can have open concurrently. The default on most distributions is only 1024 files, which is too small for some Confluence configurations. When that limit is hit, the above exception is thrown and Confluence can fail to function because it cannot open the files required to complete its current operation.
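To gauge how close a running instance is to the limit, you can count the file descriptors held by the Confluence JVM. This is a minimal sketch: the pgrep pattern is an assumption and may need adjusting to match how your Confluence process is launched.

    # Count open file descriptors for the Confluence JVM
    # (run as root or as the user that owns the process).
    CONF_PID=$(pgrep -f confluence | head -n 1)
    ls /proc/"$CONF_PID"/fd | wc -l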

Resolution

To resolve this, you will need to increase the maximum open file limit:

  1. Shut down Confluence
  2. Run the following command in your terminal to check the current open file limit on your system:

    ulimit -aS | grep open
  3. To raise the open file limit, add the following line to the <confluence-install>/bin/setenv.sh file. You can adjust the number to suit your application's needs.

    ulimit -n 32768

    All limit settings are set per login session: they are not global, and they are not permanent; they exist only for the duration of the session. Adding this line sets the value each time Confluence is started, but it will need to be migrated manually when upgrading Confluence (a sketch of the edited file follows this list). Please see below for a permanent resolution.

  4. After that, restart Confluence for the modification to take effect.
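As a rough illustration, the top of setenv.sh could look like the sketch below after the edit. The surrounding contents vary between Confluence versions, so treat this as an outline rather than the exact file.

    # <confluence-install>/bin/setenv.sh
    # Raise the per-process open file limit before the JVM starts.
    ulimit -n 32768

    # ... the rest of the original setenv.sh (JVM options, etc.) follows ...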

Resolution based on the limits.conf file

The steps below are suggested as a permanent solution and are based on Too many open files error in Jira server.

  1. As root, edit the /etc/security/limits.conf file and add the entries below. The value in the first column is the OS user that runs the Confluence application; in this example, the user is confluence.

    confluence      soft    nofile  32768
    confluence      hard    nofile  32768

     Note that the value 32768 is recommended for large instances and may vary depending on your installation.

  2. For Debian-based Linux distributions (such as Ubuntu), as root, edit /etc/pam.d/common-session and add the entry below.

    session required pam_limits.so
  3. Reboot the server.

  4. To ensure the new value is being used by the application, take a Support Zip and search for the max-file-descriptor attribute in the application-properties/application.xml file. A quick command-line check is also shown after this list.

    <max-file-descriptor>32,768</max-file-descriptor>
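You can also verify the new limit from the shell before taking a Support Zip. This assumes the confluence account has a login shell; note that the limits.conf entries apply only to fresh login sessions.

    # Start a fresh login session as the confluence user and print its
    # effective open file limit. Expect 32768 after the change and a reboot.
    su - confluence -c 'ulimit -n'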

Resolution when Confluence is installed as a systemd service

When Confluence is installed as a systemd service, you may need to update the service unit file (the systemd service configuration) as described below.
If you are sure Confluence is running as a systemd service, go straight to Step 2.

  1. Check whether Confluence is configured as a systemd service. This identifies both whether that is the case and the name of the service, in case you are not sure.

    grep -i confluence /etc/systemd/system/*.service /lib/systemd/system/*.service

    In this example, the name of our service is confluence.service, which is the name of the unit file itself and is located in the standard folder /etc/systemd/system.


  2. Edit the service unit file (/etc/systemd/system/confluence.service in our example) and add the following line.

    LimitNOFILE=32768

    Note that the value 32768 is recommended for large instances and may vary depending on your installation.

  3. Execute the following command to reload the modified systemd unit files. Unit files under /etc/systemd/system must be reloaded whenever they are changed.

    systemctl daemon-reload

  4. Reboot the server.

  5. To ensure the new value is being used by the application, take a Support Zip and search for the max-file-descriptor attribute in the application-properties/application.xml file. A quick command-line check is also shown after this list.

    <max-file-descriptor>32,768</max-file-descriptor>
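You can also ask systemd directly whether the new limit is in effect. The service name confluence.service follows the example above; substitute yours if it differs.

    # Show the limit systemd applies to the service.
    systemctl show confluence.service --property=LimitNOFILE

    # Inspect the limits of the running JVM itself.
    MAIN_PID=$(systemctl show confluence.service --property=MainPID --value)
    grep 'Max open files' /proc/"$MAIN_PID"/limits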



