Re-indexing stuck at 0% after Jira Data Center upgrade

Platform Notice: Data Center - This article applies to Atlassian products on the Data Center platform.

Note that this knowledge base article was created for the Data Center version of the product. Data Center knowledge base articles for non-Data Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

The re-indexing process can get stuck at 0% indefinitely. This can happen after a Jira upgrade between major versions, where the index needs to be recreated.

Environment

Jira 9.x, after upgrading from Jira 8.x.

Diagnosis

  • In the atlassian-jira.log files, indexing starts but remains stuck at 0%. No progress is made after these lines:

    2023-08-24 04:25:07,175-0400 JiraTaskExecutionThread-1 INFO anonymous     [c.a.j.index.request.DefaultReindexRequestManager] Re-indexing is 0% complete. Current index: Issue
    2023-08-24 04:25:07,176-0400 JiraTaskExecutionThread-1 INFO anonymous     [c.a.j.issue.index.DefaultIndexManager] ReindexAll in foreground: {indexIssues=true, indexChangeHistory=true, indexComments=true, indexWorklogs=true}
    ...
  • The following error is visible after re-indexing has been started:

    2023-08-24 12:20:44,658-0400 JiraTaskExecutionThread-1 ERROR admin 740x201x1 oexxtc xxx.xxx.xxx.xxx,yyy.yyy.yyy.yyy /secure/admin/IndexReIndex!reindex.jspa [c.a.j.issue.index.DefaultIndexManager] Failed to acquire the write lock but no threads are holding the lock
  • Thread dumps show that the thread(s) performing indexing operations are in the RUNNABLE state but stuck accessing files on disk (a standalone probe of the same call path is sketched after this list):

    12:22:27 - Caesium-1-3
    State:RUNNABLE
    CPU usage:0.00%
    
    Running for: 0:00.43
    
    Waiting for
    This thread is not waiting for notification on any lock
    
    Locks held
    This thread holds [0x630dbf730, 0x62b403e30, 0x62b403e30, 0x608ae20b0, 0x608ae6aa8]
    
    Stack trace
    "Caesium-1-3" #625 daemon prio=5 os_prio=0 cpu=444.39ms elapsed=280.37s tid=0x00002b0ad93cb800 nid=0x7820 runnable  [0x00002b0ad677e000]
       java.lang.Thread.State: RUNNABLE
    	at sun.nio.fs.UnixNativeDispatcher.stat0(java.base@11.0.16/Native Method)
    	at sun.nio.fs.UnixNativeDispatcher.stat(java.base@11.0.16/UnixNativeDispatcher.java:301)
    	at sun.nio.fs.UnixFileAttributes.get(java.base@11.0.16/UnixFileAttributes.java:70)
    	at sun.nio.fs.UnixFileStore.devFor(java.base@11.0.16/UnixFileStore.java:59)
    	at sun.nio.fs.UnixFileStore.<init>(java.base@11.0.16/UnixFileStore.java:74)
    	at sun.nio.fs.LinuxFileStore.<init>(java.base@11.0.16/LinuxFileStore.java:53)
    	at sun.nio.fs.LinuxFileSystem.getFileStore(java.base@11.0.16/LinuxFileSystem.java:128)
    	at sun.nio.fs.UnixFileSystem$FileStoreIterator.readNext(java.base@11.0.16/UnixFileSystem.java:211)
    	at sun.nio.fs.UnixFileSystem$FileStoreIterator.hasNext(java.base@11.0.16/UnixFileSystem.java:222)
    	- locked <0x0000000630dbf730> (a sun.nio.fs.UnixFileSystem$FileStoreIterator)
    	at org.apache.lucene.util.IOUtils.getFileStore(IOUtils.java:595)
    	at org.apache.lucene.util.IOUtils.spinsLinux(IOUtils.java:539)
    	at org.apache.lucene.util.IOUtils.spins(IOUtils.java:528)
    	at org.apache.lucene.util.IOUtils.spins(IOUtils.java:503)
    	at org.apache.lucene.index.ConcurrentMergeScheduler.initDynamicDefaults(ConcurrentMergeScheduler.java:412)
    	- locked <0x000000062b403e30> (a org.apache.lucene.index.ConcurrentMergeScheduler)
    	at org.apache.lucene.index.ConcurrentMergeScheduler.merge(ConcurrentMergeScheduler.java:500)
    	- locked <0x000000062b403e30> (a org.apache.lucene.index.ConcurrentMergeScheduler)
    	at org.apache.lucene.index.IndexWriter.waitForMerges(IndexWriter.java:2621)
    	at org.apache.lucene.index.IndexWriter.shutdown(IndexWriter.java:1275)
    	at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1319)
    	at com.atlassian.jira.index.MonitoringIndexWriter.close(MonitoringIndexWriter.java:91)
    	at com.atlassian.jira.index.WriterWrapper.close(WriterWrapper.java:399)
    	at com.atlassian.jira.index.WriterWithStats.close(WriterWithStats.java:203)
    	at com.atlassian.jira.index.DefaultIndexEngine$WriterReference.doClose(DefaultIndexEngine.java:231)
    	at com.atlassian.jira.index.DefaultIndexEngine$WriterReference.doClose(DefaultIndexEngine.java:203)
    	at com.atlassian.jira.index.DefaultIndexEngine$ReferenceHolder$1.apply(DefaultIndexEngine.java:258)
    	at io.atlassian.fugue.Effect.accept(Effect.java:43)
    	at io.atlassian.fugue.Option$Some.forEach(Option.java:468)
    	at io.atlassian.fugue.Option$Some.foreach(Option.java:464)
    ...
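
The stack trace shows the indexing thread blocked in sun.nio.fs.UnixFileSystem$FileStoreIterator, which Lucene's ConcurrentMergeScheduler reaches through IOUtils.spins() when it checks whether the index directory is backed by a spinning disk. That check enumerates every mount on the host, so a single unreachable mount stalls the whole re-index. The probe below is a minimal sketch, not part of Jira, that exercises the same java.nio call path; the class name is illustrative, and it should be run with the same JDK and OS user as Jira. If the loop hangs, the JVM on that host cannot enumerate its file stores:

    import java.nio.file.FileStore;
    import java.nio.file.FileSystems;

    // Minimal probe: iterating the default FileSystem's file stores performs a
    // stat() against every mount point, the same call path Lucene follows in
    // IOUtils.spins(). A hang here points at an unreachable mount on the OS.
    public class FileStoreProbe {
        public static void main(String[] args) {
            for (FileStore store : FileSystems.getDefault().getFileStores()) {
                // Print each store as it is reached; the last line printed before
                // a hang identifies the mount enumerated just before the bad one.
                System.out.println("Enumerated file store: " + store);
            }
            System.out.println("All file stores enumerated; no hang on this host.");
        }
    }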

Cause

This issue is caused by an inaccessible network mount (for example, a stale mount or a blocked NFS port). The mount does not need to be related to Jira at all; it is enough that the mount is present on the operating system and cannot be accessed. When Lucene initialises its merge scheduler, it enumerates every file store on the host to determine whether the index directory is on a spinning disk, so a single unreachable mount blocks the re-index.


One potential way to reproduce the problem:

  1. Install a fresh Jira instance on Linux.
  2. Create an NFS mount unrelated to Jira (for example, /tmp/targetfolder) using /etc/fstab.
  3. Block connections to the NFS port (2049).
  4. Try to access Jira functionality, or start a re-index.

Solution

  • Check for inaccessible NFS mounts on the operating system and make them accessible again (a hypothetical probe for locating a stale mount is sketched after this list).
  • In case of stale mounts, reboot the server to remove them.
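
If it is not obvious which mount is the culprit, the sketch below is one hypothetical way to narrow it down from the Java side. It is not part of Jira: it reads the Linux-specific /proc/mounts file, probes each mount point with Files.getFileStore(), and flags any probe that does not return within an arbitrary 5-second timeout. The class name, timeout, and output format are all illustrative assumptions.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    // Hypothetical helper: stat each mount point from /proc/mounts with a
    // timeout, so a hanging stale mount shows up as a timed-out probe instead
    // of a hung shell or a hung JVM thread.
    public class StaleMountFinder {
        public static void main(String[] args) throws IOException {
            // Daemon threads let the JVM exit even if a probe stays stuck in a
            // native stat() call that cannot be interrupted.
            ExecutorService executor = Executors.newCachedThreadPool(runnable -> {
                Thread thread = new Thread(runnable);
                thread.setDaemon(true);
                return thread;
            });

            // /proc/mounts format: <device> <mount point> <fs type> <options> ...
            List<String> mounts = Files.readAllLines(Paths.get("/proc/mounts"));
            for (String entry : mounts) {
                String mountPoint = entry.split("\\s+")[1];
                Future<?> probe = executor.submit(() -> Files.getFileStore(Paths.get(mountPoint)));
                try {
                    probe.get(5, TimeUnit.SECONDS);
                    System.out.println("OK      " + mountPoint);
                } catch (TimeoutException e) {
                    System.out.println("STALE?  " + mountPoint + " (stat did not return within 5 seconds)");
                } catch (ExecutionException | InterruptedException e) {
                    System.out.println("ERROR   " + mountPoint + ": " + e);
                }
            }
        }
    }

Any mount reported as stale or erroring should be repaired or unmounted before re-indexing is retried.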

