Documentation for JIRA 4.0.

To our knowledge, JIRA does not have any memory leaks. We know of various public high-usage JIRA instances (eg. 40k issues, 100+ new issues/day, 22 pages/min in 750Mb of memory) that run for months without problems. When memory problems do occur, the following checklist can help you identify the cause.

Too little memory allocated?

Check the System Info page (see Increasing JIRA memory) after a period of sustained JIRA usage to determine how much memory is allocated.

Checklist

  1. Set the minimum amount of memory (--JvmMs for the Windows service, -Xms otherwise). See the sketch after this list.
  2. Restart JIRA.
  3. Go to Admin -> System Info, and ensure that Total Memory is at least the minimum you set.
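
As a rough sketch, on a JIRA Standalone install on Unix the heap settings can be added to bin/setenv.sh (the 256m/512m values below are illustrative only; size them for your own server):

export CATALINA_OPTS="$CATALINA_OPTS -Xms256m -Xmx512m"

For the Windows service, the equivalent (again with illustrative values) would be something like:

tomcat5 //US//JIRA --JvmMs 256 --JvmMx 512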

Too much memory allocated?

When increasing Java's memory allocation with -Xmx, please ensure that your system actually has the allocated amount of memory free. For example, if you have a server with 1Gb of RAM, most of it is probably taken up by the operating system, database and whatnot. Setting -Xmx1Gb to a Java process would be a very bad idea. Java would claim most of this memory from swap (disk), which would dramatically slow down everything on the server. If the system ran out of swap, you would get OutOfMemoryErrors.

If the server does not have much memory free, it is better to set -Xmx conservatively (eg. -Xmx256m), and only increase -Xmx when you actually see OutOfMemoryErrors. Java's memory management will work to keep within the limit, which is better than going into swap.

Task List

  1. On Windows, press Ctrl-Alt-Del to open Task Manager, and check the amount of memory marked "Available".
  2. On Unix, cat /proc/meminfo or use top to determine free memory (see the example commands after this list).
  3. If JIRA is running, check there is spare available memory.
  4. If raising a support request, please let us know the total system memory and (if on Linux) the /proc/meminfo output.
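
For example, on Linux either of the following shows how much physical memory and swap are actually free (the grep filter is just a convenience):

grep -E 'MemFree|SwapFree' /proc/meminfo
free -m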

Bugs in older JIRA versions

Please make sure you are using the latest version of JIRA, as memory leaks are regularly fixed in new releases. Here are some recent examples:

Key Summary Updated Status
JRA-28572 Jira re-indexing process appears to leak memory Oct 09, 2012 Resolved
JRA-28519 JIRA dies with OutOfMemoryException when rendering issues with numerous and/or large comments Apr 09, 2014 Open
JRA-27670 Chop too long texts to avoid memory issues during rendering Oct 28, 2013 Open
JRA-27415 SSOSeraphAuthenticator unnecessarily creates sessions on logout Mar 06, 2012 Open
JRA-24623 The size of description fields should be able to be limited Mar 06, 2014 Resolved
JRA-19622 OutOfMemoryError caused by a large number of Issue Security Levels Jul 24, 2012 Resolved
JRA-19198 Classloader leak in atlassian-plugins-2.3.1 Nov 11, 2009 Resolved
JRA-18581 Single Level Group By Report unbound memory usage Jan 22, 2013 Open
JRA-18202 Add Google Collections to the webapp classpath to workaround FinalizableReferenceQueue memory leak Aug 07, 2009 Resolved
JRA-18129 Memory Leak in SAL 2.0.10 Jul 30, 2009 Resolved
JRA-18116 Memory Leak in Apache Shindig Aug 11, 2009 Resolved
JRA-17390 Memory Leak in Felix framework BundleProtectionDomain May 22, 2009 Resolved
JRA-16765 Re-enable bundled plugins in setenv May 11, 2009 Resolved
JRA-16750 Fix any memory leaks in JIRA mainly caused by restoring data from XML and refreshing all singleton objects May 05, 2009 Resolved
JRA-16742 SOAP search methods are unbounded - this can lead to xml-rpc generating huge xml responses causing memory problems Apr 15, 2009 Resolved
JRA-15898 too many commit Nov 05, 2008 Resolved
JRA-15489 Tomcat Manager not unloading classes leading to Permgen errors Aug 28, 2008 Resolved
JRA-15460 Cannot create index directory on reindexing jira Aug 27, 2008 Resolved
JRA-15059 One/TwoDimensionalTermHitCollectors use StatsJiraLuceneFieldCache with no cacheing Dec 10, 2012 Resolved
JRA-14053 MappedSortComparator needs to reduce its memory footprint Feb 05, 2010 Closed
(Showing 20 of 25 issues.)

Too many webapps (out of PermGen space)

People running multiple JSP-based web applications (eg. JIRA and Confluence) in one Java server are likely to see this error:

java.lang.OutOfMemoryError: PermGen space

Java reserves a fixed 64Mb block for loading class files, and with more than one webapp this is often exceeded. You can fix this by setting the -XX:MaxPermSize=128m property. See the Increasing JIRA memory page for details.
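
For example, on a JIRA Standalone install on Unix the flag can be appended in bin/setenv.sh (a sketch, following the CATALINA_OPTS convention used later on this page):

export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=128m"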

Tomcat memory leak

Tomcat caches JSP content. If JIRA is generating huge responses (eg. multi-megabyte Excel or RSS views), then these cached responses will quickly fill up memory and result in OutOfMemoryErrors.

In Tomcat 5.5.15+ there is a workaround: set the org.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true system property. For earlier Tomcat versions, including the version bundled with JIRA Standalone 3.6.x and earlier, there is no workaround. Please upgrade Tomcat, or switch to another app server.
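
One way to set this (a sketch for JIRA Standalone on Unix; other app servers pass system properties differently) is as a -D system property in bin/setenv.sh:

export CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"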

Task List

  1. Ensure you are using Tomcat 5.5.15 or above.
  2. On Unix, run ps -ef | grep java and make sure the LIMIT_BUFFER property is set.

Other webapps

We strongly recommend running JIRA in its own JVM (app server instance), so that web applications cannot affect each other, and each can be restarted/upgraded separately. Usually this is achieved by running app servers behind Apache or IIS.

If you are getting OutOfMemoryErrors, separating the webapps should be your first action. It is virtually impossible to work out retroactively which webapp is consuming all the memory.
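
For example, a minimal Apache mod_proxy fragment for fronting a standalone JIRA instance might look like this (the context path and port are assumptions, not taken from this page):

ProxyPass        /jira http://localhost:8080/jira
ProxyPassReverse /jira http://localhost:8080/jira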

Plugins

Plugins are a frequent cause of memory problems. If you have any third-party plugins in use, try disabling them temporarily. The same applies to Atlassian plugins such as the toolkit, charting and calendar plugins.

Millions of notificationinstance records

In order to correctly 'thread' email notifications in mail clients, JIRA tracks the Message-Id header of the mails it sends. In heavily used systems, the notificationinstance table can become huge, with millions of records. This can cause OutOfMemoryErrors in the JDBC driver when it is asked to generate an XML export of the data (see JRA-11725).

Task List

  1. Run the SQL query select count(*) from notificationinstance;. If you have over (say) 500,000 records, delete the old ones with delete from notificationinstance where id < <an id roughly halfway through the table> (see the sketch after this list).
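
A minimal SQL sketch of that check and clean-up (the id value 500000 is purely illustrative; pick one roughly halfway through your own id range, and back up the database first):

select count(*) from notificationinstance;
select min(id), max(id) from notificationinstance;
delete from notificationinstance where id < 500000;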

Services (custom, CVS, etc)

Occasionally people write their own services, which can cause memory problems if (as is often the case) they iterate over large numbers of issues. If you have any custom services, please try disabling them for a while to eliminate them as a cause.

The CVS service sometimes causes memory problems, if used with a huge CVS repository (in this case, simply increase the allocated memory).

A symptom of a CVS (or general services-related) problem is that JIRA will run out of memory just minutes after startup.

JIRA backup service with large numbers of issues

Do you have hundreds of thousands of issues? Is JIRA's built-in backup service running frequently? If so, please switch to a native backup tool and disable the JIRA backup service, which will be taking a lot of CPU and memory to generate backups that are unreliable anyway (due to lack of locking). See the JIRA backups documentation for details.

JIRA mail misconfiguration causing comment loops

Does a user have an e-mail address that is the same as one of the mail accounts used by your mail handler services? This can cause a comment loop: a notification is sent out and appended to the issue as a comment, which triggers another notification, and so on. If a user then views such an issue, rendering it can consume a lot of memory. The following query shows issues with more than 50 comments. Fifty comments can be perfectly normal; look instead for irregular patterns in the comments themselves, such as repeated notifications.

SELECT count(*) AS commentcount, issueid FROM jiraaction GROUP BY issueid HAVING count(*) > 50 ORDER BY commentcount DESC

The SOAP getProjects request

The SOAP getProjects call loads a huge object graph, particularly when there are many users in JIRA, and thus can cause OutOfMemoryErrors. Please always use getProjectsNoSchemes instead.

Eclipse Mylyn plugin

If your developers use the Eclipse Mylyn plugin, make sure they are using the latest version. The Mylyn bundled with Eclipse 3.3 (2.0.0.v20070627-1400) uses the getProjects method, causing problems as described above.

Task List

  1. As described below, enable access logging and ensure the latest Mylyn plugin is used.

Huge XML/RSS or SOAP requests

This applies particularly to publicly visible JIRA instances. Sometimes a crawler can slow down JIRA by making multiple huge requests. Every now and then someone misconfigures their RSS reader to request XML for every issue in the system, and sets it running once a minute. Similarly, people sometimes write SOAP clients without considering the performance impact, and set them running automatically. JIRA might survive these (although it will seem oddly slow), but then run out of memory when a legitimate user's large Excel view pushes it over the limit.

The best way to diagnose unusual requests is to enable Tomcat access logging (on by default in JIRA Standalone), and look for requests that take a long time.

In JIRA 3.10 there is a jira.search.views.max.limit property you can set in WEB-INF/classes/jira-application.properties, which is a hard limit on the number of search results returned. It is a good idea to enable this for sites subject to crawler traffic.
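
For example, a line like the following in WEB-INF/classes/jira-application.properties caps the number of search results returned (the value 1000 is an illustration, not a recommendation):

jira.search.views.max.limit = 1000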

Unusual JIRA usage

Every now and then someone reports memory problems, and after much investigation we discover they have 3,000 custom fields, or are parsing 100Mb emails, or have in some other way used JIRA in unexpected ways. Please be aware of where your JIRA installation deviates from typical usage.

Memory diagnostics

If you have been through the list above, there are a few further diagnostics which may provide clues.

Getting memory dumps

By far the most powerful and effective way of identifying memory problems is to have JIRA dump the contents of its memory when it exits due to an OutOfMemoryError. These dumps are generated with no noticeable performance impact. This can be done in one of two ways:

  • On Sun's JDK 1.5.0_07 and above, or 1.4.2_12 and above, set the -XX:+HeapDumpOnOutOfMemoryError option. If JIRA runs out of memory, it will create a java_pid<pid>.hprof file containing the memory dump in the directory you started JIRA from.
  • On other platforms, you can use the YourKit profiler agent. YourKit can take memory snapshots when the JVM exits, when an OutOfMemoryError is imminent (eg. 95% memory used), or when manually triggered. The agent part of YourKit is freely redistributable. For more information, see Profiling Memory and CPU usage with YourKit.

Please reduce your maximum heap size (-Xmx) to 750m or so, so that the generated heap dump is of a manageable size. You can turn -Xmx back up once a heap dump has been taken.
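
Combining the two suggestions, a bin/setenv.sh sketch for a Unix install might look like this (the heap size is illustrative):

export CATALINA_OPTS="$CATALINA_OPTS -Xmx750m -XX:+HeapDumpOnOutOfMemoryError"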

Enable gc logging

Garbage collection logging looks like this:

0.000: [GC [PSYoungGen: 3072K->501K(3584K)] 3072K->609K(4992K), 0.0054580 secs]
0.785: [GC [PSYoungGen: 3573K->503K(3584K)] 3681K->883K(4992K), 0.0050140 secs]
1.211: [GC [PSYoungGen: 3575K->511K(3584K)] 3955K->1196K(4992K), 0.0043800 secs]
1.734: [GC [PSYoungGen: 3583K->496K(3584K)] 4268K->1450K(4992K), 0.0045770 secs]
2.437: [GC [PSYoungGen: 3568K->499K(3520K)] 4522K->1770K(4928K), 0.0042520 secs]
2.442: [Full GC [PSYoungGen: 499K->181K(3520K)] [PSOldGen: 1270K->1407K(4224K)]
    1770K->1589K(7744K) [PSPermGen: 6658K->6658K(16384K)], 0.0480810 secs]
3.046: [GC [PSYoungGen: 3008K->535K(3968K)] 4415K->1943K(8192K), 0.0103590 secs]
3.466: [GC [PSYoungGen: 3543K->874K(3968K)] 4951K->2282K(8192K), 0.0051330 secs]
3.856: [GC [PSYoungGen: 3882K->1011K(5248K)] 5290K->2507K(9472K), 0.0094050 secs]

This can be parsed with tools like gcviewer to get an overall picture of memory use.

To enable gc logging, start JIRA with the options -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:gc.log. Replace gc.log with an absolute path to a gc.log file.

For example, with a Windows service, run:

tomcat5 //US//JIRA ++JvmOptions="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:c:\jira\logs\gc.log"

or in bin/setenv.sh, set:

export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:${CATALINA_BASE}/logs/gc.log"

If you modify bin/setenv.sh, you will need to restart JIRA for the changes to take effect.

Access logs

It is important to know what requests are being made, so unusual usage can be identified. For instance, perhaps someone has configured their RSS reader to request a 10Mb RSS file once a minute, and this is killing JIRA.

If you are using Tomcat, access logging can be enabled by adding the following to conf/server.xml, below the </Host> tag:
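
A sketch of a suitable AccessLogValve entry, matching the %S and %D fields described below (the directory, prefix and suffix values are assumptions):

<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs" prefix="access_log." suffix=""
       pattern="%h %l %u %t &quot;%r&quot; %s %b %T %S %D"/>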

The %S logs the session ID, allowing requests from distinct users to be grouped. The %D logs the request time in milliseconds. Logs will appear in logs/access_log.<date>, and look like this:

127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /secure/Dashboard.jspa HTTP/1.1" 200 15287 2.835 A2CF5618100BFC43A867261F9054FCB0 2835
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined-printable.css HTTP/1.1" 200 111 0.030 A2CF5618100BFC43A867261F9054FCB0 30
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined.css HTTP/1.1" 200 38142 0.136 A2CF5618100BFC43A867261F9054FCB0 136
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/global.css HTTP/1.1" 200 548 0.046 A2CF5618100BFC43A867261F9054FCB0 46
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/combined-javascript.js HTTP/1.1" 200 65508 0.281 A2CF5618100BFC43A867261F9054FCB0 281
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar.js HTTP/1.1" 200 49414 0.004 A2CF5618100BFC43A867261F9054FCB0 4
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/lang/calendar-en.js HTTP/1.1" 200 3600 0.000 A2CF5618100BFC43A867261F9054FCB0 0
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar-setup.js HTTP/1.1" 200 8851 0.002 A2CF5618100BFC43A867261F9054FCB0 2
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/cookieUtil.js HTTP/1.1" 200 1506 0.001 A2CF5618100BFC43A867261F9054FCB0 1

Alternatively, if you are not using Tomcat or cannot modify the app server configuration, JIRA has built-in user access logging which can be enabled from the admin section and produces terser logs like:

2006-09-27 10:35:50,561 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102065-4979 1266
2006-09-27 10:35:58,002 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102806-4402 1035
2006-09-27 10:36:05,774 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/browse/EAO-2 97058+3717 1730

Thread dumps

If JIRA has hung with an OutOfMemoryError, the currently running threads often point to the culprit. Please take a thread dump of the JVM, and send us the logs containing it.
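
On Unix, a thread dump can be triggered by sending the JVM a QUIT signal; the dump is written to the app server's console log (catalina.out for JIRA Standalone):

kill -3 <jira java process id>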

References

Monitoring and Managing Java SE 6 Platform Applications