High CPU usage caused by Bitbucket Server

Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.

Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Summary

While high CPU usage is expected (see Scaling Bitbucket Server - CPU for details), it should only last for a limited amount of time.

There may be other contributing factors besides the ones listed on this page.

Problem

Bitbucket Server is consuming a large amount of CPU on the host - we have seen examples of 80%+ CPU usage (this can be observed with the top command - example below).

top - 15:54:19 up 158 days, 1:02, 2 users, load average: 4.04, 5.00, 4.58
Tasks: 243 total, 1 running, 240 sleeping, 0 stopped, 2 zombie
Cpu(s): 81.1%us, 3.7%sy, 0.0%ni, 15.0%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem:  3924792k total, 3812744k used, 112048k free, 68068k buffers
Swap: 2097144k total,  110204k used, 1986940k free, 1917588k cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND
 9569 git       20   0 3632m 1.3g  10m S 301.8 34.1  1277:32 java
 7210 git       20   0  212m  22m  19m S   5.9  0.6  0:00.81 git
 7249 git       20   0  185m  21m  20m S   5.6  0.6  0:00.46 git
 7166 git       20   0  214m  24m  20m S   5.3  0.7  0:01.12 git
 7238 git       20   0  185m  21m  20m S   4.9  0.6  0:00.48 git
 7155 git       20   0  215m  26m  21m S   4.6  0.7  0:01.39 git
 7239 git       20   0  185m  22m  20m S   4.6  0.6  0:00.49 git
 7267 git       20   0  161m  18m  17m S   3.9  0.5  0:00.24 git
27888 git       20   0  878m 363m 174m S   1.0  9.5  1:03.53 git
 7163 82052508  20   0 33060 2464 1700 R   0.3  0.1  0:00.07 top

Diagnosis

Diagnostic Steps

  • Check whether any installed plugins are outdated, i.e. whether a more recent version is available.

  • Check if the Java process with high CPU usage is the Bitbucket Server one or the bundled Elasticsearch one (see the sketch below).
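To tell the two JVMs apart, one minimal sketch (assuming a Linux host with procps; the class names in the comments are the usual markers, so adjust the patterns for your installation) is to list Java processes by CPU usage and inspect their full command lines:

# List Java processes sorted by CPU usage (Linux/procps syntax)
ps -eo pid,pcpu,args --sort=-pcpu | grep [j]ava | head -5

# Inspect the full command line of the suspect PID (9569 is taken from the top output above):
#   - the bundled Elasticsearch JVM normally references org.elasticsearch
#   - the Bitbucket Server JVM references the Bitbucket/Tomcat launcher classes instead
ps -fwwp 9569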

Cause

There could be different root causes for this issue.

Cause #1 - plugins (e.g. Awesome Graphs)

We have seen "Awesome Graphs" plugin (version 2 and earlier) come up as the root cause of many of them. This plugin appears to cope well with smaller repositories but the performance can drop with larger ones.

To identify whether this is the case, refer to atlassian-bitbucket-profile.log. If you find an entry like the one below (calling HistoryService, as you can see on the second line), the Awesome Graphs plugin is most probably the culprit:

2013-08-20 15:19:40,779 | pool-5-thread-252 | 549x73988x2 | bitbucketwallmonitor | 6w76oc
[68152ms] - Page com.atlassian.bitbucket.history.HistoryService.getChangesets(Repository,String,String,PageRequest)
  [0ms] - ScmCommandFactory com.atlassian.bitbucket.scm.ScmService.getCommandFactory(Repository)
  [68101ms] - /opt/git/bin/git rev-list --format=%H%x02%h%x02%P%x02%p%x02%aN%x02%aE%x02%at%n%B%n%x03 HEAD --
  [0ms] - String com.atlassian.bitbucket.internal.plugin.PluginSettingDao.get(String,String)
  [0ms] - String com.atlassian.bitbucket.internal.plugin.PluginSettingDao.get(String,String)
  [0ms] - String com.atlassian.bitbucket.internal.plugin.PluginSettingDao.get(String,String)
  [44ms] - Map com.atlassian.bitbucket.internal.content.ChangesetDao.getAttributesForChangesets(Collection,Collection)
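As a rough way to gauge how often this occurs, you can grep the profile log. This is only a sketch: the log path is an assumption (use wherever your instance writes atlassian-bitbucket-profile.log), and profiling must be enabled for the log to be populated.

# Count slow HistoryService calls recorded in the profile log
grep -c 'HistoryService.getChangesets' $BITBUCKET_HOME/log/atlassian-bitbucket-profile.log

# Show recent occurrences with their timings for context
grep 'HistoryService.getChangesets' $BITBUCKET_HOME/log/atlassian-bitbucket-profile.log | tail -20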

Thread dumps (found in the thread-dump directory after you generate a support zip) may reinforce the "Awesome Graphs" diagnosis:

[62670] pool-5-thread-409: TIMED_WAITING (waiting on java.util.concurrent.CountDownLatch$Sync@7144afda)
    ...
    com.stiltsoft.bitbucket.graphs.manager.GraphsManager.getChangesets(GraphsManager.java:45)
    com.stiltsoft.bitbucket.graphs.manager.CommitActivityManager.getCommitActivity(CommitActivityManager.java:46)
    com.stiltsoft.bitbucket.graphs.manager.CommitActivityManager.access$100(CommitActivityManager.java:22)
    com.stiltsoft.bitbucket.graphs.manager.CommitActivityManager$EntityBuilder.getEntity(CommitActivityManager.java:112)
    com.stiltsoft.bitbucket.graphs.manager.CommitActivityManager$EntityBuilder.getEntity(CommitActivityManager.java:99)
    com.stiltsoft.bitbucket.graphs.cache.CacheUpdateCallable.call(CacheUpdateCallable.java:25)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    java.util.concurrent.FutureTask.run(FutureTask.java:166)
    java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    java.util.concurrent.FutureTask.run(FutureTask.java:166)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:722)
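To tie a hot thread seen in top back to a stack like the one above, one approach (a sketch, assuming a Linux host and a JDK with jstack on the PATH; the PIDs are illustrative) is to list per-thread CPU usage and match the hottest thread's ID, in hexadecimal, against the nid field in a thread dump:

# 1. Show per-thread CPU usage for the Bitbucket JVM (9569 is the PID from the top output above)
top -H -p 9569

# 2. Convert the hottest thread's PID to hex - HotSpot thread dumps list it as nid=0x...
printf 'nid=0x%x\n' 62670     # -> nid=0xf4ce

# 3. Take a thread dump and find that thread's stack (or search the thread-dump
#    directory from a support zip instead of running jstack)
jstack 9569 | grep -A 20 'nid=0xf4ce'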

Cause #2 - bundled Elasticsearch indexes are corrupted

In this case, the Java process with high CPU usage corresponds to the bundled Elasticsearch process rather than to Bitbucket Server itself.

The atlassian-bitbucket.log file contains the following ERROR:

ERROR [http-nio-7990-exec-16] X182486 <session id> <username> <ip address> "POST /rest/search/latest/search HTTP/1.1" c.a.b.s.internal.rest.SearchResource Unexpected response code from Elasticsearch: 503
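A quick way to confirm that the bundled search server is unhealthy is to query its cluster health endpoint from the Bitbucket node. This is only a sketch: port 7992 is the usual default for the bundled Elasticsearch, and you may need to pass credentials with -u if search authentication is enabled on your instance.

# Query the bundled Elasticsearch cluster health (default port assumed to be 7992)
curl -s http://localhost:7992/_cluster/health?pretty

# A "red" status, or a connection failure, is consistent with the 503 responses
# logged by SearchResource above.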

Solution

Resolution

Resolution for Cause #1 - plugins (e.g. Awesome Graphs)

  • Update the "Awesome Graphs" plugin to a version newer than version 2, since version 2 and earlier are the versions associated with this behavior.

Resolution for Cause #2 - bundled Elasticsearch indexes are corrupted
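
Rebuilding the search index so that the corrupted indexes are recreated normally resolves this. The following is only a sketch of one way to do it on a single node: the service name and the search data location ($BITBUCKET_HOME/shared/search/data) are assumptions - confirm both for your version, and take a backup, before removing anything.

# Stop Bitbucket Server and make sure the bundled search process is stopped too
# (the service name is an assumption for your installation)
systemctl stop bitbucket

# Move the existing search data aside so the indexes are recreated from scratch
# (path is an assumption - verify it before removing anything)
mv "$BITBUCKET_HOME/shared/search/data" "$BITBUCKET_HOME/shared/search/data.bak"

# Start Bitbucket again; it should rebuild the indexes and re-index repositories in the background
systemctl start bitbucket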

Updated on April 2, 2025
