Node sizing in a clustered Jira environment

This page covers recommendations for node sizing, memory allocation, and CPU requirements for cluster nodes.

 


Sizing guidelines

For a two-node Data Center cluster, Atlassian recommends making each node the same size as your current unclustered Jira instance. This provides adequate resources for high availability: if one node fails, the remaining node must be able to handle the full load on its own, or clustering gives you no improvement in availability. The application can operate with reduced performance during an outage, but make sure each node has sufficient memory and CPU to continue handling the expected load.

For a cluster with more than two nodes, you may be able to use less memory or CPU per node, but the remaining nodes must still have enough capacity to handle the expected load if one node is lost.
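As a rough illustration of that rule, the sketch below estimates the capacity each node needs so that the cluster can still carry peak load after losing one node. The peak load figure and the unit of measure are hypothetical placeholders you would replace with your own measurements.

# Rough per-node sizing sketch: the cluster must still carry peak load
# after losing one node. The numbers below are hypothetical placeholders.

def required_per_node_capacity(peak_load: float, node_count: int) -> float:
    """Capacity each node needs so that node_count - 1 nodes can carry peak_load."""
    if node_count < 2:
        raise ValueError("A cluster needs at least two nodes")
    return peak_load / (node_count - 1)

# Example: the unclustered instance peaks at 100 "units" of load
# (requests per second, CPU, or memory demand - pick one measure).
peak = 100.0
for nodes in (2, 3, 4):
    per_node = required_per_node_capacity(peak, nodes)
    print(f"{nodes} nodes -> size each node for about {per_node:.0f} units")
# With 2 nodes, each node ends up as large as the unclustered instance, as recommended above.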

When you're deciding on the resources required in the cluster, make sure to consider:

  • The current performance of the application under low and peak load conditions
  • How your Jira instance will grow and be used in the future

Memory usage guidelines

There are three types of memory usage in Data Center:

  • Constant in each node - This is the baseline memory required to run Jira, for example application and plugin classes, caches, and Tomcat. A copy of this memory is located on each node.
  • Variable with usage - This is the memory used for individual requests, such as searches, results, and general request data. This memory demand is shared across the nodes.
  • Headroom - Some requests use a large amount of memory on their own, so it is important to keep a buffer so that these requests don't run out of memory. Each node needs this headroom.

The balance between these three types of memory usage differs from cluster to cluster. For example, two instances may have the same number of issues and similar usage patterns but completely different cache sizes, because one cluster has a single project and the other has 100 projects. Other Jira configuration, such as custom fields and schemes, can also affect memory usage.

To understand the memory usage in your application, compare memory usage at low and peak usage times, or examine a heap dump.
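A minimal way to reason about those three components is to add them up per node. The sketch below does this with entirely hypothetical figures; you would derive the real values from your own monitoring and heap dumps, and spread the variable portion across the nodes that remain after a single-node failure.

# Hypothetical per-node heap estimate built from the three memory types above.
# Replace every figure with values observed in your own instance.

baseline_mb = 2048              # constant per node: classes, caches, Tomcat, etc.
avg_request_mb = 8              # variable: typical memory held by one request
peak_concurrent_requests = 60   # concurrent requests across the whole cluster
headroom_mb = 1024              # buffer for occasional very large requests

def per_node_heap_mb(node_count: int) -> int:
    # Requests are spread across the nodes left after losing one node,
    # while the baseline and headroom are required on every node.
    surviving_nodes = max(node_count - 1, 1)
    variable_mb = (peak_concurrent_requests / surviving_nodes) * avg_request_mb
    return round(baseline_mb + variable_mb + headroom_mb)

for nodes in (2, 3, 4):
    print(f"{nodes} nodes -> roughly {per_node_heap_mb(nodes)} MB of heap per node")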

CPU guidelines

For a single request in Data Center, the node that services the request has a slightly higher CPU requirement than an unclustered Jira instance. It has to do everything a standard Jira instance would, plus a little extra work to keep the other nodes' caches and indexes in sync.

However, during concurrent usage the work is shared between the nodes, so the cluster should be able to maintain its performance under a higher concurrent load.

To understand the CPU requirements for your nodes, examine your current (non-clustered) CPU usage at low and peak usage times.
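To make that concrete, the sketch below applies a hypothetical cluster-overhead factor to your measured peak CPU and spreads the load across the nodes that would remain after a single-node failure. The 10% synchronization overhead and the core counts are assumptions for illustration, not figures published by Atlassian.

# Hypothetical per-node CPU estimate for a cluster that tolerates one node failure.
# The 10% overhead for cache and index synchronization is an assumed figure.

measured_peak_cpu_cores = 8.0   # cores used by the unclustered instance at peak
cluster_overhead = 1.10         # assumed extra work to keep caches and indexes in sync

def per_node_cpu_cores(node_count: int) -> float:
    surviving_nodes = max(node_count - 1, 1)
    return measured_peak_cpu_cores * cluster_overhead / surviving_nodes

for nodes in (2, 3, 4):
    print(f"{nodes} nodes -> plan for about {per_node_cpu_cores(nodes):.1f} cores per node")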

 

 
