Running Jira Data Center in a cluster


Jira Data Center allows you to run a cluster of multiple Jira nodes, providing high availability, scalable capacity, and performance at scale. We’ll tell you about the benefits, and give you an overview of what you’ll need to run Jira in a clustered environment.

Ready to get started? See Set up a Jira Data Center cluster.

Benefits of clustering

Clustering is designed for enterprises with large or mission-critical Data Center deployments that require continuous uptime, instant scalability, and performance under high load.

Here are some of the benefits:

  • High availability and failover
    If one node in your cluster goes down, the others take on the load, ensuring your users have uninterrupted access to Jira.

  • Performance and scale
    Each node added to your cluster increases concurrent user capacity, and improves response time as user activity grows.

  • Instant scalability
    Add new nodes to your cluster without downtime or additional licensing fees. Indexes and apps are automatically synced.

Architecture

The image below shows a typical configuration:

Architecture of clustered Data Center.

As you can see, a Jira Data Center cluster consists of:

  • Multiple identical application nodes running Jira Data Center.

  • A load balancer to distribute traffic to all of your application nodes.

  • A shared file system that stores attachments and other shared files.

  • A database that all nodes read and write to.

All application nodes are active and process requests. A user will access the same Jira node for all requests until their session times out, they log out, or a node is removed from the cluster. 


Licensing

Your Data Center license is based on the number of users in your cluster, rather than the number of nodes. This means you can scale your environment without additional licensing fees for new servers or CPU.

You can monitor the available license seats in the Versions & licenses page in the admin console.

If you want to automate this process (for example, to send alerts when you're nearing full allocation), you can use the REST API.
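
As a rough illustration, a script could call the application roles REST resource and compare seat counts. This is a minimal sketch, assuming the /rest/api/2/applicationrole resource; the base URL, credentials, and alerting logic are placeholders, and the exact field names may differ between Jira versions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class LicenseSeatCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical base URL and credentials; replace with your own.
        String baseUrl = "https://jira.example.com";
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin-password".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/rest/api/2/applicationrole"))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response lists each application role with seat-related fields
        // (such as numberOfSeats and userCount). Compare them and raise an
        // alert (email, chat webhook, etc.) when usage approaches the limit.
        System.out.println(response.body());
    }
}
```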

Your Jira license determines which features and infrastructure choices are available. Head to Jira Server and Data Center feature comparison for a full rundown of the differences between a Server license and a Data Center license.

Home directories

To run Jira in a cluster, you'll need an additional home directory, known as the shared home.

Each Jira node has a local home that contains logs, caches, Lucene indexes and configuration files. Everything else is stored in the shared home, which is accessible to each Jira node in the cluster.

Here's a summary of what is found in the local home and shared home:

Local home:
  • logs
  • caches
  • Lucene indexes
  • configuration files
  • plugins

Shared home:
  • attachments
  • avatars / profile pictures
  • icons
  • export files
  • import files
  • plugins
  • cluster status and synchronization data

Caching

In Jira Data Center, cache modifications are replicated between the nodes to keep all of them in sync. We use asynchronous cache replication, which means that modifications aren’t replicated immediately after they occur. Instead, they are added to local queues (each node has local queues for every other node) and then replicated based on their order in the queue. This approach improves the scalability of the cluster, reduces the number of cache inconsistencies, and separates the replication itself from the cache modifications, which simplifies and speeds up the whole process.
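
As a simplified illustration of the queue-per-node idea (not Jira's actual implementation; all names below are made up), a replicator might look like this:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified illustration of asynchronous, per-peer cache replication queues.
public class CacheReplicator {

    // One modification record: which cache changed, which key, what happened.
    record CacheModification(String cacheName, String key, String operation) {}

    // A local queue for every other node in the cluster.
    private final Map<String, BlockingQueue<CacheModification>> queuesByPeer =
            new ConcurrentHashMap<>();

    // Called locally whenever a cache entry is modified: the change is queued
    // for every peer instead of being replicated synchronously.
    public void onLocalModification(CacheModification modification, Iterable<String> peerNodeIds) {
        for (String peer : peerNodeIds) {
            queuesByPeer
                    .computeIfAbsent(peer, id -> new LinkedBlockingQueue<>())
                    .add(modification);
        }
    }

    // A background worker per peer drains that peer's queue in order, so
    // replication never blocks the thread that modified the cache.
    public void startReplicationWorker(String peer) {
        Thread worker = new Thread(() -> {
            BlockingQueue<CacheModification> queue =
                    queuesByPeer.computeIfAbsent(peer, id -> new LinkedBlockingQueue<>());
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    CacheModification next = queue.take();
                    sendToPeer(peer, next); // in a real cluster, a network call to the peer
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "cache-replication-" + peer);
        worker.setDaemon(true);
        worker.start();
    }

    private void sendToPeer(String peer, CacheModification modification) {
        // Placeholder for the actual replication transport.
        System.out.printf("replicating %s to %s%n", modification, peer);
    }
}
```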

For more info, see Jira Data Center cache replication.

Indexes

Each individual Jira application node stores its own full copy of the index. A journal service keeps each index in sync.

When you first set up your cluster, you will copy the local home directory, including the indexes, from the first node to each new node.

When adding a new Jira node to an existing cluster, you will copy the local home directory of an existing node to the new node. When you start the new node, Jira will check whether the index is current. If it isn't, Jira will request a recovery snapshot of the index from either the shared home directory or a running node (with a matching build number) and extract it into the index directory before continuing the startup process. If the snapshot can't be generated, or isn't received by the new node in time, the existing index files will be removed and Jira will perform a full re-index.

If a Jira node is disconnected from the cluster for a short amount of time (hours), it will be able to use the journal service to bring its copy of the index up-to-date when it rejoins the cluster. If a node is down for a significant amount of time (days), its Lucene index will have become stale, and it will request a recovery snapshot from an existing node as part of the node startup process. 

If you suspect there is a problem with the index on all nodes, you can temporarily disable index recovery on one node, rebuild the index on that node, then copy the new index over to each remaining node.  

For more info, see Jira Data Center search indexing.

Cluster locks

When an action must run on only one node at a time, for example a scheduled job or sending daily email notifications, Jira uses a cluster lock to make sure only one node performs it.

Cluster locks are acquired and then released by a node. To make sure that a cluster lock doesn’t block the whole cluster if one of the nodes goes offline, we use a heartbeat mechanism that regularly checks if the node that acquired the lock is still active. This mechanism can release the lock, if needed.
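
Conceptually, a lock with a heartbeat could be sketched as follows. This is a simplified, in-memory illustration of the idea described above, not Jira's implementation; in a real cluster the lock records would live in the shared database:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Cluster-wide lock with a heartbeat: a lock whose owner has stopped
// heartbeating is considered stale and can be taken over by another node.
public class ClusterLockService {

    record LockRecord(String ownerNodeId, Instant lastHeartbeat) {}

    private final Map<String, LockRecord> locks = new ConcurrentHashMap<>();
    private final Duration heartbeatTimeout = Duration.ofSeconds(60);

    // Try to acquire the named lock for this node.
    public synchronized boolean tryAcquire(String lockName, String nodeId) {
        LockRecord current = locks.get(lockName);
        boolean free = current == null;
        boolean stale = current != null
                && current.lastHeartbeat().isBefore(Instant.now().minus(heartbeatTimeout));
        if (free || stale || current.ownerNodeId().equals(nodeId)) {
            locks.put(lockName, new LockRecord(nodeId, Instant.now()));
            return true;
        }
        return false;
    }

    // Called periodically by the owning node so other nodes know it is alive.
    public synchronized void heartbeat(String lockName, String nodeId) {
        LockRecord current = locks.get(lockName);
        if (current != null && current.ownerNodeId().equals(nodeId)) {
            locks.put(lockName, new LockRecord(nodeId, Instant.now()));
        }
    }

    public synchronized void release(String lockName, String nodeId) {
        LockRecord current = locks.get(lockName);
        if (current != null && current.ownerNodeId().equals(nodeId)) {
            locks.remove(lockName);
        }
    }

    public synchronized Optional<String> owner(String lockName) {
        return Optional.ofNullable(locks.get(lockName)).map(LockRecord::ownerNodeId);
    }
}
```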

Cluster node discovery

When configuring your cluster nodes you can either supply the IP address of each cluster node, or a multicast address.

If you're using multicast:

Jira will broadcast a join request on the multicast network address. Jira must be able to open a UDP port on this multicast address, or it won't be able to find the other cluster nodes. Once the nodes are discovered, each responds with a unicast (normal) IP address and port where it can be contacted for cache updates. Jira must be able to open a UDP port for regular communication with the other nodes.

A multicast address can be auto-generated from the cluster name, or you can enter your own during the setup of the first node.
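
As a rough sketch of what 'opening a UDP port on the multicast address' involves, the following uses the standard Java multicast API to join a group and wait for a datagram. The group address, port, and network interface below are placeholders, and this is not Jira's discovery code:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

// Joins a multicast group and listens for datagrams, roughly what a node must
// be able to do on the multicast address for discovery to work.
public class MulticastListener {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1"); // placeholder group address
        int port = 54327;                                       // placeholder port
        NetworkInterface nic = NetworkInterface.getByName("eth0"); // placeholder interface

        try (MulticastSocket socket = new MulticastSocket(port)) {
            socket.joinGroup(new InetSocketAddress(group, port), nic);

            byte[] buffer = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet); // blocks until a join request or announcement arrives

            System.out.printf("received %d bytes from %s%n",
                    packet.getLength(), packet.getAddress());

            socket.leaveGroup(new InetSocketAddress(group, port), nic);
        }
    }
}
```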

Infrastructure and requirements

The choice of hardware and infrastructure is up to you. Below are some areas to think about when planning your hardware and infrastructure requirements.

Running Jira Data Center on Kubernetes

If you plan to run Jira Data Center on Kubernetes, you can use our Helm charts. For more information, see Running Jira Data Center on a Kubernetes cluster.

Deploying Jira Data Center on AWS and Azure

If you plan to run Jira Data Center on AWS or Azure, you can use our templates to deploy the whole infrastructure. You’ll get your Jira Data Center nodes, database and storage all configured and ready to use in minutes. For more info, see our deployment guides for AWS and Azure.

Server requirements

You should not run additional applications (other than core operating system services) on the same servers as Jira. Running Jira, Confluence and Bamboo on a dedicated Atlassian software server works well for small installations but is discouraged when running at scale. 

Jira Data Center can be run successfully on virtual machines.

Cluster nodes requirements

Your nodes don't need to be identical, but for consistent performance we recommend they be as close to identical as possible. All cluster nodes must:

  • be located in the same data center, or region (for AWS and Azure)
  • run the same Jira version
  • have the same OS, Java and application server version
  • have the same memory configuration (both the JVM and the physical memory) (recommended)
  • be configured with the same time zone (and keep the current time synchronized). Using ntpd or a similar service is a good way to ensure this.

You must ensure the clocks on your nodes don't diverge, as it can result in a range of problems with your cluster.

How many nodes?

Your Data Center license does not restrict the number of nodes in your cluster. The right number of nodes depends on the size and shape of your Jira instance, and the size of your nodes. See our Jira Data Center size profiles guide for help sizing your instance. In general, we recommend starting small and growing as you need.

Memory requirements

We recommend that each Jira node has a minimum of 8GB RAM. This would be sufficient for a single Server instance with a small number of projects (up to 100) with 1,000 to 5,000 issues in total and about 100-200 users.

To get an idea on how large and complex your Jira instance is, see Jira Data Center size profiles.

The maximum heap (-Xmx) for the Jira application is set in the setenv.sh or setenv.bat file. The default should be increased for Data Center. We recommend setting the minimum (-Xms) and maximum (-Xmx) heap to the same value.

You can also check the details of our public Jira Data Center instances. See Jira Data Center sample deployment.

Database

You should ensure your intended database is listed in the current Supported platforms. The load on an average cluster solution is higher than on a standalone installation, so it is crucial to use a supported database.

You must also use a supported database driver, which should be listed in supported platforms linked above. For more detailed instructions on connecting Jira to a database, see Connecting Jira applications to a database.

Additional requirements for database high availability

Running Jira Data Center in a cluster removes the application server as a single point of failure. You can also do this for the database through the following supported configurations:

  • Amazon RDS Multi-AZ: this database setup features a primary database that replicates to a standby in a different availability zone. If the primary goes down, the standby takes its place.

  • Amazon PostgreSQL-Compatible Aurora: this is a cluster featuring a database node replicating to one or more readers (preferably in a different availability zone). If the writer goes down, Aurora will promote one of the readers to take its place.

The AWS Quick Start deployment option allows you to deploy Jira Data Center with either of these configurations, from scratch. If you want to set up an Amazon Aurora cluster with an existing Jira Data Center instance, refer to Configuring Jira Data Center to work with Amazon Aurora.

Shared home and storage requirements

All Jira cluster nodes must have access to a shared directory in the same path. NFS and SMB/CIFS shares are supported as the locations of the shared directory. As this directory will contain a large amount of data (including attachments and backups) it should be generously sized, and you should have a plan for how to increase the available disk space when required.

Load balancers

We suggest using the load balancer you are most familiar with. The load balancer needs to support ‘session affinity’. If you're deploying on AWS you'll need to use an Application Load Balancer (ALB).

Here are some recommendations when configuring your load balancer:

  • Queue requests at the load balancer. By making sure the maximum number of requests served to a node does not exceed the total number of HTTP threads that Tomcat can accept, you can avoid overwhelming a node with more requests than it can handle. You can check the maxThreads setting in <install-directory>/conf/server.xml.

  • Don't replay failed idempotent requests on other nodes, as this can propagate problems across all your nodes very quickly.

  • Using least connections as the load balancing method, rather than round robin, can better balance the load when a node joins the cluster or rejoins after being removed. 

Many load balancers require a URL to constantly check the health of their backends in order to automatically remove them from the pool. This URL should be stable and fast, but lightweight enough not to consume unnecessary resources. The following URL returns Jira’s status and can be used for this purpose.

URL: http://<jiraurl>/status
Expected content: {"state":"RUNNING"}
Expected HTTP status: 200 OK

All status codes and responses:

HTTP status code   Response entity         Description
200                {"state":"RUNNING"}     Running normally
500                {"state":"ERROR"}       An error state
503                {"state":"STARTING"}    Application is starting
503                {"state":"STOPPING"}    Application is stopping
200                {"state":"FIRST_RUN"}   Application is running for the first time and has not yet been configured
404                (no response entity)    Application failed to start up in an unexpected way (the web application failed to deploy)
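
For example, a custom health check could poll this endpoint and treat a node as healthy only when it returns HTTP 200 with the RUNNING state. This is a minimal sketch; the base URL is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Minimal health check against Jira's /status endpoint: the node is considered
// healthy only if it returns HTTP 200 and reports the RUNNING state.
public class JiraStatusCheck {
    public static void main(String[] args) throws Exception {
        String jiraUrl = "https://jira.example.com"; // placeholder base URL

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jiraUrl + "/status"))
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        boolean healthy = response.statusCode() == 200
                && response.body().contains("\"state\":\"RUNNING\"");

        System.out.println(healthy ? "node healthy" : "node unhealthy: " + response.body());
    }
}
```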

Here are some recommendations for setting up monitoring that can help a node survive small problems, such as a long GC pause:

  • Wait for two consecutive failures before removing a node.
  • Allow existing connections to the node to finish (for, say, 30 seconds) before the node is removed from the pool.

For more info, see Load balancer configuration options and Load balancer examples.

Network adapters

Use separate network adapters for communication between servers. Cluster nodes should have a separate physical network (i.e. separate NICs) for inter-server communication. This is the best way to get the cluster to run fast and reliably. Performance problems are likely to occur if you connect cluster nodes via a network that has lots of other data streaming through it.

App compatibility

The process for installing Marketplace apps (also known as add-ons or plugins) in a Jira cluster is the same as for a standalone installation. You will not need to stop the cluster, or bring down any nodes to install or update an app. 

The Atlassian Marketplace indicates apps that are compatible with Jira Data Center. Learn more about Data Center approved apps.

Ready to get started? 

Head to Set up a Jira Data Center cluster for a step-by-step guide to enabling and configuring your cluster.
