Node sizing overview for Atlassian Data Center
This guide applies to clustered Data Center installations. Learn more about the different Data Center deployment options.
An important part of configuring your Data Center application is sizing the application nodes in your cluster to meet your performance requirements. The number and size of the nodes in your cluster depend on your needs and how you configure your application. However, we have several articles that offer a starting point for approaching node sizing. The most important step is to test the application on the hardware configuration you choose, to ensure it meets your performance needs. For Confluence Data Center and Bitbucket Data Center, you can use the Atlassian Performance Testing Framework. For Jira Data Center, you can use these available tools for performance testing.
Sizing resources for Jira Data Center
The Jira sizing guide provides guidance on how to approach sizing for a single node based on the number of users, issues, custom fields, workflows, and more. This information offers a good starting point to understand what type of CPU and how much RAM you will need for each node:
- For a two-node Jira Data Center cluster, we recommend using the sizing guide above to estimate the node size. These estimates provide enough capacity that if one node fails, the other node can handle the load, and the application can continue operating.
- For a cluster with more than two nodes, you might be able to use a configuration of nodes with less memory or a slower CPU. However, it is still important to ensure that if one node fails, the remaining cluster has enough capacity to handle the expected load. You can find more information at Node sizing in a clustered Jira environment.
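The failover principle above can be sketched as a quick capacity check. This is a minimal illustration, not an Atlassian tool: the function name and the load and capacity figures are hypothetical placeholders, and real per-node capacity should come from your own load testing.

```python
# Hypothetical sizing check: verify that a cluster can absorb the loss of
# one node. The capacity and load figures below are illustrative only;
# measure your own values with load tests.

def survives_single_node_failure(node_count: int,
                                 per_node_capacity: float,
                                 peak_load: float) -> bool:
    """Return True if the remaining nodes can carry the peak load
    after one node fails."""
    remaining_capacity = (node_count - 1) * per_node_capacity
    return remaining_capacity >= peak_load

# A two-node cluster where each node alone can serve the full peak load:
print(survives_single_node_failure(2, per_node_capacity=1000, peak_load=1000))  # True

# A four-node cluster of smaller nodes serving the same peak load:
print(survives_single_node_failure(4, per_node_capacity=350, peak_load=1000))  # True
```

This captures why a two-node cluster needs each node sized for the full load, while a larger cluster can use smaller nodes, as long as the cluster minus one node still covers peak demand.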
Confluence Data Center
For Confluence Data Center, your servers will need enough RAM for the Confluence application and the external process pool (which handles memory and CPU intensive tasks). You may also need to allow additional RAM for Synchrony, which is required for collaborative editing, if Synchrony is running on the same node as Confluence. For a full summary of the memory requirements, head to the Confluence Data Center technical overview.
When estimating node sizes for Confluence Data Center, we recommend using the same estimation guidelines as for Confluence Server. You will also need to ensure reliable network connections between nodes, and ideally use two physical network interface cards (NICs) for each node. One network card distributes user requests, and the other manages internode communication. To understand how to approach sizing a Confluence node, see the Confluence Server hardware requirements guide.
If you plan to deploy an instance for the Large or XLarge scale, read through our Infrastructure recommendations for enterprise Confluence instances on AWS. Here, we provide recommended AWS infrastructure configurations backed by extensive performance tests.
Bitbucket Data Center
The following guide and article explain how the number of concurrent git operations relates to the number of CPUs and amount of RAM when calculating node size. It's important to test your environment based on these calculations, as actual behavior may vary based on your organization's usage. For example, an organization that uses git clone operations frequently may have different resource needs than a similarly sized organization that uses git clone less frequently.
You can roughly estimate how many nodes to include in your cluster based on how many concurrent git operations a single node can handle, and how many you need to support across your organization. For example, if a single node supports 40 concurrent git operations and you need to support 200 concurrent git operations, your Data Center deployment would need 5 of those nodes (40 * 5 = 200). However, keep in mind that adding nodes also increases the amount of communication required between nodes, so we recommend environments with fewer, higher-powered nodes rather than many low-powered nodes. You might want to test both types of clusters when determining the best option to support your needs.
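The estimate above is a simple division rounded up to the next whole node. A minimal sketch, assuming the per-node figure comes from your own testing (the function name is illustrative, not part of any Atlassian tooling):

```python
import math

# Illustrative estimate: how many nodes are needed to support a target
# number of concurrent git operations, given the measured capacity of a
# single node. Always validate the per-node figure with your own tests.

def estimate_node_count(target_concurrent_ops: int, ops_per_node: int) -> int:
    """Round up so total capacity meets or exceeds the target."""
    return math.ceil(target_concurrent_ops / ops_per_node)

# 200 concurrent git operations at 40 per node -> 5 nodes, as in the text.
print(estimate_node_count(200, 40))  # 5
```

Note that this gives the minimum node count for raw throughput only; it does not account for the internode communication overhead or failover headroom discussed above.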
For additional information regarding Bitbucket Data Center, see our blog posts explaining how we built it to scale.
For information regarding instance sizes for Bitbucket Data Center deployed to AWS, see our recommendations for running Bitbucket in AWS.
Hipchat Data Center
Hipchat Data Center supports three configurations, depending on the number of users who will use the application, and if your organization requires high availability (HA) or not. An additional AWS CloudFormation option is available for quickly deploying HA clusters, and sizing information is also available if you're comfortable building your own AWS deployments. See the deployment options and sizing guidelines for Hipchat Data Center for more information on sizing for both.
Choosing AWS EC2 instance types
The default EC2 instance type for Jira Data Center, Confluence Data Center, and Bitbucket Data Center application nodes on AWS is c3.xlarge. While you are free to choose the instance type, it must meet each product's system requirements, as noted in its sizing guide. It's also critical to note that you must not use T2 instance types for your application nodes. T2 instances are not sufficient for production environments, and using them can cause performance issues and outages.
We're here to help
An Atlassian Technical Account Manager provides strategic guidance, and will collaborate with you to ensure that your hardware configuration meets the needs of your organization's operations and long-term goals.
Our Premier Support team performs health checks by meticulously analyzing your application and logs, to ensure that your application's deployment fully meets the needs of your users. If the health check process reveals any performance gaps, Premier Support can recommend possible changes to your hardware configuration.