OpenSearch hardware recommendations for Jira
With OpenSearch, you can reduce the number of nodes and infrastructure costs, and still provide a fast, reliable search experience for your users. More about OpenSearch benefits
This page details hardware recommendations for running OpenSearch with Jira Data Center, including performance test insights to help you determine the optimal size and number of application and OpenSearch nodes.
OpenSearch requires dedicated infrastructure within a standard Jira Data Center setup. Knowing how to deploy OpenSearch for your indexing requirements will help you plan and streamline operations tailored to your business needs.
On this page:
- Hardware recommendations
- OpenSearch and Jira
- Considerations for OpenSearch deployments
- Testing approach
- Performance testing: results and analysis
- Ready to make the switch?
Hardware recommendations
Our recommendations are based on extensive performance testing. They’ll help you plan a suitable environment or assess your current instance based on content volume and traffic. Note that increasing application nodes doesn’t always improve performance and might have the opposite effect.
To get the most from these insights:

1. Determine your instance size profile.
2. Review the recommendations below.
3. Monitor your instance for bottlenecks.
OpenSearch and Jira
OpenSearch in Jira
OpenSearch in Jira isn’t just about searching — it supports everything you do, from loading boards and viewing backlogs to generating reports and working with issues. Because so many core actions rely on search, OpenSearch’s speed, consistency, and reliability have a direct impact on your team’s productivity and experience.
The table below outlines a typical setup for Jira with OpenSearch, including pricing, filesystem, and database details. Pricing doesn’t include shared home or application load balancer costs. By using OpenSearch instead of Lucene, you’ll be able to reduce the number of required Jira nodes and use cheaper hardware while maintaining similar performance.
Role | AWS Service | Instance Type | Nodes | Cost per node per hour1 | Cost per hour | Total cost per month1 |
|---|---|---|---|---|---|---|
Jira | EC2 | c5.4xlarge | 5 | $0.68 | $3.40 | $3,760.96 |
NFS | EC2 | m5.2xlarge | 1 | $0.384 | $0.384 | |
Database | RDS (PostgreSQL) | db.m6i.4xlarge (Single-AZ) | 1 | $1.368 | $1.368 | |
OpenSearch — Data node | OpenSearch (service) | m7g.2xlarge.search | 4 | $0.542 | $2.168 | $1,731.56 |
OpenSearch — Master node | OpenSearch (service) | m7g.medium.search | 3 | $0.068 | $0.204 | |
Explanation:
1Prices as of 13 November 2025 based on US East (Ohio). See Amazon's OpenSearch pricing guide and Amazon EC2 On-Demand Pricing.
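The monthly totals in the table group the Jira stack (Jira, NFS, database) and the OpenSearch stack (data and master nodes). A minimal sketch of the arithmetic, assuming the 730 hours/month convention commonly used in AWS pricing examples, which reproduces the listed totals:

```python
# Cost sketch for the table above. Per-hour prices come from the table;
# the 730 hours/month multiplier is an assumption (a common AWS
# monthly-average convention) that reproduces the listed monthly totals.

HOURS_PER_MONTH = 730

jira_stack = {  # role: (cost per node per hour, node count)
    "Jira (c5.4xlarge)": (0.68, 5),
    "NFS (m5.2xlarge)": (0.384, 1),
    "Database (db.m6i.4xlarge)": (1.368, 1),
}
opensearch_stack = {
    "Data node (m7g.2xlarge.search)": (0.542, 4),
    "Master node (m7g.medium.search)": (0.068, 3),
}

def monthly_cost(stack):
    """Sum per-role hourly costs, then extrapolate to a month."""
    hourly = sum(price * nodes for price, nodes in stack.values())
    return round(hourly * HOURS_PER_MONTH, 2)

print(monthly_cost(jira_stack))        # 3760.96
print(monthly_cost(opensearch_stack))  # 1731.56
```

Swapping in your own instance counts and current regional prices gives a first-pass estimate before consulting the AWS pricing pages.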
Considerations for OpenSearch deployments
Performance depends on factors like third-party apps, data, traffic, concurrency, customizations, and instance type. Your results might vary.
We recommend a minimum of three data nodes for OpenSearch. If you have three dedicated master nodes in Amazon OpenSearch Service, keep at least two data nodes for replication. More on dedicated master nodes
Review the test details below to help you choose between best-performing and most cost-effective options.
For more details, refer to the AWS documentation on Sizing Amazon OpenSearch Service domains.
Testing approach
All tests were run in AWS environments using standard AWS components, making it easy to replicate our recommended configurations. We used PostgreSQL with default Amazon Relational Database Service (Amazon RDS) settings and dedicated AWS infrastructure on the same subnet to minimize network latency. Read more about AWS components
Benchmarking notes:

- The dataset included all Jira issue types.
- No apps were installed on test instances; results reflect core Jira only. When designing your infrastructure, account for the performance impact of any apps you plan to install.
Dataset
The following table shows the dataset we used in our performance tests for Jira.
Metric | Value |
|---|---|
Issues | 5,407,147 |
Projects | 4,299 |
Users | 82,725 |
Custom fields | 1,200 |
Workflows | 4,299 |
Groups | 20,006 |
Comments | 8,408,998 |
Permission Schemes | 2 |
Issue Security Schemes | 11 |
The following table shows the index sizes from our test environment. If you know the size of your current Lucene index, you can estimate your OpenSearch index size by multiplying it by approximately 4.4. For example, a 14 GB Lucene index per node resulted in a 61.7 GB OpenSearch index (cluster-wide, excluding replicas). You can use this ratio as a starting point, then refer to the Amazon documentation to plan your OpenSearch cluster: Sizing Amazon OpenSearch Service domains.
Search platform | Storage | Size |
|---|---|---|
Lucene | Local index (per Jira node) | 14 GB |
OpenSearch | Primary store (cluster wide, excluding replicas) | 61.7 GB |
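The sizing estimate above can be sketched in a couple of lines. The ratio is derived from the test environment's figures (61.7 GB / 14 GB ≈ 4.4); treat it as a starting point only, since your custom fields, comments, and retention will shift it:

```python
# Back-of-the-envelope OpenSearch index sizing from a known per-node
# Lucene index size, using the ratio observed in the test environment
# above. This is an estimate, not a capacity plan.

LUCENE_GB = 14.0        # per-node Lucene index from the test environment
OPENSEARCH_GB = 61.7    # cluster-wide primary store, excluding replicas
RATIO = OPENSEARCH_GB / LUCENE_GB  # ~4.4

def estimate_opensearch_gb(lucene_index_gb: float) -> float:
    """Estimate cluster-wide OpenSearch primary-store size (excl. replicas)."""
    return round(lucene_index_gb * RATIO, 1)

print(estimate_opensearch_gb(14.0))  # 61.7 (reproduces the test environment)
print(estimate_opensearch_gb(20.0))  # 88.1
```

Remember to add headroom for replicas and operating overhead when translating the estimate into node storage, as described in the Amazon sizing documentation.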
Performance testing: results and analysis
End-user performance
Our benchmarking results show that OpenSearch delivers a 38% improvement in overall performance compared to a Lucene-based instance, based on the 95th percentile (P95) of response times. This means that 95% of all measured response times are faster than this value, so it’s a good indicator of how the system performs under heavy load. The tests included a mix of operations, with 75% reads and 25% writes.
Metric | Lucene | OpenSearch | Difference |
|---|---|---|---|
P95 Response Time | 813ms | 500ms | OpenSearch is faster by around 38.5%. |
P90 Response Time | 472ms | 348ms | OpenSearch is faster by around 26.27%. |
Error Rate | 0% | 0% | n/a |
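The "faster by" percentages in the table are the relative reduction in response time at each percentile, which you can reproduce against your own measurements:

```python
# How the improvement percentages above are derived: the percentage
# reduction in response time relative to the Lucene baseline.

def improvement_pct(lucene_ms: float, opensearch_ms: float) -> float:
    """Relative response-time reduction, as a percentage of the Lucene value."""
    return round((lucene_ms - opensearch_ms) / lucene_ms * 100, 2)

print(improvement_pct(813, 500))  # P95: 38.5
print(improvement_pct(472, 348))  # P90: 26.27
```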
Full reindexing performance
We manually triggered a full reindex and found that OpenSearch completed the process faster than Lucene. However, the real benefit of OpenSearch is that you don’t need to reindex as often. Because updates are quickly shared across all nodes (by default, every second), OpenSearch keeps your data consistent and reduces the need for full site reindexes.
Search platform1 | Duration (lower is better) |
|---|---|
Lucene | 102 minutes |
OpenSearch | 47 minutes |
Explanation:
1Run on a single Jira node on a c5.9xlarge instance.
Resource usage: Lucene vs OpenSearch
Our internal benchmarks show that OpenSearch reduces memory consumption and garbage collection overhead, significantly lowers thread contention, and eliminates the input/output (I/O) anomalies seen with Lucene.
OpenSearch enables you to achieve the same level of performance with fewer Jira nodes. For example, while a Lucene-based cluster typically requires five nodes to maintain optimal performance, OpenSearch can deliver similar results with just three nodes. This means you can scale down your Jira cluster, reduce infrastructure costs, and still provide a fast, reliable search experience for your users.
Metric | Lucene (5 nodes) | OpenSearch (3 nodes) | Notes |
|---|---|---|---|
CPU usage (peak) | 72.6% | 50% | OpenSearch uses around 31% less CPU. |
Memory usage (peak) | 8.87 GB | 4.42 GB | OpenSearch uses significantly less memory. |
Garbage collection | | | OpenSearch ensures fewer events and shorter pauses. |
Socket input/output anomalies1 (unexpected, prolonged delays in network data read operations) | 4.17 s reading 5 B | None | No anomalies in OpenSearch. |
Thread contention2 | Threads blocked for around 70 minutes | Threads blocked for around 2 seconds | OpenSearch results in fewer blocked threads and much shorter durations. |
Jira node count | 5 | 3 | OpenSearch delivers similar performance with fewer nodes. |
Explanations:
1Socket input/output anomalies are unexpected delays or interruptions when Jira nodes communicate over the network. Even a single slow read or write can cause noticeable lags for users, especially during high-traffic periods. In our tests, Lucene experienced a significant delay (over four seconds to read just five bytes), which can lead to slow page loads, timeouts, or degraded user experience.
2Thread contention occurs when multiple processes compete for the same resources, causing some operations to be blocked or delayed. High thread contention means Jira spends more time waiting and less time serving users, which can result in sluggish boards, slow issue views, and longer response times.
OpenSearch significantly reduces both socket input/output anomalies and thread contention. In our benchmarks, OpenSearch showed no anomalies and reduced blocked thread duration from 70 minutes (Lucene) to just 2 seconds. This means your Jira instance will be more stable, responsive, and able to handle heavy workloads without performance bottlenecks.
Ready to make the switch?
OpenSearch is designed to help your Jira instance grow, perform, and stay reliable — no matter how large your team or your data becomes. If you’re ready to take advantage of faster search, easier scaling, and a more resilient platform, consider enabling OpenSearch for your Jira Data Center instance.