Managing HTTP connection pool ratio in Jira Data Center

The HTTP connection pool ratio measures the share of HTTP connections in your Jira Data Center cluster's pool that are actively handling requests rather than sitting idle. This metric is crucial for understanding how well your system is handling incoming HTTP requests and how it is performing overall.

Thresholds:

  • Optimal: less than 50% utilization

  • Requires attention: between 50% and 80% utilization

  • Needs attention: greater than 80% utilization
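
As an illustration of these bands, here is a minimal Java sketch that classifies a node's pool utilization, assuming the ratio is calculated as busy threads divided by the pool's configured maximum; the class and method names are hypothetical.

    public class PoolUtilization {

        /** Classify pool utilization against the thresholds above (assumes ratio = busy / max). */
        static String classify(int busyThreads, int maxThreads) {
            double utilization = 100.0 * busyThreads / maxThreads;
            if (utilization < 50) {
                return "Optimal";
            } else if (utilization <= 80) {
                return "Requires attention";
            }
            return "Needs attention";
        }

        public static void main(String[] args) {
            // Example: 120 busy threads in a 200-thread pool is 60% utilization.
            System.out.println(classify(120, 200)); // prints "Requires attention"
        }
    }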

How does a high HTTP connection pool ratio affect Jira Data Center performance?

A high HTTP connection pool ratio can significantly impact your Jira Data Center instance in several ways:

  • Increased request processing times

  • Potential timeouts for user requests

  • Higher resource utilization across cluster nodes

  • Reduced system throughput

  • In extreme cases, system outages or unresponsiveness

A consistently high HTTP connection pool ratio can lead to degraded performance and a poor user experience across your entire Jira Data Center cluster.

What's the recommendation?

You should aim to keep the HTTP connection pool ratio below 50% for optimal performance. If you're consistently seeing ratios above this threshold, consider the following actions:

  • Monitor HTTP connection pool usage

  • Optimize thread pool configuration

  • Analyze thread dumps

  • Implement caching strategies

  • Optimize instance performance

Monitor HTTP connection pool usage

Regular monitoring of HTTP thread pool usage can help you identify patterns and potential issues before they become critical. To set up this monitoring:

  1. Enable JMX monitoring on all Jira Data Center nodes

  2. Configure monitoring tools to collect HTTP thread pool metrics (a JMX-based sketch follows below)

  3. Analyze usage patterns, looking for consistent high utilization periods

  4. Work with your IT team to address identified issues

Learn how to set up live monitoring using the JMX interface in Jira Data Center.

For advanced monitoring, see Monitor Jira with Prometheus and Grafana.
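
As a starting point for step 2 above, the following is a minimal Java sketch that reads Tomcat's HTTP thread pool attributes over JMX and prints the resulting pool ratio. The host name, JMX port, and connector name ("http-nio-8080") are placeholders; substitute the values from your own nodes and Connector configuration.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class HttpPoolRatioProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder JMX endpoint -- substitute each node's host name and JMX port.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://jira-node-1:8099/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // Tomcat publishes its HTTP thread pool under Catalina:type=ThreadPool;
                // the name attribute depends on your Connector (e.g. "http-nio-8080").
                ObjectName pool = new ObjectName("Catalina:type=ThreadPool,name=\"http-nio-8080\"");
                int busy = ((Number) mbsc.getAttribute(pool, "currentThreadsBusy")).intValue();
                int max = ((Number) mbsc.getAttribute(pool, "maxThreads")).intValue();
                System.out.printf("busy=%d, max=%d, pool ratio=%.1f%%%n",
                        busy, max, 100.0 * busy / max);
            }
        }
    }

Running a probe like this on a schedule, or wiring the same MBean attributes into your monitoring tool, gives you the utilization figure that the thresholds above refer to.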

Optimize thread pool configuration

Proper sizing of the HTTP thread pool is crucial for maintaining optimal performance:

  1. Review the current maxThreads setting in Tomcat

  2. Calculate the optimal maxThreads (a worked sketch follows after this list):

    • Determine the maximum requests per node per second:

      • maxRequestsPerNodePerSecond = maxRequestsPerMinute / 60 / numberOfNodes, 
        e.g. 8350 / 60 / 4 ≈ 35
    • Calculate the HTTP threads required per node:

      • requiredHttpThreads = maxRequestsPerNodePerSecond * avgTimeToCompleteEachRequest (in seconds), 
        e.g. 35 * 2.6 = 91
  3. Round up to allow for future growth:

    • maxThreads = 100
  4. Adjust the maxThreads value in <jira-install>/conf/server.xml on each node

  5. Restart Jira and monitor performance after changes

For more details, see Tuning HTTP and database connection pooling threads for Jira.
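
To make the calculation in step 2 concrete, here is a small Java sketch that reproduces the arithmetic above. The inputs (8,350 requests per minute, 4 nodes, 2.6 seconds per request) are the example figures from the steps, not recommendations.

    public class MaxThreadsEstimate {
        public static void main(String[] args) {
            double maxRequestsPerMinute = 8350;   // peak requests per minute across the cluster
            int numberOfNodes = 4;                // Jira Data Center nodes behind the load balancer
            double avgSecondsPerRequest = 2.6;    // average time to complete each request

            // Step 2a: maximum requests each node must handle per second.
            double requestsPerNodePerSecond = maxRequestsPerMinute / 60 / numberOfNodes;         // ~35

            // Step 2b: HTTP threads needed per node to keep up with that rate.
            double requiredThreads = Math.ceil(requestsPerNodePerSecond) * avgSecondsPerRequest; // 91

            // Step 3: round up to the next multiple of 10 to leave headroom for growth.
            long suggestedMaxThreads = (long) (Math.ceil(requiredThreads / 10) * 10);            // 100

            System.out.printf("Requests per node per second: %.1f%n", requestsPerNodePerSecond);
            System.out.printf("Required HTTP threads: %.0f -> suggested maxThreads: %d%n",
                    requiredThreads, suggestedMaxThreads);
        }
    }

The resulting value is what you would set as maxThreads in <jira-install>/conf/server.xml on each node (step 4).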

Analyze thread dumps

If your HTTP connection pool ratio consistently exceeds 80% (the "Needs attention" threshold), analyzing thread dumps can help identify the root cause.

Steps to analyze thread dumps:

  1. Generate thread dumps:

    • Use Jira's built-in tools or command-line utilities to generate thread dumps during periods of high HTTP thread usage (a remote, JMX-based alternative is sketched after this list).

    • Collect dumps from all nodes in your cluster.

  2. Analyze the thread dumps:

    • Look for patterns of blocked or long-running threads.

    • Identify any custom code or plugins that may be causing issues.

  3. Implement solutions:

    • Optimize identified problematic code or configurations.

    • Consider disabling or replacing plugins that cause excessive thread usage.

  4. Monitor and iterate:

    • Continue monitoring after implementing changes to ensure improvements.

For instructions on generating and analyzing thread dumps, see Generating a thread dump and how to take thread dumps and analyse them.
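
If attaching command-line utilities to the node is not convenient, thread dumps can also be pulled remotely over JMX, as an alternative to the tools mentioned in step 1. The sketch below is a minimal example of that approach; the host name and port are placeholders, and ThreadInfo.toString() truncates deep stack traces, so prefer the linked tools when you need full dumps.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RemoteThreadDump {
        public static void main(String[] args) throws Exception {
            // Placeholder JMX endpoint -- substitute each node's host name and JMX port.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://jira-node-1:8099/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                        mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
                // Dump all threads with lock information so blocked HTTP worker threads stand out.
                for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                    System.out.print(info);
                }
            }
        }
    }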

Optimize instance performance

Ensuring your Jira Data Center nodes are properly sized and optimized can help reduce the load on the HTTP connection pool.

Steps to optimize instance performance:

  1. Review node sizing:

    • Ensure each node meets or exceeds the recommended specifications.

    • Consider adding more nodes if your cluster is consistently under high load.

  2. Optimize database performance:

    • Review and optimize database queries.

    • Ensure database connection latency is within the recommended thresholds (a quick latency check is sketched after this list).

  3. Review and optimize scheduled jobs:

    • Identify resource-intensive scheduled jobs.

    • Adjust job schedules to run during off-peak hours when possible.

    • Stagger resource-intensive jobs to avoid overlapping execution.

  4. Optimize application settings:

    • Review and adjust cache settings.

    • Optimize indexing configuration.

For guidance on proper node sizing, see Node sizing in a clustered Jira environment.
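
For step 2, one quick way to sanity-check database round-trip latency from a node is to time a trivial query over JDBC. The sketch below is illustrative only: the JDBC URL, credentials, and driver (PostgreSQL in this example) are placeholders for your own database details, and the driver must be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DbLatencyCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- substitute your Jira database host, name, and credentials.
            String url = "jdbc:postgresql://db-host:5432/jiradb";
            try (Connection conn = DriverManager.getConnection(url, "jira_user", "secret")) {
                int runs = 10;
                long totalNanos = 0;
                for (int i = 0; i < runs; i++) {
                    long start = System.nanoTime();
                    try (Statement st = conn.createStatement()) {
                        st.execute("SELECT 1");   // trivial query to measure round-trip time
                    }
                    totalNanos += System.nanoTime() - start;
                }
                System.out.printf("Average round-trip latency over %d runs: %.2f ms%n",
                        runs, totalNanos / (double) runs / 1_000_000);
            }
        }
    }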

Additional considerations:

  • Enable Java Flight Recorder (JFR) for advanced diagnostics:

    1. Go to Administration > Troubleshooting and Support tools > Diagnostic settings.

    2. Enable Runtime diagnostics under the JFR section.

    3. Increase JFR recording size and duration if needed for longer monitoring periods.

  • Seek additional support through Atlassian support:

    1. Attach Support Zip files from all cluster nodes.

    2. Include Tomcat access logs in the Support Zip (select "Customize zip" and choose "Tomcat access logs").

    3. Generate the Support Zip as soon as possible after observing high thread utilization.
