Configuration requirements for HipChat Data Center deployment

This page provides configuration details for the services you must set up before deploying a HipChat Data Center instance.

These configuration requirements are the same for both small- and Enterprise-scale deployments, so make sure you read this page top to bottom!

But what hardware do I need?!

Looking for the hardware requirements that used to be on this page? They've moved! Check out the Deployment options and sizing guidelines for HipChat Data Center.

Network configuration requirements

  • The HipChat nodes should be deployed on a private network. If you're on AWS, we recommend that you deploy on a dedicated virtual private cloud (VPC). You should only be able to access the nodes through the load balancer, or directly by using SSH.
  • Each HipChat node must have unrestricted network access to the other nodes. 
  • Your private network should only allow inbound connections on port 80 (HTTP), 443 (HTTPS), and 22 (SSH). (See the load balancer configuration details below.)
  • SSL must be terminated at the load balancer.
  • The private network on which you deploy should allow outbound access or have a forward proxy. If your organization uses a DMZ, you can deploy the load balancer there.
  • The HipChat nodes also require access to ports 53 for DNS and 123 for NTP, and may require additional access to enable optional features such as email notifications, Add-ons, HipChat Video, and mobile notifications. You may need to write additional firewall rules to allow this access. See the HipChat node requirements for more information. 
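If you manage host firewalls directly, the outbound access described above can be expressed as firewall rules. A minimal sketch using iptables, assuming your nodes egress directly (no forward proxy); adapt the chains and policies to your own environment:

```shell
# Hedged sketch: outbound rules for a HipChat node (run as root).
# Adjust to your firewall tooling; these chains and policies are assumptions.

# DNS and NTP (TCP and UDP on ports 53 and 123)
iptables -A OUTPUT -p udp --dport 53  -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53  -j ACCEPT
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 123 -j ACCEPT

# HTTPS for mobile notifications, Add-ons, HipChat Video, and analytics
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT

# SMTP for email notifications (only if your SMTP server is outside the network)
iptables -A OUTPUT -p tcp --dport 25  -j ACCEPT
```

If you use a forward proxy or AWS security groups instead, translate these port allowances into the equivalent proxy or security-group rules.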

HipChat node requirements

Configuration requirements

  • Each node must have a static IPv4 address. (This IP should not be accessible by the public internet.)
  • The nodes must be configured in the UTC timezone, and must keep the time synchronized using NTP. 
  • To use NTP, the nodes must be able to access port 123 over TCP/UDP.
  • Each node must have a hostname that is unique among all members of the cluster. 
    For clarity in logs and troubleshooting, the name should also be different from the public DNS entry used by the load balancer.
  • To use DNS, the nodes must be able to access port 53 over TCP/UDP.
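Before installing, it can help to sanity-check the hostname-uniqueness requirement against your inventory. A minimal sketch (the node names below are hypothetical stand-ins for your own):

```shell
# Check that every planned node hostname is unique (names are hypothetical).
nodes="hipchat-node-1
hipchat-node-2
hipchat-node-3"

# uniq -d prints only duplicated entries, so an empty result means all clear.
dupes=$(printf '%s\n' "$nodes" | sort | uniq -d)
if [ -z "$dupes" ]; then
    echo "hostnames unique"
else
    echo "duplicate hostnames: $dupes"
fi
```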

Outgoing TCP ports for optional features

Depending on which HipChat features you choose to enable, you may need to unblock or write firewall rules for the following outbound connections. If you are using a forward proxy, you can use it to access most of these services without writing rules.

  • Email notifications: 25 TCP. Not required if your SMTP server is accessible by the HipChat nodes from inside the network.
  • Native mobile notifications: 443 TCP. You can whitelist the push notification hosts all at once, or each individually, to enable mobile push for iOS and Android devices.
  • HipChat Video: 443 TCP for the server; 443 TCP and 1000 UDP for clients. The HipChat nodes require one-time access to the central Video server to register themselves and create a trust keypair. Clients require additional open access on ports 1000 and 443 to reach the video service.
  • Add-ons: 443 TCP. Used for retrieving Add-on listings from the Atlassian Marketplace. Add-ons may require additional access to function correctly.
  • Analytics reporting ("Phone home"): 443 TCP. We use the statistics reported to these servers to help make HipChat better!

HipChat binaries

If you're deploying on AWS (but are not using the CloudFormation stack), find the AMIs for the regions you'll be deploying to.

If you're deploying on VMWare, download the binary and deploy it to the hosts.

HipChat Data Center version 3.0.1 - Production channel
sha512sum:

Find the binaries for Beta versions of HipChat Data Center over in the HipChat Data Center release notes.
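Whichever channel you download from, verify the image against the published sha512sum before deploying. A sketch of the verification step using a stand-in file (the filename is hypothetical, and here we generate the checksum ourselves purely to demonstrate the workflow):

```shell
# Create a stand-in for the downloaded image (the real filename will differ).
printf 'example image contents' > hipchat-datacenter.img

# In a real deployment you would paste the vendor-published checksum into
# this file; here we compute it ourselves just to demonstrate verification.
sha512sum hipchat-datacenter.img > hipchat-datacenter.img.sha512

# Verify: prints "hipchat-datacenter.img: OK" when the checksum matches.
sha512sum -c hipchat-datacenter.img.sha512
```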

Postgres database requirements

  • Must be Postgres version 9.5.
  • You must use the Postgres default port 5432.
  • Set the database encoding to UTF-8. 
  • Set max_connections to 1000.
  • The instance should be configured in the UTC timezone, and must use NTP to stay synchronized with the HipChat nodes.
  • We recommend that you configure Postgres to be highly available. If you're deploying a highly available (HA) Enterprise-scale HipChat Data Center instance, Postgres must be highly available. In this mode you may wish to deploy multiple database nodes and connect them through a load balancer with a single endpoint that the HipChat nodes can connect to.
  • Record the IP or DNS address of the host, or an endpoint that can be used to access it (such as a dedicated load balancer for the database). 
  • Create a database on the Postgres instance. The database name must:
    • start with a letter
    • be between 8 and 30 characters long
    • only include letters, numbers, dashes, periods, and underscores
  • Create a user to access the database. (Do not use the Postgres SUPERUSER account.) Grant the user ALL privileges on the database you just created.

For clarity, we recommend putting the word 'hipchat' in the database name, and in the name of the user that you create to access it. This will help you if you ever need to troubleshoot.
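The naming rules translate into a simple pattern you can check before creating the database. A minimal sketch (the `valid_dbname` helper is ours, not part of HipChat):

```shell
# Returns success if the name starts with a letter, is 8-30 characters long,
# and uses only letters, numbers, dashes, periods, and underscores.
valid_dbname() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9._-]{7,29}$'
}

valid_dbname "hipchat_postgres" && echo "hipchat_postgres: ok"
valid_dbname "chat" || echo "chat: too short"
```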

ℹ️   Need some help? Here are some example commands:
# create the hipchat_postgres database
sudo -u postgres psql postgres -c "CREATE DATABASE hipchat_postgres;"

# create a dedicated login user (replace $PASSWORD with the desired password;
# CREATE USER grants the LOGIN attribute by default)
sudo -u postgres psql postgres -c "CREATE USER hipchat_user WITH PASSWORD '$PASSWORD';"

# give hipchat_user full access to the database
sudo -u postgres psql postgres -c "GRANT ALL ON DATABASE hipchat_postgres TO hipchat_user;"

Redis cache requirements

  • Deploy a Redis cache. The official Redis documentation covers a basic deployment; you can also install Redis using the package manager for your environment.  
  • If you're deploying a highly available (HA) Enterprise-scale HipChat Data Center instance, Redis must also be configured to be highly available.
  • Use the default Redis port of 6379.
  • Record the address of the instance. 
  • Must be Redis version 3.2.
  • For security purposes, we recommend that you enable authentication, and then change the default password. (Make sure you record these credentials for later.) 
  • Set the following configuration values, which control persistence and connection behavior:

    appendonly no
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-min-size 64mb
    auto-aof-rewrite-percentage 100
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    maxclients 10000
    repl-timeout 60
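One way to apply these values is to keep them in a standalone fragment and pull it into your main redis.conf with Redis's include directive. A sketch, with file paths that are assumptions:

```shell
# Write the required settings into a standalone fragment (path is an assumption).
cat > /tmp/hipchat-redis.conf <<'EOF'
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-min-size 64mb
auto-aof-rewrite-percentage 100
stop-writes-on-bgsave-error yes
rdbcompression yes
maxclients 10000
repl-timeout 60
EOF

# Then reference the fragment from your main redis.conf:
#   include /tmp/hipchat-redis.conf

# Quick sanity check that all nine settings made it into the fragment.
grep -c '^[a-z]' /tmp/hipchat-redis.conf
```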

NFS volume requirements

  • Must be NFSv4.
  • This volume must be accessible anonymously (with read and write permissions) by all of the HipChat nodes. 
  • If you are deploying a Highly Available cluster, ensure that your NFS data store uses HA best practices appropriate to your environment, and as defined by your organization. 
  • If you are using AWS, you can use the AWS Elastic File System (EFS). 
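On each HipChat node the volume is mounted over NFSv4. A sketch of the mount, assuming a server at nfs.example.internal exporting /hipchat and a local mount point of /mnt/hipchat (all three names are placeholders for your own):

```shell
# Create the mount point and mount the export over NFSv4 (run as root;
# the server name, export path, and mount point are assumptions).
mkdir -p /mnt/hipchat
mount -t nfs4 nfs.example.internal:/hipchat /mnt/hipchat

# Equivalent /etc/fstab entry so the mount survives reboots:
# nfs.example.internal:/hipchat  /mnt/hipchat  nfs4  defaults  0  0
```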

Load balancer or Reverse proxy configuration

Load balancer or Reverse proxy?

In both deployment scale options, clients access the HipChat node(s) through a service which terminates SSL. This is required regardless of which deployment type you choose.

In an Enterprise-scale deployment this service is a load balancer which distributes client connections among the HipChat nodes. In a small-scale deployment all connections go to a single HipChat node, so this service can be just a reverse proxy.

Several services (such as NGINX or Apache) can function as both a load balancer and a reverse proxy. If you use one of these, you can quickly scale up your deployment later by adding more HipChat nodes.


You may use any load balancer that meets the following requirements:

  • Must support HTTPS endpoints and SSL offloading; SSL must be terminated here.

  • If you are deploying an Enterprise-scale HipChat Data Center instance (with multiple HipChat nodes), the load balancer must support "cookie based session affinity" (also known as "sticky sessions").
  • Must be configured with a DNS record so clients can access it.
  • Must be configured to forward traffic from port 443 to port 80.
  • Must be configured with an SSL certificate for connections on port 443. 
  • If you are using AWS, you can use the Classic Elastic Load Balancer (ELB).

If you will be using publicly hosted Add-ons (formerly called Integrations), your proxy or load balancer should be accessible from the public internet.


The following reference configuration is for an NGINX load balancer:

Example NGINX configuration
upstream chat {
    keepalive 100;
    # add one entry per HipChat node, for example:
    # server 10.0.0.1:80;
}

server {
    listen                  443 ssl;
    server_name             ;   # set to your load balancer's DNS name
    ssl_certificate         /etc/nginx/certs/cert.crt;
    ssl_certificate_key     /etc/nginx/certs/cert.key;
    ssl_session_cache       builtin:1000 shared:SSL:10m;
    ssl_protocols           TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers             HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        client_max_body_size        80m;
        proxy_http_version          1.1;
        proxy_set_header Connection "";
        proxy_set_header            Host $host;
        proxy_set_header            X-Real-IP $remote_addr;
        proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header            X-Forwarded-Proto $scheme;
        proxy_read_timeout          90;
        proxy_pass                  http://chat;
    }
}
Last modified on Oct 17, 2017