Set up an Enterprise-scale Hipchat Data Center cluster

This guide contains instructions for setting up the component services for an Enterprise-scale instance of Hipchat Data Center using VMware hosts. This type of deployment uses three VMware hosts for the Hipchat nodes, plus a load balancer to route traffic to them, a Postgres instance, a Redis cache, and a shared NFS volume for file storage.

If you're deploying for high availability, you must also deploy highly-available versions of the component services, and that may require additional hosts and infrastructure.



(Diagram: An Enterprise-scale deployment of Hipchat Data Center)

Enterprise-scale deployment architecture notes

An Enterprise-scale deployment can provide stability and improved performance at scale, and when configured for High Availability, can prevent downtime in the event of a service failure. An Enterprise-scale Hipchat Data Center cluster is deployed with three Hipchat nodes, and can be deployed either with or without highly-available data stores.

Cluster sizing

If you're deploying an Enterprise-scale cluster, you must deploy exactly three Hipchat nodes. If you chose this option because of the number of users in your organization, a two-node cluster would give you a false sense of security: if one node of a two-node cluster were to fail, the second would immediately be overwhelmed with traffic and fail as well. If you're deploying an HA cluster, three nodes also allow you to place the nodes in different geographic locations. (The AWS CloudFormation template places each Hipchat node in a different Availability Zone automatically.) Deploying more than three Hipchat nodes is not supported at this time.

Planning for High Availability

In a non-HA Enterprise-scale cluster, you can deploy a single load balancer, Postgres instance, Redis cache, and NFS file store along with the standard three Hipchat Data Center nodes. However, if you want true high-availability, you should also deploy highly-available versions of each of these components. More details about the hardware requirements for HA are available below. You can also refer to the documentation for each component for more configuration details.

Sizing for migration

The sizing guidelines we provide here assume that you're deploying Hipchat Data Center for the first time. If you're migrating from an existing Hipchat Server deployment, you might need to increase the amount of storage for files in the NFS share, and space in the Postgres database, based on the number and size of messages in your exported server data.

Planning for apps, add-ons, and integrations

If your deployment will use apps, add-ons, or integrations which are hosted on the public internet, you must make sure that your cluster allows the Hipchat nodes to access them. You can set up a forward proxy (sometimes called an "outbound" proxy) to allow the Hipchat nodes to reach external apps, and configure your network so that the apps can reach your reverse proxy (or load balancer). 

If you are unable to connect your cluster to the internet for security reasons, some open source add-ons allow you to download their source code, inspect it, and rebuild it to host it on your network as a private service. 

Growing from a Small-scale cluster into an Enterprise-scale cluster

You can add nodes to a small-scale deployment to create an Enterprise-scale deployment which can serve more users. When you do this, you need to make sure that the proxy service you used in your small-scale deployment can also function as a load balancer. The load balancer for the cluster terminates SSL and distributes client connections among the Hipchat nodes, and must support cookie-based session affinity ("sticky sessions"). If your proxy doesn't meet these requirements, you'll have to replace it.


Hardware requirements for an Enterprise-scale Hipchat Data Center deployment

  • The load balancer host(s) can be small: 2 CPU cores at 1GHz (or faster), with 2GB of RAM.
  • The three Hipchat nodes must each have 8 CPU cores at 2.8GHz (or faster), and a minimum of 16GB of unallocated RAM.
  • The Postgres server must have at least 8 CPU cores at 2.25GHz (or faster) with simultaneous multithreading (SMT) capability, 32GB of RAM, and at least 64GB of storage.
  • The Redis server must have 4 CPU cores at 2.25GHz (or faster), at least 8GB of RAM, and 20GB of storage per host.
  • The NFS volume must be NFSv4, and accessible anonymously (with read and write permissions) by all three Hipchat nodes.
    We recommend that you start with at least 40GB of file storage for a new deployment; if you're migrating from an existing Hipchat Server deployment, you might need to increase the amount of storage.

Additional infrastructure for High Availability

If your deployment requires high availability (HA), you need to configure the component services to be highly available too. (It doesn't matter that the cluster is up if your load balancer fails and nobody can get in!) This may require additional infrastructure. The Hipchat nodes must be able to access each component service through a single endpoint for the service.


Highly-available deployments require:

  • A backup load balancer for cluster access.
    This load balancer takes over if the primary load balancer fails. (The failover load balancer must support sticky sessions.)

  • Additional Postgres hosts.
    These serve either as the replica(s) in a failover configuration, or as shards in a multi-master configuration.
    • A load balancer for Postgres, so that Hipchat can access the database cluster through a single endpoint.

  • Additional Redis caches.
    Ideally a total of three Redis caches for a full HA Sentinel cluster. (Sentinel is the Redis feature that coordinates failover between replicas.)
    • A load balancer for Redis, so that Hipchat can access the caches through a single endpoint (for example, using HAProxy).

  • A highly-available solution for your NFS datastore.
    This might be a commercial solution such as a SAN (storage area network). What you choose here may vary based on your environment and the best practices in your organization. 

(Diagram: An Enterprise-scale deployment of Hipchat Data Center with highly-available component services)

Network configuration requirements

  • Hipchat nodes must be deployed on a private network. You should only be able to access the nodes through a reverse proxy or jumpbox.
  • Your private network should only allow inbound connections on ports 80 (HTTP), 443 (HTTPS), and 22 (SSH). (See below for details.)
  • SSL must be terminated at a load balancer or reverse proxy.
  • Hipchat nodes must have unrestricted network access to any other Hipchat nodes in the cluster. 
  • Hipchat nodes must have unrestricted network access to the data stores. 
  • Hipchat nodes also require access to port 53 for DNS and port 123 for NTP, and may require additional access to enable optional features such as email notifications, add-ons, Hipchat Video, and mobile notifications. You might need to write additional DNS-based firewall or proxy rules to allow this access. See Open and outgoing ports below for more information.
  • The private network on which you deploy should allow outbound access or have a forward proxy. If your organization uses a DMZ, you can deploy the reverse proxy there.
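To make these rules concrete, here's a minimal sketch of the inbound rules using ufw on a generic Linux host. This is illustrative only: Hipchat nodes ship as an appliance, and many deployments enforce these rules at the network firewall instead, so adapt the syntax to whatever tooling your organization uses.

    sudo ufw default deny incoming    # block everything inbound by default
    sudo ufw allow 22/tcp             # SSH, ideally restricted to your jumpbox
    sudo ufw allow 80/tcp             # HTTP from the load balancer
    sudo ufw allow 443/tcp            # HTTPS
    sudo ufw enable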

 

Open and outgoing ports 

Depending on which Hipchat features you choose to enable, you may need to unblock or write firewall rules for the following ports. If you are using a forward proxy, you can use it to access most of these services without writing rules.

  • DNS (Required): 53 TCP/UDP. Used for DNS resolution of hostnames and to set up SSL trust.
  • NTP (Network Time Protocol) (Required): 123 TCP/UDP. Keeps the cluster members' clocks synchronized.
  • Email notifications (Optional, recommended): 25 TCP. If your SMTP server is accessible by the Hipchat nodes from inside the network, you shouldn't need to open additional ports.
  • Native mobile notifications (Optional): 443 TCP to barb.hipch.at. You can whitelist all of barb.hipch.at, or barb.hipch.at/android and barb.hipch.at/ios individually, to enable mobile push for iOS and Android devices.
  • Hipchat video (Optional): Server: 443 TCP to video.hipchatserver.com; clients: 10000 UDP and 443 TCP to hipchat.me. The Hipchat nodes require one-time access to the central Video server to register themselves and create a trust keypair. Clients require access on port 10000 UDP (with 443 TCP as a fallback) to reach the video service.
  • Add-ons (Optional): 443 TCP to marketplace.atlassian.com and marketplace-cdn.atlassian.com. Used to retrieve add-on listings from the Atlassian Marketplace. Add-ons may require additional access to function correctly.
  • Analytics reporting ("Phone home") (Optional, recommended, please!): 443 TCP to https://hipchat-server-stable.s3.amazonaws.com/ohaibtf.html. We use the statistics reported to these servers to help make Hipchat better!

Load balancer configuration

You may use any load balancer that meets the following requirements:

  • The load balancer must support "cookie based session affinity" (also known as "sticky sessions"). 
    This feature makes sure all requests from the same user are sent to the same Hipchat node as long as that node is still available.

  • Must be configured for HTTP persistent connections (also called http-keepalive).

  • Must support HTTPS endpoints and SSL offloading.
    The load balancer terminates SSL for the cluster, so that all communication within the cluster is unencrypted.

  • Must be configured to forward traffic from port 443 to port 80.
  • Must be configured with a DNS record to establish SSL trust, and so the chat clients can access it.
  • Must be configured with an SSL certificate for connections on port 443. 

If you will be using publicly hosted Apps, add-ons, or integrations, the service should be accessible from the public internet so that these external services can send responses back to the cluster.

For best performance, you might want to use the HAProxy sample configuration knowledge base article to tune your load balancer configuration.
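If you do use HAProxy, a minimal sketch of a configuration that meets the requirements above might look like the following. The node IPs, backend names, and certificate path are hypothetical; see the knowledge base article for production tuning values.

    frontend hipchat_front
        bind *:443 ssl crt /etc/haproxy/certs/hipchat.pem   # terminate SSL at the load balancer
        mode http
        option http-keep-alive                              # HTTP persistent connections
        default_backend hipchat_nodes

    backend hipchat_nodes
        mode http
        balance roundrobin
        cookie HC_NODE insert indirect nocache              # cookie-based session affinity
        server node1 10.0.0.100:80 check cookie node1       # forward 443 traffic to port 80
        server node2 10.0.0.101:80 check cookie node2
        server node3 10.0.0.102:80 check cookie node3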

Hipchat node configuration

For an Enterprise deployment of Hipchat Data Center, all Hipchat nodes must be of the same software version, and must be located in the same geographic region.

Then, for each Hipchat node in the cluster:

  • The node must have a static IPv4 address. (This IP should not be accessible by the public internet.) See the official Ubuntu network configuration documentation for more information.
  • The node must be configured in the UTC timezone, and must keep the time synchronized using NTP. (This requires access to port 123 over TCP/UDP.)
  • The node must have a hostname that is unique among all members of the cluster. (This requires access to port 53 over TCP/UDP.)


    Hostname hints

    For clarity in logs and troubleshooting, the host name should also be different from the public DNS entry used to access the cluster. (We know it's tempting to just call it hipchat, but resist!)

    You can set the hostname on a Hipchat node by logging in with SSH and using the following command:

    sudo dont-blame-hipchat -c "hostnamectl set-hostname hipchat1.example.com"

    If your DNS server does not resolve the hostname you set, you can also add an entry similar to the one below to the /etc/hosts file on each node.

    127.0.0.1   hipchat1.example.com

You can find the binaries for both the Production and Beta versions of Hipchat Data Center in the Hipchat Data Center release notes.

The latest production binary is:

Hipchat Data Center version 3.1.8
sha512sum: 68d77ba20ca2c3f4866173728bb23a750d687cd5178bebfb2ad1df247111a5c31716260d2e67aaa05823dad4ea417f4dfd3e831fee0a59ccfae26afb922b1335
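After you download the OVA (see the deployment steps below), you can confirm the file is intact by comparing the published value against the output of sha512sum:

    $ sha512sum HipChat.ova
    68d77ba20ca2c3f4866173728bb23a750d687cd5178bebfb2ad1df247111a5c31716260d2e67aaa05823dad4ea417f4dfd3e831fee0a59ccfae26afb922b1335  HipChat.ova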

NFS volume requirements

  • Must be NFS v4.
  • Must be a minimum of 40GB.
  • This volume must be accessible anonymously (with read and write permissions) by the Hipchat nodes.
  • Root squashing must be disabled.
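As an illustration, an export entry like the one below on a Linux NFS server would satisfy these requirements. The export path, subnet, and server name are hypothetical; substitute your own, and consult your storage vendor's documentation if you're not using a plain Linux server.

    # /etc/exports on the NFS server (hypothetical path and subnet)
    /srv/hipchat/files  10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)

    # reload the export table on the server:
    sudo exportfs -ra

    # test-mount from a Hipchat node (hypothetical server name):
    sudo mount -t nfs4 nfs.example.com:/srv/hipchat/files /mnt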

If you're deploying a Highly Available cluster...

Make sure your NFS data store uses HA best practices appropriate to your environment, and as defined by your organization. 

Redis cache requirements

  • Deploy a Redis cache using version 3.2.
    See the official Redis documentation for a basic deployment; you may also use a package manager for your environment.
  • Use the default Redis port of 6379.
  • Record the address of the instance. 
  • For security purposes, we recommend that you enable authentication, and then change the default password. (Make sure you record these credentials for later.) 
  • Set the following configuration values (these control persistence and connection behavior):

    appendonly no
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-min-size 64mb
    auto-aof-rewrite-percentage 100
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    maxclients 10000
    repl-timeout 60
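Once the cache is up, you can confirm connectivity and that your settings took effect by querying it from one of the Hipchat nodes with redis-cli. The hostname and password below are placeholders:

    $ redis-cli -h redis.example.com -p 6379 -a 'your-redis-password' PING
    PONG
    $ redis-cli -h redis.example.com -p 6379 -a 'your-redis-password' CONFIG GET maxclients
    1) "maxclients"
    2) "10000"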

If you're deploying a Highly Available cluster...

Redis must also be configured to be highly available, and use an additional load balancer to direct connections. This may require additional hardware or virtual machines.
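As a rough sketch of what a Sentinel deployment involves: each of the three Redis hosts runs a sentinel process that monitors a named master and agrees with its peers before triggering a failover. The master address, quorum, and timeouts below are illustrative; see the official Redis Sentinel documentation for guidance.

    # sentinel.conf on each of the three Redis hosts
    port 26379
    sentinel monitor hipchat-redis 10.0.0.20 6379 2    # master address, port, and a quorum of 2
    sentinel down-after-milliseconds hipchat-redis 5000
    sentinel failover-timeout hipchat-redis 60000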

Postgres database requirements

  • Deploy a Postgres instance using version 9.5, following the official Postgres installation guides.
  • The instance should be configured in the UTC timezone, and must use NTP to stay synchronized with the rest of the cluster.
  • Record the IP or DNS address of the host, or an endpoint that can be used to access it (such as a dedicated load balancer for the database). 

  • Set the database to use UTF-8. 
  • Set max_connections to 1000.
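In postgresql.conf terms, the timezone and connection requirements translate to settings like the ones below (the file's location varies by platform and packaging). UTF-8 is typically set when the cluster is initialized (for example, initdb --encoding=UTF8) or per-database with CREATE DATABASE ... ENCODING 'UTF8'.

    # postgresql.conf
    max_connections = 1000
    timezone = 'UTC'
    log_timezone = 'UTC'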

If you're deploying a Highly Available cluster...

Postgres must also be configured to be highly available. If you deploy multiple database nodes in this mode, you should connect them using a load balancer so that there is a single endpoint that the Hipchat nodes can connect to.

Once the Postgres instance is running, create a database and service user:

  1. Create a database on the Postgres instance. The database name must:
    • start with a letter
    • be between 8 and 30 characters
    • only include letters, numbers, dashes, periods, and underscores
  2. Create a user to access the database. (Do not use the Postgres SUPERUSER account.) 
  3. Make sure that the new user has been granted ALL privileges on the database you just created.

For clarity, we recommend putting the word 'hipchat' in the database name, and in the name of the user that you create to access it. This will help you if you ever need to troubleshoot.

Need some help? Here are some example commands:
# create hipchat_postgres database
sudo -u postgres psql postgres -c "CREATE DATABASE hipchat_postgres"

# create dedicated user, set $PASSWORD (replace the variable with the desired password)
sudo -u postgres psql postgres -c "CREATE USER hipchat_user"
sudo -u postgres psql postgres -c "ALTER USER hipchat_user PASSWORD '$PASSWORD';"

# give hipchat_user access to database
sudo -u postgres psql postgres -c "ALTER ROLE hipchat_user WITH LOGIN;"
sudo -u postgres psql postgres -c "GRANT ALL ON DATABASE hipchat_postgres TO hipchat_user;"

Set up the Hipchat VMs

In this step you'll create the virtual machines, deploy the Hipchat node OVA, and configure it. Repeat these steps for each of the three nodes. 

Download and deploy the Hipchat OVA

  1. Create the VM on your VMware host. This process may vary depending on your VMware environment and your organization's standard procedures. 
  2. Load the OVA file to create the new VM.
    If you're deploying to a VMware instance that has access to the internet, you can sometimes download the binary directly using the URL in the release notes.
    Otherwise, you can download the file locally, and then transfer it to the VMware host to deploy the VM. Links to the binaries for both the Production and Beta versions are in the Hipchat Data Center release notes.


    If you're having trouble downloading the Hipchat OVA file using your browser, you can use a command like curl or wget.

    $ curl https://s3.amazonaws.com/hipchat-server-stable/dc/HipChat.ova --output HipChat.ova
    $ wget https://s3.amazonaws.com/hipchat-server-stable/dc/HipChat.ova

Configure the Hipchat VM network

Once you've deployed the VM, you'll configure networking.

From the VM's console:

  1. Edit the /etc/network/interfaces file. (You can learn more about this file in the official Ubuntu 14.04 network configuration documentation.)
    You can use the following command to edit it in vim, but you can use another text editor if you prefer.

    sudo vim /etc/network/interfaces


  2. Change the iface eth0 inet line from dhcp to static.


    If you're using vim, you'll need to press i to get into "insert" mode which allows you to type.

  3. Add a line for address, and set its value to the IP address you allocated for your Hipchat node. (If there was already an address line, replace the existing/default IP with the one you'll be using.)

  4. If they're not already included, make sure you have lines for netmask and gateway. (These are usually the same for any server on your network.) 
    When you're done, your file should look something like this:

    auto eth0                  # turn on automatically
    iface eth0 inet static     # set this to static
    address 10.0.0.100         # the IP address you allocated for this Hipchat node
    netmask 255.255.255.0      # network mask 
    gateway 10.0.0.1           # outgoing traffic 
  5. Save the file.


    If you're using vim, you'll need to press Esc to exit insert mode, then type :x and press Enter to save the file and quit.

  6. Next run the following two commands to restart networking and apply your changes.

    sudo ifdown eth0
    sudo ifup eth0
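A couple of quick checks will confirm the new configuration is active (the gateway address here matches the example file above):

    ip addr show eth0    # the static address you set should be listed
    ping -c 3 10.0.0.1   # the default gateway should respond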

Set the node hostname

The node must have a hostname that is unique among all members of the cluster. (We know it's tempting to just call it hipchat, but resist!)

The Hipchat node's name should also be different from the public DNS entry used to access the cluster. (The node requires access to port 53 over TCP/UDP to resolve DNS.) Set the hostname by logging in with SSH and running the following command:

sudo dont-blame-hipchat -c "hostnamectl set-hostname hipchat-worker-1.example.com"

If your DNS server does not resolve the hostname you set, you can also add an entry similar to the one below to the /etc/hosts file on each node.

127.0.0.1   hipchat-worker-1.example.com

Optional - Set custom NTP servers

The Hipchat node must be configured in the UTC timezone, and must keep the time synchronized using NTP. (This requires access to port 123 over TCP/UDP.)

If your deployment has access to the internet, you can skip this configuration step and use the default Atlassian NTP servers provided. (These are 0.atlassian.pool.ntp.org and 1.atlassian.pool.ntp.org.)

If your environment does not have access to the internet, or if your organization runs its own NTP servers and you wish to use them, you should configure your Hipchat node to use an NTP service inside your network.

To change the NTP servers, use the following command, replacing time1.example.com,time2.example.com with a comma-separated list of your own NTP servers.

hipchat service --ntp-servers time1.example.com,time2.example.com 
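Afterwards, you can spot-check that the node's clock is on UTC and syncing. These are generic Linux commands, assuming the classic ntpd tooling is present on the node:

    date -u     # should match UTC wall-clock time
    ntpq -p     # lists the NTP peers the node is syncing against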

What's next? - The Hipchat Data Center deployment process

Once you've deployed the component services for your Hipchat Data Center cluster, you'll follow the instructions at Configure Hipchat Data Center nodes, then Configure optional Hipchat Data Center features, and Verify and troubleshoot your Hipchat Data Center deployment to complete the process. Then allow users to get online and get chatting!
