Running Confluence Data Center in AWS

Confluence Data Center is an excellent fit for the Amazon Web Services (AWS) environment. Not only does AWS allow you to scale your deployment elastically by resizing and quickly launching additional nodes, it also provides a number of managed services that work out of the box with Confluence Data Center instances and handle all their configuration and maintenance automatically.

Interested in learning more about the benefits of Data Center? Check out our overview of Confluence Data Center.


Deploying Confluence Data Center using the AWS Quick Start

The simplest way to deploy your entire Data Center cluster in AWS is by using the Quick Start. The Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

The Quick Start provides two deployment options, each with its own template. The first option deploys the Atlassian Standard Infrastructure (ASI) and then provisions Confluence Data Center into this ASI. The second option only provisions Confluence Data Center on an existing ASI.

Atlassian Standard Infrastructure (ASI)

The ASI is a virtual private cloud (VPC) that contains the components required by all Atlassian Data Center applications. For more information, see Atlassian Standard Infrastructure (ASI) on AWS.
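The Quick Start templates are normally launched through the AWS console, but a stack can also be created from the command line. The sketch below is illustrative only: the template URL and parameter keys are placeholders, not the Quick Start's actual values, so check the template you deploy for its real parameter names.

```shell
# Launch a CloudFormation stack from a Quick Start template.
# NOTE: the template URL and parameter keys below are placeholders --
# use the values from the actual template you are deploying.
aws cloudformation create-stack \
  --stack-name confluence-dc \
  --template-url "https://example-bucket.s3.amazonaws.com/quickstart-confluence/ConfluenceDataCenter.template" \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=KeyName,ParameterValue=my-ec2-keypair \
    ParameterKey=DBPassword,ParameterValue='choose-a-strong-password'

# Block until the stack reports CREATE_COMPLETE (or fail on rollback).
aws cloudformation wait stack-create-complete --stack-name confluence-dc
```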


Here's an overview of the architecture for the Confluence Data Center Quick Start:

The deployment consists of the following components:

  • One or more Amazon Elastic Compute Cloud (EC2) instances as cluster nodes, running Confluence, in an auto scaling group.
  • One or more Amazon Elastic Compute Cloud (EC2) instances as cluster nodes, running Synchrony (which is required for collaborative editing), in an auto scaling group.
  • An Application Load Balancer (ALB), acting as both load balancer and SSL-terminating reverse proxy.
  • An Amazon Elastic File System (EFS) for the shared home directory, which contains attachments and other files accessible to all Confluence nodes.
  • An Amazon Relational Database Service (RDS) PostgreSQL instance as the shared database.

For more information on the architecture, components, and deployment process, see our Quick Start Guide.

Confluence will use the Java Runtime Environment (JRE) that is bundled with Confluence (/opt/atlassian/confluence/jre/), not the JRE installed on the EC2 instances (/usr/lib/jvm/jre/).
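You can verify this on a node by invoking the bundled JRE directly:

```shell
# Print the version of the JRE bundled with Confluence, which is the one
# Confluence actually runs on (not the system JRE under /usr/lib/jvm/jre/).
/opt/atlassian/confluence/jre/bin/java -version
```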


Use the Quick Start as is or develop your own

To get you up and running as quickly as possible, the Quick Start doesn't allow the same level of customization as a manual installation. You can use our templates either as is, or as a reference for creating your own template. 

Amazon Aurora database for high availability

The Quick Start also allows you to deploy Confluence Data Center with an Amazon Aurora clustered database. This cluster is PostgreSQL-compatible, with a primary database writer that replicates to two database readers in different availability zones.


If the writer fails, Aurora automatically promotes one of the readers to take its place. For more information, see Amazon Aurora Features: PostgreSQL-Compatible Edition.

If you want to set up an existing Confluence Data Center instance with Amazon Aurora, you’ll need to perform some extra steps. See Configuring Confluence Data Center to work with Amazon Aurora for detailed instructions.

EC2 sizing recommendations

The Quick Start uses c3.xlarge instances by default for Confluence and Synchrony nodes. The instance type is up to you, but it must meet Confluence's system requirements.  Smaller instance types (micro, small, medium) are generally not adequate for running Confluence.

Supported AWS regions

Not all regions offer the services required to run Confluence.  You'll need to choose a region that supports Amazon Elastic File System (EFS). You can currently deploy Confluence using the Quick Start in the following regions:  

  • Americas
    • Northern Virginia
    • Ohio
    • Oregon
    • Northern California
    • Montreal
  • Europe/Middle East/Africa
    • Ireland
    • Frankfurt
    • London
    • Paris
  • Asia Pacific
    • Singapore
    • Tokyo
    • Sydney
    • Seoul
    • Mumbai


The services offered in each region change from time to time. If your preferred region isn't on this list, check the Regional Product Services table in the AWS documentation to see if it already supports EFS. 
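One way to check EFS availability from the command line is AWS's public SSM parameter store, which lists the regions each service is offered in (this queries live AWS data, so the results change over time):

```shell
# List the regions where EFS is available, using AWS's public
# global-infrastructure parameters in Systems Manager.
aws ssm get-parameters-by-path \
  --path /aws/service/global-infrastructure/services/efs/regions \
  --query "Parameters[].Value" \
  --output text
```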

The Paris region in Europe/Middle East/Africa also supports EFS. However, our Quick Start uses the db.m4 instance class, which isn't available yet in this region. We will be updating our templates soon to support the db.m5 instance class, which will then allow you to use our Quick Start in the Paris region.

If you are deploying Confluence 6.3.1 or earlier...

There is an additional dependency for Confluence versions earlier than 6.3.2. Synchrony (which is required for collaborative editing) uses a third-party library to interact with the Amazon API, and the correct endpoints are not available in all regions. This means you can't run Synchrony in the following regions:

  • US East (Ohio)
  • EU (London) 1
  • Asia Pacific (Mumbai) 1
  • Asia Pacific (Seoul) 1
  • Canada (Central) 1

1 At the time of writing, these regions did not yet support EFS, so they could not be used to run Confluence either.


Internal domain name routing with Route53 Private Hosted Zones

Even if your Confluence site is hosted on AWS, you can still link its DNS with an internal, on-premises DNS server (if you have one). You can do this through Amazon Route 53, creating a link between the public DNS and internal DNS. This will make it easier to access your infrastructure resources (database, shared home, and the like) through friendly domain names. You can make those domain names accessible externally or internally, depending on your DNS preferences.

Step 1: Create a new hosted zone

Create a Private hosted zone in Services > Route 53. The Domain Name is your preferred domain. For the VPC, use the existing Atlassian Standard Infrastructure.
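As a sketch, the same private hosted zone can be created with the AWS CLI; the domain name, region, and VPC ID below are placeholders for your own values:

```shell
# Create a private hosted zone associated with the ASI's VPC.
# my.hostedzone.com, us-east-1, and vpc-0abc123 are placeholders.
aws route53 create-hosted-zone \
  --name my.hostedzone.com \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc123 \
  --caller-reference "confluence-private-zone-$(date +%s)" \
  --hosted-zone-config Comment="Private zone for Confluence DC",PrivateZone=true
```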

Step 2: Configure your stack to use the hosted zone

Use your deployment’s Quick Start template to point your stack to the hosted zone from Step 1. If you’re setting up Confluence for the first time, follow the Quick Start template as below:

  1. Under DNS (Optional), enter the name of your hosted zone in the Route 53 Hosted Zone field.

  2. Enter your preferred sub-domain in the Sub-domain for Hosted Zone field. If you leave it blank, we'll use your stack name as the sub-domain.

  3. Follow the prompts to deploy the stack.

If you already have an existing Confluence site, you can also configure your stack through the Quick Start template. To access this template:

  1. Go to Services > CloudFormation in the AWS console.

  2. Select the stack, and click Update Stack.

  3. Under DNS (Optional), enter the name of your hosted zone in the Route 53 Hosted Zone field.

  4. Enter your preferred sub-domain in the Sub-domain for Hosted Zone field. If you leave it blank, we'll use your stack name as the sub-domain.

  5. Follow the prompts to update the stack.

In either case, AWS will generate URLs and Route 53 records for the load balancer, EFS, and database. For example, if your hosted zone is my.hostedzone.com and your stack is named mystack, you can access the database through the URL mystack.db.my.hostedzone.com.

Step 3: Link your DNS server to the Confluence site’s VPC

If you use a DNS server outside of AWS, then you need to link it to your deployment’s VPC (in this case, the Atlassian Standard Infrastructure). This means your DNS server should use Route 53 to resolve all queries to the hosted zone’s preferred domain (in Step 1).

For instructions on how to set this up, see Resolving DNS Queries Between VPCs and Your Network.

If you want to deploy an internal-facing Confluence site using your own DNS server, you can use Amazon Route 53 to create a link between the public DNS and internal DNS.

  1. In Route 53, create a Private hosted zone. For the VPC, you can use the existing Atlassian Services VPC. The domain name is your preferred domain.
  2. If you've already set up Confluence, go to Services > CloudFormation in the AWS console, select the stack, and click Update Stack. (If you're setting up Confluence for the first time, follow the Quick Start template as below). 
  3. Under Other Parameters, enter the name of your hosted zone in the Route 53 Hosted Zone field. 
  4. Enter your preferred sub-domain or leave the Sub-domain for Hosted Zone field blank and we'll use your stack name as the sub-domain.
  5. Follow the prompts to update the stack. We'll then generate the load balancer and EFS url, and create a record in Route 53 for each. 
  6. In Confluence, go to General Configuration and update the Confluence base URL to your Route 53 domain. 
  7. Set up DNS resolution between your on-premises network and the VPC with the private hosted zone. You can do this with:
    1. an Active Directory (either Amazon Directory Service or Microsoft Active Directory)
    2. a DNS forwarder on EC2 using bind9 or Unbound.
  8. Finally, terminate and re-provision each Confluence and Synchrony node to pick up the changes.

For related information on configuring Confluence's base URL, see Configuring the Server Base URL.


Scaling up and down

To increase or decrease the number of Confluence or Synchrony cluster nodes:

  1. Go to Services > CloudFormation in the AWS console, select the stack, and click Update Stack.
  2. Change the Minimum number of cluster nodes and Maximum number of cluster nodes parameters as desired.

It may take several minutes for the Auto Scaling Group to detect and apply changes to these parameters.
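Under the assumption that your stack exposes the node counts as parameters, the same change can be scripted. The parameter keys below are illustrative, not the template's actual names:

```shell
# Scale the cluster by updating the stack's node-count parameters.
# ClusterNodeMin / ClusterNodeMax are illustrative keys -- check your
# template for the actual parameter names, and pass
# ParameterKey=<name>,UsePreviousValue=true for every other parameter.
aws cloudformation update-stack \
  --stack-name confluence-dc \
  --use-previous-template \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=ClusterNodeMin,ParameterValue=2 \
    ParameterKey=ClusterNodeMax,ParameterValue=4 \
    ParameterKey=KeyName,UsePreviousValue=true
```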

Unless you specify the same number for Minimum and Maximum number of cluster nodes, the Auto Scaling Group will launch new cluster nodes and terminate existing ones automatically to achieve the optimal desired number of nodes between these two limits. By default, this target number is determined by the following CloudWatch metrics:

  • If the average CPU utilization across the Auto Scaling Group exceeds 60% for 5 minutes, the target number of nodes increases by one (up to the Maximum).
  • If the average CPU utilization across the Auto Scaling Group is lower than 40% for 30 minutes, the target number of nodes decreases by one (down to the Minimum).

A default "cooldown" period of 10 minutes between scaling events is also applied. See Scaling Based on Metrics for more information. 

Note: Adding new cluster nodes, especially automatically in response to load spikes, is a great way to increase the capacity of a cluster temporarily. Beyond a certain point, however, adding large numbers of cluster nodes brings diminishing returns. In general, increasing the size of each node ("vertical" scaling) sustains more capacity than increasing the number of nodes ("horizontal" scaling), especially if the nodes themselves are small.

See the AWS documentation for more information on auto scaling groups. 

Connecting to your nodes over SSH

It is possible to SSH to your cluster nodes and file server to perform configuration or maintenance tasks. Note that you must keep your SSH private key file (the PEM file you downloaded from Amazon and specified as the Key Name parameter) in a safe place. This is the key to all the nodes in your instance, and if you lose it you may find yourself locked out. 

Note: the ConfluenceDataCenter.template deploys all EC2 instances in the subnets specified by the Internal subnets parameter. If the Internal subnets you specified are completely unreachable from outside, you may need to launch an EC2 instance with SSH running and accessible in one of the External subnets, and use it as a "jump box" to reach instances in your Internal subnets. That is, you SSH first to your jump box, and from there to any instance deployed in the Internal subnets.

When connecting to your instance over SSH, use ec2-user as the user name, for example:

ssh -i keyfile.pem ec2-user@ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com

The ec2-user has sudo access. SSH access by root is not allowed.
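If your nodes sit in internal subnets, the jump-box hop described above can be done in a single command with OpenSSH's ProxyJump option. The hostnames and address below are placeholders for your own hosts:

```shell
# SSH to an internal node via a jump box in an external subnet.
# jump-box.example.com and 10.0.3.15 are placeholders for your hosts.
ssh -i keyfile.pem -J ec2-user@jump-box.example.com ec2-user@10.0.3.15
```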

Upgrading

Consider upgrading to an Atlassian Enterprise release (if you're not on one already). Enterprise releases get fixes for critical bugs and security issues throughout their two-year support window. This gives you the option of a slower upgrade cadence without sacrificing security or stability. Enterprise releases are suitable for companies that can't keep up with the frequency at which we ship feature releases.

Here's some useful advice for upgrading your deployment:

  1. Before upgrading to a later version of Confluence Data Center, check if your apps are compatible with that version. Update your apps if needed. For more information about managing apps, see Using the Universal Plugin Manager.
  2. If you need to keep Confluence Data Center running during your upgrade, we recommend using read-only mode for site maintenance. Your users will be able to view pages, but not create or change them. 
  3. We strongly recommend that you perform the upgrade in a staging environment before upgrading your production instance. See Create a staging environment for upgrading Confluence for helpful tips on doing so.

Upgrading Confluence in AWS

To upgrade a Confluence Data Center instance launched from ConfluenceDataCenter.template:

  1. In the AWS console, select Update Stack.
  2. Change the size of the Confluence and Synchrony auto scaling groups (maximum and minimum) to 0. This will terminate all running nodes. 
  3. Once the update is complete, check that all EC2 nodes have been terminated. 
  4. In the AWS console, select Update Stack.
  5. Change the Confluence Version to the version you want to upgrade to.
  6. Change the size of the Confluence and Synchrony auto scaling groups (maximum and minimum) to 1. Do not add more than one node until after the upgrade is complete.
  7. Access Confluence in your browser.  Any upgrade tasks will run at this point. 
  8. Confirm that Confluence and Synchrony are both running successfully, and that you are running the new version (check the footer). 
  9. In the AWS console, select Update Stack.
  10. Change the maximum Confluence nodes and Maximum Synchrony nodes to your usual auto scaling group size. 
  11. Confirm that your new nodes have joined the cluster. 

Confluence Data Center in AWS currently doesn't allow upgrading an instance without some downtime between the last cluster node of the old version shutting down and the first cluster node on the new version starting up.

You must make sure all existing nodes are terminated before launching new nodes on the new version. 
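One way to confirm that no old nodes are still running before scaling back up is to filter EC2 instances by the stack's CloudFormation tag. The stack name here is a placeholder:

```shell
# List any non-terminated EC2 instances belonging to the stack.
# Replace confluence-dc with your actual stack name.
aws ec2 describe-instances \
  --filters "Name=tag:aws:cloudformation:stack-name,Values=confluence-dc" \
            "Name=instance-state-name,Values=pending,running,stopping,stopped" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text
# An empty result means all nodes have been terminated.
```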

Backing up

We recommend you use the AWS native backup facility, which uses snapshots to back up your Confluence Data Center. For more information, see AWS Backup.
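As a sketch, an on-demand AWS Backup job for the shared-home EFS file system can be started from the CLI. The vault name, file system ID, account ID, and IAM role below are all placeholders:

```shell
# Start an on-demand backup of an EFS file system with AWS Backup.
# The vault name, resource ARN, and IAM role are placeholders --
# substitute the values from your own account.
aws backup start-backup-job \
  --backup-vault-name confluence-backup-vault \
  --resource-arn arn:aws:elasticfilesystem:us-east-1:111111111111:file-system/fs-0abc123 \
  --iam-role-arn arn:aws:iam::111111111111:role/service-role/AWSBackupDefaultServiceRole
```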

Migrating your existing Confluence site to AWS

After deploying Confluence on AWS, you might want to migrate your old deployment to it. To do so:

  1. Upgrade your existing site to the version you have deployed to AWS (Confluence 6.1 or later).
  2. (Optional) If your old database isn't PostgreSQL, you'll need to migrate it. See Migrating to Another Database for instructions. 
  3. Back up your PostgreSQL database and your existing <shared-home>/attachments directory.
  4. Copy your backup files to /media/atl/confluence/shared-home in your EC2 instance.  
  5. Restore your PostgreSQL database dump to your RDS instance with pg_restore.
    See Importing Data into PostgreSQL on Amazon RDS in Amazon documentation for more information on how to do this.   
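A minimal sketch of the dump-and-restore step, assuming a custom-format dump and the default database name confluence; the hostnames and database user are placeholders:

```shell
# Dump the old database in custom format (required by pg_restore).
# old-db-host and confluenceuser are placeholders for your own values.
pg_dump -Fc -U confluenceuser -h old-db-host confluence > confluence.dump

# Restore into the RDS instance, keeping the database name 'confluence'.
# --no-owner avoids errors when the original role doesn't exist on RDS.
pg_restore --no-owner -U confluenceuser \
  -h mystack.db.my.hostedzone.com -d confluence confluence.dump
```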

Important notes:

  • When you create a cluster using the CloudFormation template, the database name is confluence. You must maintain this database name when you restore, or there will be problems when new nodes are provisioned.  You will need to drop the new database and replace it with your backup. 
  • You don't need to copy indexes or anything from your existing local home or installation directories, just the attachments from your existing shared home directory.  
  • If you've modified the <shared-home>/config/cache-settings-overrides.properties file you may want to reapply your changes in your new environment.  
  • The \copy method described in this AWS page, Importing Data into PostgreSQL on Amazon RDS, is not suitable for migrating Confluence.

Last modified on May 10, 2019
