Upgrading Confluence Data Center

This page contains instructions for upgrading an existing Confluence cluster.

If you are not yet running a clustered instance of Confluence, see Moving to Confluence Data Center.

In this guide we'll use the following terminology:

  • installation directory - this is the directory where you installed Confluence on each node.
  • local home directory - this is the home or data directory on each node (in non-clustered Confluence this is simply known as the home directory).
  • shared home directory - this is a directory that is accessible to all nodes in the cluster via the same path. If you're upgrading from Confluence 5.4 or earlier you'll create this directory as part of the upgrade.  
  • Synchrony directory - this is the directory where you downloaded Synchrony (this can be on a Confluence node, or on its own node).

Step 1 Back up

We strongly recommend that you back up your Confluence home and installation directories, and your database, before proceeding.

More information on the specific files and directories to back up can be found in Upgrading Confluence.

Step 2 Stop the cluster

You must stop all the nodes in the cluster before upgrading. 

We recommend configuring your load balancer to redirect traffic away from Confluence until the upgrade is complete on all nodes.

Step 3 Create a shared home directory

(warning) If you are upgrading an existing Confluence Data Center instance (Confluence 5.6 or later), you can skip this step, as you already have a shared home directory.

To set up your shared home directory:

  1. Create a directory that is accessible to all cluster nodes via the same path. This will be your shared home directory. 
  2. Edit confluence.cfg.xml in the home directory on the first node and add a new property called confluence.cluster.home with the path of the shared home directory as the value. Example:

    <property name="confluence.cluster.home">/mnt/confluence-shared-home</property>
  3. Move all the files/directories from the local home directory on the first node to the new shared home directory, except for the following (example commands for this move appear after this list): 

    • config
    • confluence.cfg.xml
    • index
    • temp
    • bundled-plugins
    • plugin-cache-*
    • plugins-cache
    • plugins-osgi-cache
    • plugins-temp

     Remove the moved files/directories from the local home directories on all other nodes.
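
If your nodes run Linux, here's a minimal sketch of this move (the local home and shared home paths are illustrative; adjust them for your environment):

    # Run on the first node. Keeps the directories listed above in the
    # local home and moves everything else to the shared home.
    cd /var/confluence/local-home
    for item in *; do
        case "$item" in
            config|confluence.cfg.xml|index|temp|bundled-plugins|plugin-cache-*|plugins-cache|plugins-osgi-cache|plugins-temp)
                ;;  # keep in the local home
            *)
                mv "$item" /mnt/confluence-shared-home/
                ;;
        esac
    done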

Step 4 Upgrade the first node

To upgrade the first node:

  1. Extract (unzip) the Confluence files to a directory (this will be your new installation directory, and it must be different from your existing installation directory).
  2. Update the confluence.home line in the <Installation-Directory>\confluence\WEB-INF\classes\confluence-init.properties file to point to the existing local home directory on that node.
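
    For example, if the local home directory on this node is /var/confluence/local-home (an illustrative path), the line would read:

    confluence.home=/var/confluence/local-home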
  3. Copy the JDBC driver jar file from your existing Confluence installation directory to confluence/WEB-INF/lib in your new installation directory. 
    The JDBC driver will be located in either the <Installation-Directory>/common/lib or <Installation-Directory>/confluence/WEB-INF/lib directory. 
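
    For example, on Linux with a PostgreSQL driver (the jar file name is illustrative; use your actual driver file):

    cp <old-installation-directory>/confluence/WEB-INF/lib/postgresql-9.2-1002.jdbc.jar <new-installation-directory>/confluence/WEB-INF/lib/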
  4. Copy any other immediately required customizations from the old version to the new one. For example, if you are not running Confluence on the default ports, or if you manage users externally, you'll need to update or copy the relevant files - find out more in Upgrading Confluence Manually.
  5. Start Confluence, and confirm that you can log in and view pages before continuing to the next step. Don't try to edit pages at this point. 

You should now reapply any additional customizations from the old version to the new version, before upgrading the remaining nodes.

Step 5 Set up Synchrony

(warning) This step is only required the first time you upgrade to Confluence Data Center 6.0.

In this example, we assume you'll run Synchrony in its own cluster. When configuring your cluster nodes you can either supply the IP address of each Synchrony cluster node, or a multicast address.

  1. Create a Synchrony directory on your first Synchrony node and copy synchrony-standalone.jar from your Confluence <home-directory> to this directory. 
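
    For example, on Linux (the Synchrony directory path is illustrative):

    mkdir -p /opt/synchrony
    cp <confluence-home-directory>/synchrony-standalone.jar /opt/synchrony/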
  2. Copy your database driver from your Confluence <install-directory>/confluence/WEB-INF/lib to an appropriate location on your Synchrony node.
  3. Change to your Synchrony directory and start Synchrony using the following command.
    You need to pass all of the system properties listed, replacing the values where indicated.

    In a terminal / command prompt, execute the following command, replacing <values> with appropriate values for your environment. Scroll down for more information on each of the values you will need to replace.
     

    java 
    -Xss2048k 
    -Xmx2g 
     
    # To set the classpath in Linux based operating systems
    -classpath <PATH_TO_SYNCHRONY_STANDALONE_JAR>:<JDBC_DRIVER_PATH> 
     
    # To set the classpath in Windows operating systems
    -classpath <PATH_TO_SYNCHRONY_STANDALONE_JAR>;<JDBC_DRIVER_PATH> 
     
    -Dsynchrony.cluster.impl=hazelcast-btf 
    -Dsynchrony.port=<SYNCHRONY_PORT> 
    -Dcluster.listen.port=<CLUSTER_LISTEN_PORT>
    -Dsynchrony.cluster.base.port=<CLUSTER_BASE_PORT>
     
    # Remove this section if you don't want to discover nodes using TCP/IP
    -Dcluster.join.type=tcpip 
    -Dcluster.join.tcpip.members=<TCPIP_MEMBERS> 
     
    # Remove this section if you don't want to discover nodes using multicast
    -Dcluster.join.type=multicast
    -Dcluster.join.multicast.group=<MULTICAST_GROUP> 
    -Dcluster.join.multicast.port=54327 
    -Dcluster.join.multicast.ttl=32 
     
    -Dsynchrony.context.path=/synchrony 
    -Dsynchrony.cluster.bind=<SERVER_IP> 
    -Dsynchrony.bind=<SERVER_IP> 
    -Dcluster.interfaces=<SERVER_IP>
    -Dsynchrony.service.url=<SYNCHRONY_URL> 
    -Djwt.private.key=<JWT_PRIVATE_KEY> 
    -Djwt.public.key=<JWT_PUBLIC_KEY>
    -Dsynchrony.database.url=<YOUR_DATABASE_URL> 
    -Dsynchrony.database.username=<DB_USERNAME> 
    -Dsynchrony.database.password=<DB_PASSWORD>  
    
    # The following properties must be passed, but their values do not matter
    -Dip.whitelist=127.0.0.1,localhost
    -Dauth.tokens=dummy 
    -Dopenid.return.uri=http://example.com 
    -Ddynamo.events.table.name=5 
    -Ddynamo.snapshots.table.name=5
    -Ddynamo.secrets.table.name=5 
    -Ddynamo.limits.table.name=5 
    -Ddynamo.events.app.read.provisioned.default=5 
    -Ddynamo.events.app.write.provisioned.default=5 
    -Ddynamo.snapshots.app.read.provisioned.default=5 
    -Ddynamo.snapshots.app.write.provisioned.default=5 
    -Ddynamo.max.item.size=5 
    -Ds3.synchrony.bucket.name=5 
    -Ds3.synchrony.bucket.path=5 
    -Ds3.synchrony.eviction.bucket.name=5 
    -Ds3.synchrony.eviction.bucket.path=5 
    -Ds3.app.write.provisioned.default=100
    -Ds3.app.read.provisioned.default=100
    -Dstatsd.host=localhost 
    -Dstatsd.port=8125 
    synchrony.core 
    sql

    (warning) Remember to remove all commented lines completely before you execute this command, and replace all new lines with white space. You may also need to change the name of synchrony-standalone.jar if the file you copied has a different name.
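
    For example, after removing the comments and choosing one classpath style (Linux shown) and one discovery method, the final command is a single line of the form (heavily abbreviated here; pass every remaining property):

    java -Xss2048k -Xmx2g -classpath <PATH_TO_SYNCHRONY_STANDALONE_JAR>:<JDBC_DRIVER_PATH> -Dsynchrony.cluster.impl=hazelcast-btf -Dsynchrony.port=<SYNCHRONY_PORT> ... -Dstatsd.port=8125 synchrony.core sql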

    Here's more information about each of the values you'll need to supply in the command above.

    <SYNCHRONY_PORT>
    The HTTP port that Synchrony runs on. We suggest port 8091, if available.

    <CLUSTER_LISTEN_PORT>
    Synchrony's Hazelcast port. We suggest port 5701, if available. As with Confluence's Hazelcast port (5801), you should ensure that only permitted cluster nodes are allowed to connect to this port, through the use of a firewall and/or network segregation.

    <CLUSTER_BASE_PORT>
    The Aleph binding port. Synchrony uses Aleph to communicate between nodes. We suggest port 25500, if available.

    <TCPIP_MEMBERS>
    If you choose to discover nodes using TCP/IP, provide a comma-separated list of IP addresses for each cluster node.

    <MULTICAST_GROUP>
    If you choose to discover nodes using multicast, specify an IP address for the multicast group.

    <SERVER_IP>
    The public IP address or hostname of this Synchrony node. It can also be a private IP address - it should be the address at which Synchrony is reachable by the other nodes.

    <SYNCHRONY_URL>
    The URL that the browser uses to contact Synchrony. Generally this will be the URL of your load balancer plus the Synchrony context path, for example http://yoursite.com/synchrony.

    <JWT_PRIVATE_KEY>
    <JWT_PUBLIC_KEY>
    These keys are generated by Confluence. Copy each key from the <local-home>/confluence.cfg.xml file on your first Confluence node. The keys must be the same on all Confluence and Synchrony nodes.

    <YOUR_DATABASE_URL>
    The URL of your Confluence database, for example jdbc:postgresql://localhost:5432/confluence. You can find this URL in <local-home>/confluence.cfg.xml.

    <DB_USERNAME>
    <DB_PASSWORD>
    The username and password of your Confluence database user.

    <PATH_TO_SYNCHRONY_STANDALONE_JAR>
    The path to the synchrony-standalone.jar that you copied in step 1, for example <synchrony-directory>/synchrony-standalone.jar.

    <JDBC_DRIVER_PATH>
    The path to your database driver, for example <synchrony-directory>/postgresql-9.2-1002.jdbc.jar.

    <YOUR_LOAD_BALANCER>
    The full URL of the load balancer Synchrony will run behind, in the form http://<lb_host>:<lb_port><lb_context_path>. For example, if your load balancer path is synchrony, the URL will be http://hostname:80/synchrony. Note that it does not end with /v1, unlike the synchrony.service.url system property passed to Confluence. If this URL doesn't match the URL coming from a user's browser, Synchrony will fail.

    Sensitive information (like database credentials) may be provided using environment variables, rather than via the command line. Any dots (".") in variable names (identifiers) will need to be replaced with underscores ("_"). 
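
    For example, applying that rule to the synchrony.database.password property (a sketch; <DB_PASSWORD> stands in for your real password):

    export synchrony_database_password=<DB_PASSWORD>

    You would then omit -Dsynchrony.database.password from the command above.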

    A few other properties can be modified to suit your environment. See Configuring Synchrony for Data Center for more information.

  4. To check that Synchrony is accessible, go to:
    http://<SERVER_IP>:<SYNCHRONY_PORT>/synchrony/heartbeat
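
    For example, using curl from any machine that should be able to reach Synchrony (assuming you chose port 8091):

    curl http://<SERVER_IP>:8091/synchrony/heartbeat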
  5. Repeat this process to start Synchrony on each node of your Synchrony cluster.
    As each node joins you'll see something like this in your console.

    Members [2] {
    	Member [172.22.52.12]:5701
    	Member [172.22.49.34]:5701 
    }
    
  6. Configure your load balancer for Synchrony.
    Your load balancer must support WebSockets (for example NGINX 1.3 or later, Apache httpd 2.4, IIS 8.0 or later) and session affinity. SSL connections must be terminated at your load balancer so that Synchrony can accept XHR requests from the web browser. 

Step 6 Start Confluence on Node 1

  1. Start Confluence on node 1 and pass the following system property to tell Confluence where to find your Synchrony cluster.

    -Dsynchrony.service.url=http://<YOUR_LOAD_BALANCER>:<LOAD_BALANCER_PORT>/synchrony/v1

    You may want to add this system property to your <install-directory>/bin/setenv.sh or setenv.bat so it is automatically passed every time you start Confluence. See Configuring System Properties for more information on how to do this in your environment.
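
    For example, in <install-directory>/bin/setenv.sh on Linux (a sketch; substitute your load balancer's host and port):

    CATALINA_OPTS="-Dsynchrony.service.url=http://yourloadbalancer:80/synchrony/v1 ${CATALINA_OPTS}"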

  2. Head to Administration > General Configuration > Collaborative editing to check that this Confluence node can connect to Synchrony. 

    Note: to test creating content you'll need to access Confluence via your load balancer.  You can't create or edit pages when accessing a node directly.

Step 7 Copy Confluence to remaining nodes

The next step is to replicate your upgraded Confluence directories to other nodes in the cluster.  

  1. Stop Confluence on the first node.
  2. Copy the installation directory and local home directory from the first node to the next node. 
    If the path to the local home directory is different on this node, edit the confluence-init.properties to point to the correct location. 
  3. Start Confluence, and confirm that you can log in and view pages before continuing with the next node.

Repeat this process for each remaining node. 
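
On Linux, a minimal sketch of this copy using rsync over SSH (the hostname node2 and the placeholder paths are illustrative; run from the first node while Confluence is stopped):

    rsync -a <installation-directory>/ node2:<installation-directory>/
    rsync -a <local-home-directory>/ node2:<local-home-directory>/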

Step 8 Start Confluence and check cluster connectivity 

Once all nodes have been upgraded you can start Confluence Data Center on each node, one at a time (starting up multiple nodes simultaneously can lead to serious failures).

The Cluster monitoring console (Administration > General Configuration > Clustering) includes information about the active cluster nodes. When the cluster is running properly, you should be able to see the details of each node. 
