Upgrading Confluence Data Center

This page contains instructions for upgrading an existing Confluence cluster.

If you are not yet running a clustered instance of Confluence, see Moving to Confluence Data Center.

In this guide we'll use the following terminology:

  • installation directory - this is the directory where you installed Confluence on each node.
  • local home directory - this is the home or data directory on each node (in non-clustered Confluence this is simply known as the home directory).
  • shared home directory - this is a directory that is accessible to all nodes in the cluster via the same path. If you're upgrading from Confluence 5.4 or earlier you'll create this directory as part of the upgrade.  
  • Synchrony directory - this is the directory where you downloaded Synchrony (this can be on a Confluence node, or on its own node).


Currently using Confluence Server? Learn more about the benefits of Confluence Data Center.

1. Back up

We strongly recommend that you back up your Confluence home and installation directories and your database before proceeding.

More information on the specific files and directories to back up can be found in Upgrading Confluence.
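
For example, on a Linux node with a PostgreSQL database, a backup could look something like the sketch below. The paths, database name, and user are placeholders only - adjust them for your environment and use the backup tool appropriate to your database.

    # Back up the local home and installation directories on this node (placeholder paths)
    tar -czf confluence-home-backup.tar.gz /var/atlassian/application-data/confluence
    tar -czf confluence-install-backup.tar.gz /opt/atlassian/confluence

    # Back up the Confluence database (PostgreSQL shown; database name and user are placeholders)
    pg_dump -U confluenceuser -Fc -f confluence-db-backup.dump confluencedb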

2. Stop the cluster

You must stop all the nodes in the cluster before upgrading. 

We recommend configuring your load balancer to redirect traffic away from Confluence until the upgrade is complete on all nodes.

3. Upgrade the first node

To upgrade the first node:

  1. Extract (unzip) the files to a directory (this will be your new installation directory, and must be different to your existing installation directory)
  2. Edit the <Installation-Directory>\confluence\WEB-INF\classes\confluence-init.properties file so that the home directory line points to the existing local home directory on that node (a placeholder example follows this list).
  3. Copy the JDBC driver jar file from your existing Confluence installation directory to confluence/WEB-INF/lib in your new installation directory. 
    The JDBC driver will be located in either the <Installation-Directory>/common/lib or <Installation-Directory>/confluence/WEB-INF/lib directory. 
  4. Copy any other immediately required customizations from the old version to the new one (for example, if you are not running Confluence on the default ports or if you manage users externally, you'll need to update or copy the relevant files - find out more in Upgrading Confluence Manually).
  5. Start Confluence, and confirm that you can log in and view pages before continuing to the next step. Don't try to edit pages at this point. 
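
For reference, the home directory is set by the confluence.home property in confluence-init.properties. The path below is only a placeholder for the existing local home directory on that node:

    confluence.home=/var/atlassian/application-data/confluence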

You should now reapply any additional customizations from the old version to the new version, before upgrading the remaining nodes.

4. Set up Synchrony

(warning) This step is only required the first time you upgrade from Confluence 5.x to Confluence 6.x.


Collaborative editing requires Synchrony, which runs as a separate process. You can deploy Synchrony on the same nodes as Confluence, or in its own cluster with as many nodes as you need. 

In this example, we assume you'll run Synchrony in its own cluster. When configuring your cluster nodes you can either supply the IP address of each Synchrony cluster node, or a multicast address.

  1. Create a Synchrony directory on your first node and copy synchrony-standalone.jar from your Confluence <home-directory> to this directory. 
  2. Copy your database driver from your Confluence <install-directory>/confluence/WEB-INF/lib to an appropriate location on your Synchrony node.
  3. Change to your Synchrony directory and start Synchrony using the following command.
    You need to pass all of the system properties listed, replacing the values where indicated.

    In a terminal / command prompt, execute the following command, replacing <values> with appropriate values for your environment. See the table below for more information on each of the values you will need to replace.

    java 
    -Xss2048k 
    -Xmx2g
     
    # To set the classpath in Linux  
    -classpath <PATH_TO_SYNCHRONY_STANDALONE_JAR>:<JDBC_DRIVER_PATH> 
     
    # To set the classpath in Windows 
    -classpath <PATH_TO_SYNCHRONY_STANDALONE_JAR>;<JDBC_DRIVER_PATH> 
     
    # Remove this section if you don't want to discover nodes using TCP/IP
    -Dcluster.join.type=tcpip 
    -Dcluster.join.tcpip.members=<TCPIP_MEMBERS> 
     
    # Remove this section if you don't want to discover nodes using multicast
    -Dcluster.join.type=multicast
     
    # Remove this section if you don't need to discover nodes in AWS
    -Dsynchrony.cluster.impl=hazelcast-micros
    -Dcluster.join.type=aws
     
    -Dsynchrony.bind=<SERVER_IP> 
    -Dsynchrony.service.url=<SYNCHRONY_URL> 
    -Dsynchrony.database.url=<YOUR_DATABASE_URL> 
    -Dsynchrony.database.username=<DB_USERNAME> 
    -Dsynchrony.database.password=<DB_PASSWORD>  
     
    synchrony.core 
    sql

    (warning) Remember to remove all commented lines completely before you execute this command, and replace all new lines with white space. You may also need to change the name of the synchrony-standalone.jar if the file you copied is named differently to our example.

    Here's more information about each of the values you'll need to supply in the command above.

    <PATH_TO_SYNCHRONY_STANDALONE_JAR>
    Required. The path to the synchrony-standalone.jar that you copied in step 1, for example <synchrony-directory>/synchrony-standalone.jar.

    <JDBC_DRIVER_PATH>
    Required. The path to your database driver, for example <synchrony-directory>/postgresql-9.2-1002.jdbc.jar.

    <TCPIP_MEMBERS>
    Required if you use TCP/IP. If you choose to discover nodes using TCP/IP, provide a comma separated list of IP addresses for each cluster node.

    <SERVER_IP>
    Required. The public IP address or hostname of this Synchrony node. It can also be a private IP address - use the address at which Synchrony is reachable by the other nodes.

    <SYNCHRONY_URL>
    Required. The URL the browser uses to contact Synchrony. Generally this will be the full URL of the load balancer Synchrony will run behind plus the Synchrony context path, for example http://yoursite.com:8091/synchrony. Note that it does not end with /v1, unlike the synchrony.service.url system property passed to Confluence. If this URL doesn't match the URL coming from a user's browser, Synchrony will fail.

    <YOUR_DATABASE_URL>
    Required. The URL for your Confluence database, for example jdbc:postgresql://localhost:5432/confluence. You can find this URL in <local-home>/confluence.cfg.xml.

    <DB_USERNAME> and <DB_PASSWORD>
    Required. The username and password for your Confluence database user. Sensitive information (like these database credentials) may be provided using environment variables rather than via the command line; any dots (".") in the property names will need to be replaced with underscores ("_") in the environment variable names.

    When you start Synchrony, default values are also passed for a number of additional properties. You can choose to override these values by specifying the optional properties when you start Synchrony. See Configuring Synchrony for Data Center for more information.

    (warning) You can use this information to create your own script to run Synchrony, or follow the steps in this guided example to create a service script - Run Synchrony-standalone as a service on Linux.

     

  4. To check that Synchrony is accessible, you can go to:
    http://<SERVER_IP>:<SYNCHRONY_PORT>/synchrony/heartbeat
    The default Synchrony port is 8091. A command-line version of this check is sketched after this list.
  5. Repeat this process to start Synchrony on each node of your Synchrony cluster.
    As each node joins you'll see something like this in your console.

    Members [2] {
    	Member [172.22.52.12]:5701
    	Member [172.22.49.34]:5701 
    }
    
  6. Configure your load balancer for Synchrony.
    Your load balancer must support WebSockets (for example NGINX 1.3 or later, Apache httpd 2.4, IIS 8.0 or later) and session affinity. SSL connections must be terminated at your load balancer so that Synchrony can accept XHR requests from the web browser. An example NGINX sketch follows this list.
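
As a command-line alternative to the heartbeat check in step 4, you could request the heartbeat URL with curl. The address below is a placeholder, and 8091 is Synchrony's default port:

    curl -i http://42.42.42.42:8091/synchrony/heartbeat

A healthy node should answer with an HTTP 200 response.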
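
If you use NGINX as the Synchrony load balancer, the relevant part of the configuration might look something like the sketch below. The upstream name, node addresses, and listen port are assumptions to adapt to your environment; other load balancers have equivalent WebSocket and session affinity settings.

    upstream synchrony {
        ip_hash;                    # session affinity
        server 192.168.1.20:8091;   # placeholder Synchrony node addresses
        server 192.168.1.21:8091;
    }

    server {
        listen 80;

        location /synchrony {
            proxy_pass http://synchrony;
            proxy_http_version 1.1;
            # Upgrade headers are required so WebSocket connections are not dropped
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }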

5. Start Confluence on Node 1

In this example, we assume you use the same load balancer for Synchrony and Confluence, as shown in Configuring Synchrony for Data Center.

  1. Start Confluence on node 1 and pass the following system property to tell Confluence where to find your Synchrony cluster.

    -Dsynchrony.service.url=http://<synchrony load balancer url>/synchrony/v1

    For example http://yoursite.example.com/synchrony/v1. You must include /v1 on the end of the URL.

    If Synchrony is set up as one node without a load balancer, use the following instead:

    -Dsynchrony.service.url=http://<synchrony ip or hostname>:<synchrony port>/synchrony/v1

    For example http://42.42.42.42:8091/synchrony/v1 or http://synchrony.example.com:8091/synchrony/v1

    You may want to add this system property to your <install-directory>/bin/setenv.sh or setenv.bat so it is automatically passed every time you start Confluence (a sketch for setenv.sh follows this list). See Configuring System Properties for more information on how to do this in your environment.

  2. Head to General Configuration > Collaborative editing to check that this Confluence node can connect to Synchrony. 

    Note: to test creating content you'll need to access Confluence via your load balancer.  You can't create or edit pages when accessing a node directly.
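
For example, on Linux you might add something like the following to <install-directory>/bin/setenv.sh (a sketch only - the URL shown is the placeholder load balancer address used above; in setenv.bat use the equivalent set CATALINA_OPTS syntax):

    CATALINA_OPTS="-Dsynchrony.service.url=http://yoursite.example.com/synchrony/v1 ${CATALINA_OPTS}"
    export CATALINA_OPTS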

6. Copy Confluence to remaining nodes

The next step is to replicate your upgraded Confluence directories to other nodes in the cluster.  

  1. Stop Confluence on the first node.
  2. Copy the installation directory and local home directory from the first node to the next node (one way to do this is sketched after these steps). 
    If the path to the local home directory is different on this node, edit the confluence-init.properties to point to the correct location. 
  3. Start Confluence, and confirm that you can log in and view pages on this node.
  4. Stop Confluence on this node before continuing with the next node. 

Repeat this process for each remaining node. 
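
One way to copy the directories between nodes is rsync over SSH, as in the sketch below. The paths and the node2 hostname are placeholders for your own environment; any other copy method that preserves file permissions will also work.

    # Copy the upgraded installation directory and local home directory to the next node (placeholder paths)
    rsync -az /opt/atlassian/confluence/ node2:/opt/atlassian/confluence/
    rsync -az /var/atlassian/application-data/confluence/ node2:/var/atlassian/application-data/confluence/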

7. Start Confluence and check cluster connectivity 

Once all nodes have been upgraded you can start Confluence Data Center on each node, one at a time (starting up multiple nodes simultaneously can lead to serious failures).

The Cluster monitoring console (General Configuration > Clustering) includes information about the active cluster nodes. When the cluster is running properly, you should be able to see the details of each node. 
