Administering Confluence Data Center on Azure
To access your jumpbox and nodes you'll need:
- the SSH credentials you provided during setup
- the Confluence node credentials you provided during setup
- the public DNS name or IP address of your jumpbox (in the Azure portal, go to Menu > Resource groups > <your resource group> > confluencenat)
- the node IP addresses, listed against the confluencecluster (instance n) row in Connected devices (in the Azure portal, go to Menu > Resource groups > <your resource group> > confluencevnet)
Connecting to your Azure jumpbox over SSH
You can SSH into your Confluence cluster nodes, Synchrony nodes, and shared home directory to perform configuration or maintenance tasks. Keep your SSH private key file in a safe place: it's the key to your jumpbox, and therefore to all the nodes in your instance.
Access the jumpbox via a terminal or command line using:
$ ssh JUMPBOX_USERNAME@DNS_NAME_OR_IP_ADDRESS
You can find the SSH URL in the outputs section of your deployment.
Once you've accessed the jumpbox, you can jump to any of the nodes in the cluster using:
$ ssh NODE_USERNAME@NODE_IP_ADDRESS
You'll then be asked for your node password. After providing it, you'll be connected to the node.
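Rather than hopping in two steps, OpenSSH's -J (ProxyJump) option can route the node connection through the jumpbox in a single command. A minimal sketch, where the usernames and addresses are hypothetical placeholders; substitute the values from your own deployment outputs:

```shell
# Hypothetical values -- replace these with your deployment's outputs.
JUMPBOX_USER="azureuser"
JUMPBOX_HOST="203.0.113.10"
NODE_USER="confluenceadmin"
NODE_IP="10.0.2.5"

# -J (ProxyJump) routes the connection through the jumpbox in one hop.
SSH_CMD="ssh -J ${JUMPBOX_USER}@${JUMPBOX_HOST} ${NODE_USER}@${NODE_IP}"
echo "$SSH_CMD"
```

You'll still be prompted for the node password when the connection reaches the node itself.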
Accessing your configuration files
For your Azure deployment, you may need to make changes to some configuration files, just as you would for a deployment on your own hardware:
- your local home configuration files
- your shared home configuration files
These files are only accessible from the existing nodes. The shared home is mounted (think of it as a network hard disk) on each node under /media/atl/confluence/shared, so from an existing node (when you're logged in through SSH) you can go to /media/atl/confluence/shared to access it.
If you modify these files manually, new nodes won't pick up those modifications. You can either repeat the modifications on each node, or change the templates in the /media/atl/confluence/shared directory from which those files are derived. Each of the following is derived from a corresponding template in that directory:
- the server.xml file
- the setenv.sh file
- the local home confluence.cfg.xml file
- the shared home confluence.cfg.xml file
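If you choose to repeat a modification on each node rather than edit the shared templates, a small loop run from the jumpbox can help. A sketch with a hypothetical node user, hypothetical private IPs (listed under Connected devices in the Azure portal), and a placeholder edit command; each per-node command is printed for review rather than executed:

```shell
# Hypothetical values -- replace with your node user and private IPs.
NODE_USER="confluenceadmin"
NODE_IPS="10.0.2.4 10.0.2.5 10.0.2.6"

# Print the per-node command for review; remove 'echo' to actually run it.
for ip in $NODE_IPS; do
  CMD="ssh ${NODE_USER}@${ip} <your-edit-command>"
  echo "$CMD"
done
```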
These template files contain placeholders for values that are injected by the deployment script; removing or changing the placeholders may break the deployment. In most cases you shouldn't modify these files, as many of the settings are generated automatically from the Azure Resource Manager templates.
Here's some useful advice for upgrading your deployment:
- Before upgrading to a later version of Confluence Data Center, check if your apps are compatible with that version. Update your apps if needed. For more information about managing apps, see Using the Universal Plugin Manager.
- If you need to keep Confluence Data Center running during your upgrade, we recommend using read-only mode for site maintenance. Your users will be able to view pages, but not create or change them.
- We strongly recommend that you perform the upgrade first in a staging environment before upgrading your production instance. Create a staging environment for upgrading Confluence provides helpful tips on doing so.
Upgrading Confluence in Azure
The process of upgrading Confluence is the same as if you were running the cluster on your own hardware: stop Confluence on all nodes, upgrade one node, stop that node, copy the installation directory across to each remaining node in the cluster, then restart each node one at a time.
See Upgrading Confluence Data Center for more details.
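The sequence above can be sketched as a dry-run script that prints each step instead of executing it. The service name, installer step, installation path, and node IPs are all assumptions to adapt to your own cluster:

```shell
# Dry run of the upgrade sequence -- every step is printed, not executed.
# Service name, installer step, path, and IPs are hypothetical; adapt them.
NODE_USER="confluenceadmin"
NODE_IPS="10.0.2.4 10.0.2.5 10.0.2.6"
FIRST_NODE="10.0.2.4"
INSTALL_DIR="/opt/atlassian/confluence"

step() { echo "$@"; }

# 1. Stop Confluence on every node.
for ip in $NODE_IPS; do
  step ssh "${NODE_USER}@${ip}" "sudo systemctl stop confluence"
done

# 2. Upgrade the first node, then stop Confluence on it again.
step "run the Confluence installer on ${FIRST_NODE}"
step ssh "${NODE_USER}@${FIRST_NODE}" "sudo systemctl stop confluence"

# 3. From the upgraded node, copy the installation directory to the others.
for ip in $NODE_IPS; do
  [ "$ip" = "$FIRST_NODE" ] && continue
  step rsync -a "${INSTALL_DIR}/" "${NODE_USER}@${ip}:${INSTALL_DIR}/"
done

# 4. Restart Confluence on each node, one at a time.
for ip in $NODE_IPS; do
  step ssh "${NODE_USER}@${ip}" "sudo systemctl start confluence"
done
```

Note the rsync commands in step 3 would be run from the upgraded node, since rsync can't copy between two remote hosts directly.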
You can't use the confluenceVersion parameter in the deployment template to upgrade an existing Confluence deployment, or to provision new nodes running a different version to the rest of your cluster.
You also can't do a rolling upgrade. You will need to bring all nodes down before upgrading.
Upgrading your operating system
If you need to upgrade the operating system running on your Confluence nodes, SSH into each node, run sudo apt dist-upgrade (Ubuntu), and reboot it. As Confluence runs as a service, it will be restarted automatically on reboot.
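The per-node steps can be driven from the jumpbox with a loop like the following sketch. The user and IPs are hypothetical, and each command is printed for review rather than executed; remove the echo to run it:

```shell
# Hypothetical values -- replace with your node user and private IPs.
NODE_USER="confluenceadmin"
NODE_IPS="10.0.2.4 10.0.2.5 10.0.2.6"

# -t allocates a TTY so sudo can prompt for the node password.
# Printed for review; remove 'echo' to run the upgrade on each node.
for ip in $NODE_IPS; do
  CMD="ssh -t ${NODE_USER}@${ip} 'sudo apt dist-upgrade && sudo reboot'"
  echo "$CMD"
done
```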
You can't simply reimage an instance, as you might do in Jira, due to the way Hazelcast discovers cluster nodes.
Backing up and recovering from failures
We recommend you use the Azure native backup facilities where possible to make sure your data is backed up, and you can easily recover in the case of a failure.
We use Azure-managed database instances with high availability. Azure provides several options for backing up your database, so take some time to work out which will be the best and most cost-effective option for your needs; see the Azure documentation for your chosen database.
Shared home backups
The shared home stores your attachments, profile pictures, and export files. We create a general purpose Azure storage account, configured with locally redundant storage (LRS), which means there are multiple copies of the data at any one time.
LRS provides a basic redundancy strategy for your shared home. As such, it shouldn't be necessary to take regular backups yourself. If you need to take point-in-time backups, use snapshots.
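If your shared home is backed by an Azure Files share in that storage account (check your own deployment to confirm), a point-in-time snapshot can be taken with the Azure CLI. The storage account and share names below are hypothetical placeholders, and the command is printed for review rather than executed:

```shell
# Hypothetical names -- replace with your storage account and file share.
STORAGE_ACCOUNT="confluencestorage"
FILE_SHARE="confluence-shared-home"

# Take a point-in-time snapshot of the file share with the Azure CLI.
# Printed for review; run the command itself to create the snapshot.
SNAPSHOT_CMD="az storage share snapshot --name ${FILE_SHARE} --account-name ${STORAGE_ACCOUNT}"
echo "$SNAPSHOT_CMD"
```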
Application node backups
The application nodes are VMs in an Azure Virtual Machine Scale Set. Each application node has a Confluence installation directory and a local home directory containing things like logs and search indexes.
Like the shared home, application nodes are configured with local redundant storage. This means there are multiple copies of the data at any one time.
If you've manually customised any configuration files in the installation directory (for example velocity templates), you may also want to manually back these up as a reference.
Bastion host backups
As this VM acts as a jumpbox and doesn't store any data, it doesn't need to be backed up. If the VM becomes unresponsive, it can be restarted from the Azure portal.
Application gateway backups
The application gateway is highly available; we deploy two instances by default. As with the bastion host, it doesn't need to be backed up.
See Confluence Data Center disaster recovery to learn how to develop a disaster recovery strategy. For recovering from a region-wide failure, see Azure resiliency technical guidance: recovery from a region-wide service disruption in the Azure documentation.