Alternative Disaster Recovery Guide for JIRA


This guide shows you how to set up an alternative disaster recovery solution for JIRA if you do not have JIRA Data Center. This solution is not supported by Atlassian. The only Atlassian-supported disaster recovery solution for JIRA is the one described in the Disaster Recovery Guide for JIRA, which requires JIRA Data Center.

A disaster recovery strategy is a key part of any business continuity plan. It covers the processes that should be followed in the event of a disaster, to ensure that the business can recover and keep operating. For JIRA, this means ensuring JIRA's availability in the event of your primary site becoming unavailable.

Overview

This guide describes what is generally referred to as a "cold standby" strategy. This means that the standby JIRA instance is not continuously running, and that some administrative steps must be taken to start the standby instance and ensure it is in a suitable state to service the business needs of the organization.

The major components that need to be considered in the disaster recovery plan are:

  • JIRA installation: The standby site should have exactly the same version of JIRA installed as the production site.
  • Database: This is the primary source of truth for JIRA and contains most of the JIRA data (except attachments, avatars, installed plugins, and so on). The database needs to be replicated and continuously kept up to date to satisfy your RPO1.
  • Attachments: All issue attachments are stored in the local file system and need to be replicated to the standby instance.
  • Search index: The search index is not a primary source of truth and can always be recreated from the database. For large installations, however, this can be quite time consuming, and JIRA's functionality would be greatly reduced until the index is fully recovered. JIRA provides tools to reduce this recovery time to a minimum.
  • Plugins: User-installed plugins are stored in the local file system and need to be replicated to the standby instance.
  • Other data: A few other non-critical items, such as user and project avatars, should also be replicated to the standby instance.

Setting up a standby system

Step 1. Install JIRA

Install the same version of JIRA on the standby system, and configure it to connect to the standby database.

You also need to configure the instance to be a disaster recovery installation. This enables the automatic index recovery mechanism to kick in when JIRA starts.

Add the following to jira-config.properties in the JIRA Home directory of the standby instance:

disaster.recovery=true
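A minimal shell sketch of adding and verifying this property (the JIRA home path below is illustrative; substitute your standby instance's home directory):

```shell
# Illustrative JIRA home path for the standby instance
JIRA_HOME=./standby-jira-home
mkdir -p "$JIRA_HOME"
CONF="$JIRA_HOME/jira-config.properties"

# Append the flag only if it is not already present (idempotent,
# so the command is safe to re-run)
grep -qs '^disaster.recovery=true$' "$CONF" || echo 'disaster.recovery=true' >> "$CONF"

# Confirm the property is set
cat "$CONF"
```

Running it a second time makes no further changes, since the `grep` check finds the existing line.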

DO NOT start the standby JIRA system

Starting JIRA would write data to the database, which you do not want to do.

You may like to test the installation by temporarily connecting it to a different database and starting JIRA, then making sure it works as expected. Don't forget to update the database configuration to point to the standby database after your testing.

Step 2. Implement a data replication strategy

Replicating data to your standby location is crucial to a cold standby failover strategy. You do not want to fail over to your standby JIRA instance only to find that it is out of date, or that it takes several hours to reindex.

Manage data replication via external tools, as described below:

Database

Atlassian does not provide or recommend a particular strategy for replicating the database. All of the supported database vendors (Oracle, PostgreSQL, MySQL, and Microsoft SQL Server) provide their own database replication solutions.

You need to implement a database replication strategy that meets your RPO1 and RCO1.
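As one illustration only (not an Atlassian recommendation), PostgreSQL streaming replication can keep a continuously updated copy of the database at the standby site. The host name, role name, and data directory below are assumptions; consult your database vendor's documentation for your version:

```shell
# On the primary, in postgresql.conf (illustrative settings):
#   wal_level = replica
#   max_wal_senders = 5
# plus a "replication" entry for the standby host in pg_hba.conf.

# On the standby, run once to seed the replica and write its
# standby configuration (-R). Host, user, and directory are illustrative:
pg_basebackup -h primary-db.example.com -U replicator \
  -D /var/lib/postgresql/data -X stream -R -P
```

This is a configuration sketch that requires live database servers; it is not runnable as-is.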

Attachments

There are a number of possibilities for managing attachments for disaster recovery:


  • Have JIRA replicate the attachments:
    You can configure JIRA to write a second copy of attachments to a secondary location. This secondary copy is written asynchronously so as to not impact normal JIRA performance. For JIRA secondary storage you can:
    • Use a file system location, and use operating-system-level tools (NFS, CIFS, or some other mechanism) to map that location to the remote standby site,
    • Use a plugin that provides a defined storage location, or
    • Create your own plugin that implements JIRA's SimpleAttachmentStore interface.

    JIRA will keep this location synchronized from the time you enable it. JIRA will not migrate existing attachments from the primary location to the secondary.
    If you enable secondary file system storage, JIRA provides a default location at JIRA_HOME/secondary, where it mirrors the structure of a JIRA home directory so that the secondary storage remains consistent if it ever needs to be used as the primary. Keep this in mind if you edit this path, because plugins and snapshots can also be replicated to this storage.
    To improve your RTO, give the secondary storage the same physical path as the primary. This has the advantage that when JIRA starts in the failover data center, you will not need to configure a new location.

  • Use an attachment store that provides its own DR:
    JIRA provides an entry point for customers to add different types of storage as a primary storage location (Amazon S3, Google Drive, etc.). This can be either the sole attachment storage or a secondary backup storage.

  • Use file system or operating system tools to replicate the attachments:
    If you are already using a corporate SAN or a similar system that provides this functionality, it may be the easiest, most cost-effective, and most reliable way to replicate the attachments.

Search indexes

The steps to put the search index into a state that meets your RTO1 objective are:

  1. Enable index recovery on the live instance:
    This periodically takes a consistent snapshot of the search index. The snapshot frequency affects how long it takes to recover the full index on the standby after a failover. Even with a frequency of 24 hours, the amount to be recovered is at most one day's indexing, which is typically less than 1% of the index and takes only a very short time to recover. For example, if a full re-index takes 5 hours, the recovery would be expected to take only about 5 minutes.
  2. Copy the index snapshots to the standby instance:
      • The snapshots are saved to <yourjirahome>/export/indexsnapshots.
      • The snapshots should be copied to <yourjirahome>/import/indexsnapshots on the standby server.
    JIRA does not provide a mechanism to copy these files. You need to set up a regular job to do a file system copy, and you should retain at least the last 2 snapshots on the standby server.
  3. Ensure that the standby server is a disaster recovery installation — See Installing JIRA above.
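A sketch of such a recurring copy job, including retention of the two most recent snapshots on the standby. The paths and snapshot file names below are illustrative (real snapshot names are generated by JIRA):

```shell
# Illustrative paths for the live and standby JIRA homes
LIVE_SNAPSHOTS=./live-home/export/indexsnapshots
STANDBY_SNAPSHOTS=./standby-home/import/indexsnapshots
mkdir -p "$LIVE_SNAPSHOTS" "$STANDBY_SNAPSHOTS"

# Stand-in snapshot files for the demonstration
for n in 1 2 3; do echo "snapshot $n" > "$LIVE_SNAPSHOTS/IndexSnapshot_$n.tar.sz"; done

# Copy snapshots across, preserving timestamps
cp -p "$LIVE_SNAPSHOTS"/* "$STANDBY_SNAPSHOTS"/

# Keep only the two most recent snapshots on the standby
ls -1t "$STANDBY_SNAPSHOTS" | tail -n +3 | while read -r old; do
  rm -- "$STANDBY_SNAPSHOTS/$old"
done

ls -1 "$STANDBY_SNAPSHOTS" | wc -l
```

In practice this would be scheduled (for example via cron) at an interval matching your snapshot frequency.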
Plugins

Installed plugins are kept in the <yourjirahome>/plugins/installed-plugins directory. This directory on the standby instance should be kept in sync with that on the live instance. You need to set up a regular job to do this at the file system level.
Other data

You should also periodically replicate the content of the <yourjirahome>/data/avatars directory.

If you have non-Atlassian plugins, they may write data to your <yourjirahome> directory. You will need to contact your plugin vendor to determine whether this data should be replicated to the standby server.
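The regular jobs mentioned above for plugins and avatars can be scheduled with cron. A sketch, assuming rsync over SSH to a host named standby and typical JIRA home paths (both are assumptions; substitute your own):

```
# Illustrative crontab entries: sync plugins and avatars every 15 minutes
*/15 * * * *  rsync --archive --delete /var/atlassian/jira-home/plugins/installed-plugins/  standby:/var/atlassian/jira-home/plugins/installed-plugins/
*/15 * * * *  rsync --archive --delete /var/atlassian/jira-home/data/avatars/  standby:/var/atlassian/jira-home/data/avatars/
```

This is a configuration fragment; the interval should be chosen to match your RPO1.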

Disaster recovery testing

You should exercise extreme care when testing any disaster recovery plan. Simple mistakes can corrupt your live instance, for example, if testing updates are inserted into your production database. While testing your disaster recovery plan, you may detrimentally impact your ability to recover from a real disaster.

The key is to keep the main data center as isolated as possible from the disaster recovery testing.

Prerequisites

Before you perform any testing, you need to isolate your production data:

Database
  1. Temporarily pause all replication to the standby database.
  2. Replicate the data from the standby database to another, isolated database that has no communication with the main database.
Attachments, plugins and indexes

You need to ensure that no plugin updates or index backups occur during the test:

  1. Disable index backups.
  2. Instruct sysadmins to not perform any updates in JIRA.

Note that attachments should not cause any problems; health checks on the failover instance will provide enough information about whether the folders have the correct write permissions.

Installation folders
  1. Clone your standby installation, separate from both the live and standby instances.
  2. Change the database connection in JIRA_HOME/dbconfig.xml to avoid any conflict.
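A sketch of the relevant dbconfig.xml fragment on the test clone (the host and database names are illustrative; only the JDBC URL needs to change so the clone points at the isolated copy of the database):

```xml
<!-- Fragment of JIRA_HOME/dbconfig.xml on the cloned test instance -->
<jdbc-datasource>
  <url>jdbc:postgresql://dr-test-db.example.com:5432/jiradb</url>
  <driver-class>org.postgresql.Driver</driver-class>
</jdbc-datasource>
```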

After this, you can resume all replication to the standby instance, including the database.

Performing the disaster recovery testing

Once you have isolated your production data, follow the steps below to test your disaster recovery plan:

  1. Ensure that the new database is ready, with the latest snapshot and no replication.
  2. Ensure that you have a copy of JIRA on a clean server with the proper dbconfig.xml connection.
  3. Ensure that JIRA_HOME is mapped on the test server as it was on the standby instance. It is important to have the latest snapshot in the JIRA_HOME/export folder.
  4. Disable email.
  5. Start JIRA in Disaster Recovery mode, by starting it with the following parameter: disaster.recovery=true.
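A sketch of two common ways to supply this flag (the setenv.sh variable is the standard mechanism for passing JVM arguments to JIRA, but confirm the details against your version's documentation):

```shell
# Option A: in the JIRA home of the test instance (as in Step 1 above),
# add to JIRA_HOME/jira-config.properties:
#     disaster.recovery=true

# Option B (illustrative): as a JVM system property, added to
# <jira-install>/bin/setenv.sh before starting JIRA
JVM_SUPPORT_RECOMMENDED_ARGS="${JVM_SUPPORT_RECOMMENDED_ARGS} -Ddisaster.recovery=true"
```

This is a configuration fragment, not a runnable script.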

Handling a failover

In the event of your primary site becoming unavailable, you will need to fail over to your standby system. This section describes how to do this, including instructions on how to check the data in your standby system.

Step 1. Fail over to the standby instance

The basic steps to failover to the standby instance are:

  1. Ensure your live system is shut down and no longer updating the database.
  2. Ensure that the directory <yourjirahome>/old does not exist on the standby instance.
  3. Perform whatever steps are required to activate your standby database.
  4. Start JIRA in the standby instance.
  5. Wait for JIRA to start and check it is operating as expected.
  6. Update your DNS, HTTP Proxy or other front end devices to route traffic to your standby server.

After JIRA starts, check the log at <yourjirahome>/log/atlassian-jira.log for information about the recovery state.

Step 2. Check the data in your standby instance

After you have failed over to your standby instance, perform these checks before users start accessing the system and changing data:

  • Latest issue update recorded in the database: in the database, run the SQL query:

    SELECT max(updated) FROM jiraissue;

  • Latest issue update recorded in the search index: in JIRA, go to Issues > Search for issues and run the JQL query:

    order by updated desc

  • Total number of issues in the database: in the database, run the SQL query:

    SELECT count(*) FROM jiraissue;

  • Total number of issues in the search index: in JIRA, go to Issues > Search for issues and run a search with an empty query.

Compare the database results with the corresponding search index results to confirm that the index is in sync with the database.

Clustering considerations

If you have a clustered environment, you need to be aware of the following, in addition to the information above:

Standby cluster

If you have a standby cluster, the node ids of the standby nodes must be different from those of the live cluster.

The configuration of the standby cluster does not need to mirror that of the live cluster; it may contain more or fewer nodes, depending on your requirements and budget. Fewer nodes may result in lower throughput, but that may be acceptable depending on your circumstances.

File locations: Where we refer to <yourjirahome> as the location of files that need to be synchronized, in a cluster this is the shared home directory.
Starting the standby cluster: It is important to initially start only one node of the cluster, allow it to recover the search index, and check that it is working correctly before starting additional nodes.

Returning to the primary instance

In most cases, you will want to return to using your primary instance, after you have resolved the problems that caused the disaster. This is easiest to achieve if you can schedule a reasonably-sized outage window.

You need to:

  • Synchronize your primary database with the state of the secondary.
  • Synchronize the primary attachment directory with the state of the secondary.
  • Recover the index state on the primary server.

Preparation

Attachments and other files
  1. Use rsync or a similar utility to synchronize the majority of attachments to the primary server before starting the switchover process.
  2. Similarly, you should synchronize the installed plugins and logos before you start.
Search index: Enable index snapshots on the standby (running) node so that you have a recent index snapshot. Copy this snapshot to a location that is accessible from the live node.

Perform the cut over

  1. Shutdown JIRA on the standby node.
  2. Ensure the database is synchronized correctly and configured as required.
  3. Start JIRA.
  4. Log in to JIRA and restore the index from the index snapshot. You will need to know the name and location of the snapshot file.
  5. Check that JIRA is operating as expected.
  6. Update your DNS, HTTP Proxy or other front end devices to route traffic to your primary server.

Other resources

Atlassian Experts

JIRA Data Center is the only Atlassian-supported disaster recovery solution for JIRA. However, if you cannot get JIRA Data Center, many of our Experts have been implementing disaster recovery solutions for JIRA for years.

To get help implementing a disaster recovery solution for your environment, contact our Experts team.

Atlassian Answers

Our community and staff are active on Atlassian Answers. Feel free to contribute your best practices, questions, and comments.

Troubleshooting

If you encounter problems after failing over to your standby instance, the following FAQs may help:

What do I do if my database is not synchronized correctly?

If the database does not have the data available that it should, then you will need to restore the database from a backup.

Once you have restored the database, the search index will no longer be in sync with it. You can either do a full re-index (background or foreground), or recover from the latest index snapshot if you have one. The index snapshot can be older or more recent than your database backup; it will synchronize itself as part of the recovery process.

What do I do if my search index is corrupt?

If the search index is corrupt, you can either do a full re-index (background or foreground), or recover from an earlier index snapshot if you have one.

What do I do if attachments are missing?

You may be able to recover them from backups, if you have them, or from the primary site, if you have access to the hard drives. Tools such as rsync may be useful in such circumstances. Missing attachments will not stop JIRA from performing normally: the missing attachments will simply not be available, and users may be able to upload them again.

Definitions

1 - Definitions:

  • RPO (Recovery Point Objective): How up-to-date you require your JIRA instance to be after a failure.
  • RTO (Recovery Time Objective): How quickly you require your standby system to be available after a failure.
  • RCO (Recovery Cost Objective): How much you are willing to spend on your disaster recovery solution.
Last modified on Jul 30, 2017
