This guide assumes that you already have a production instance of Stash, and that you are aiming to migrate that to a Stash Data Center instance.
We recommend that you:
Regardless of the process you use, please smoke test your Stash Data Center instance every step of the way.
It's worth getting a clear understanding of what you're aiming to achieve, before starting to provision your Stash Data Center.
A Stash Data Center instance consists of a cluster of dedicated machines: a load balancer, one or more Stash cluster nodes, a shared database, and a shared file system.
The URL of the Stash Data Center instance will be the URL of the load balancer, so this is the machine to which you will need to assign the DNS name of your Stash instance.
The remaining machines (Stash cluster nodes, shared database, and shared file system) do not need to be publicly accessible to your users.
The Stash cluster nodes all run the Stash Data Center web application.
You can use the load balancer of your choice. Stash Data Center does not bundle a load balancer.
If you don't have a preference for your load balancer, we provide instructions for HAProxy, a popular open source software load balancer.
You must run Stash Data Center on an external database. You cannot use Stash's internal HSQL database with Stash Data Center.
Stash Data Center requires a high performance shared file system such as a SAN, NAS, RAID server, or high-performance file server optimized for I/O.
Begin by upgrading your production Stash Server instance to the latest public release. This is necessary for several reasons.
Upgrade your Stash Server by following the instructions in the Stash upgrade guide.
Now, take a backup of your production Stash instance's database and home directory. For this you can use, for example, the Stash backup client.
Set up your shared database server. Note that clustered databases are not yet supported.
See Connecting Stash to an external database for more information.
You must ensure your database is configured to allow enough concurrent connections. Stash by default uses up to 80 connections per cluster node, which can exceed the default connection limit of some databases.
For example, in PostgreSQL the default limit is usually 100 connections. If you use PostgreSQL, you may need to edit your postgresql.conf file, increase the value of max_connections, and restart Postgres.
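For instance, for a two-node cluster you might allow 80 connections per node plus headroom (the exact value and the location of postgresql.conf depend on your environment and PostgreSQL version):

max_connections = 200

Then restart PostgreSQL, for example:

sudo service postgresql restart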
We do not support MySQL for Stash Data Center at this time due to inherent deadlocks that can occur in this database engine at high load. If you currently use MySQL, you should migrate your data to another supported database (such as PostgreSQL) before upgrading your Stash Server instance to Stash Data Center. You can migrate databases (on a standalone Stash instance) using the Migrate database feature in Stash's Administration pages, or by using the Stash backup client.
Set up your shared file server.
See Stash Data Center FAQ for performance guidelines when using NFS.
You must ensure your shared file system server is configured with enough NFS server processes.
For example, some versions of RedHat Enterprise Linux and CentOS have a default of 8 server processes. If you use one of these systems, you may need to edit your /etc/sysconfig/nfs file, increase the value of RPCNFSDCOUNT, and restart the nfs service.
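For example, to raise the count to 32 (an illustrative value only; tune it for your workload), set the following in /etc/sysconfig/nfs and restart the service:

RPCNFSDCOUNT=32

sudo service nfs restart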
You must ensure your shared file system server has the NFS lock service enabled. For example, depending on your distribution, this may mean ensuring that the portmap and dbus services are enabled (which are required for the NFS lockd to function), or installing the nfs-utils and nfs-utils-lib packages and ensuring that the rpcbind and nfslock services are running.

Create a Stash user account (recommended name atlstash) on the shared file system server to own everything in the Stash shared home directory. This user account must have the same UID on all cluster nodes and the shared file system server. In a fresh Linux install the UID of a newly created account is typically 1001, but in general there is no guarantee that this UID will be free on every Linux system. Choose a UID for atlstash that's free on all your cluster nodes and the shared file system server, and substitute it for 1001 in the following command:
sudo useradd -c "Atlassian STASH" -u 1001 atlstash
You must ensure that the atlstash user has the same UID on all cluster nodes and the shared file system server.
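You can check the UID that each machine reports for the account with:

id -u atlstash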
Then restore the backup you took in step 2 into the new shared database and shared home directory.
Only the shared directory in the Stash home directory needs to be restored into the shared home directory. The remaining directories (bin, caches, export, lib, log, plugins, and tmp) contain only caches and temporary files, and do not need to be restored.
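For example, a minimal sketch of restoring just the shared directory, assuming your backup is unpacked at /tmp/stash-backup and the shared home directory is mounted at /mnt/stash-shared (both paths are assumptions; substitute your own):

rsync -av /tmp/stash-backup/shared/ /mnt/stash-shared/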
You must ensure that the user running Stash (usually atlstash) is able to read and write everything in the Stash home directory, both the node-local part and the shared part (in NFS). The easiest way to do this is to ensure that atlstash owns all files and directories in the Stash home directory, has the recommended umask of 0027, and has the same UID on all machines. Do not run Stash as root: many NFS servers squash accesses by root to another user.
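For example, assuming the home directory location used elsewhere on this page, you can take ownership recursively with:

sudo chown -R atlstash:atlstash /var/atlassian/application-data/stash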
We highly recommend provisioning cluster nodes using an automated configuration management tool such as Chef, Puppet, or Vagrant, or by spinning up identical virtual machine snapshots.
On each cluster node, mount the shared home directory as ${STASH_HOME}/shared. For example, suppose your Stash home directory is /var/atlassian/application-data/stash, and your shared home directory is available as an NFS export called stash-san:/stash-shared. Add the following line to /etc/fstab on each cluster node:
stash-san:/stash-shared /var/atlassian/application-data/stash/shared nfs nfsvers=3,lookupcache=pos,noatime,intr,rsize=32768,wsize=32768 0 0
Only the ${STASH_HOME}/shared directory should be shared between cluster nodes. All other directories, including ${STASH_HOME}, should be node-local (that is, private to each node). Stash Data Center checks during startup that ${STASH_HOME} is node-local and ${STASH_HOME}/shared is shared, and will fail to form a cluster if this is not true.
Your shared file system must provide sufficient consistency for Stash and Git.
Linux NFS clients require the lookupcache=pos mount option to be specified for proper consistency.
NFSv4 may have issues in Linux kernels from about version 3.2 to 3.8 inclusive. The issues may cause very high load average, processes hanging in "uninterruptible sleep", and in some cases may require rebooting the machine. We recommend using NFSv3 unless you are 100% sure that you know what you're doing and your operating system is free from such issues.
Linux NFS clients should use the nfsvers=3 mount option to force NFSv3.
Then mount it:
mkdir -p /var/atlassian/application-data/stash/shared
sudo mount -a
Ensure all your cluster nodes have synchronized clocks and identical timezone configuration. For example, in RedHat Enterprise Linux or CentOS:
sudo yum install ntp
sudo service ntpd start
sudo tzselect
In Ubuntu Linux:
sudo apt-get install ntp
sudo service ntp start
sudo dpkg-reconfigure tzdata
For other operating systems, consult your system documentation.
The system clocks on your cluster nodes must remain reasonably synchronized (say, to within a few seconds or less). If your system clocks drift excessively or undergo abrupt "jumps" of minutes or more, then cluster nodes may log warnings, become slow, or in extreme cases become unresponsive and require restarting. You should run the NTP service on all your cluster nodes with identical configuration, and never manually tamper with the system clock on a cluster node while Stash Data Center is running.
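You can check a node's synchronization status with, for example:

ntpq -p

An asterisk (*) next to a peer indicates the server the node is currently synchronized to.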
Download the latest Stash Data Center distribution from https://www.atlassian.com/software/stash/download, and install Stash as normal on all the cluster nodes. See Getting started.
Edit the file ${STASH_HOME}/shared/stash-config.properties, and add the following lines:
# Use multicast to discover cluster nodes (recommended).
hazelcast.network.multicast=true

# If your network does not support multicast, you may uncomment the following lines and substitute
# the IP addresses of some or all of your cluster nodes. (Not all of the cluster nodes have to be
# listed here but at least one of them has to be active when a new node joins.)
#hazelcast.network.tcpip=true
#hazelcast.network.tcpip.members=192.168.0.1:5701,192.168.0.2:5701,192.168.0.3:5701

# The following should uniquely identify your cluster on the LAN.
hazelcast.group.name=your-stash-cluster
hazelcast.group.password=your-stash-cluster
Using multicast to discover cluster nodes (hazelcast.network.multicast=true) is recommended, but requires all your cluster nodes to be accessible to each other via a multicast-enabled network. If your network does not support multicast, then you can set hazelcast.network.multicast=false, hazelcast.network.tcpip=true, and hazelcast.network.tcpip.members to a comma-separated list of cluster nodes instead. Only enable one of hazelcast.network.tcpip or hazelcast.network.multicast, not both!
Choose a name for hazelcast.group.name and a password for hazelcast.group.password that uniquely identify the cluster on your LAN. If you have more than one cluster on the same LAN (for example, other Stash Data Center instances or other products based on similar technology such as Confluence Data Center), then you must assign each cluster a distinct name, to prevent them from attempting to join together into a "super cluster".
Then start Stash. See Starting and stopping Stash.
Then go to http://<stash>:7990/admin/license, and install the Stash Data Center license you were issued. Restart Stash for the change to take effect. If you need a Stash Data Center license, please contact us!
You can use the load balancer of your choice, either hardware or software. Stash Data Center does not bundle a load balancer.
Your load balancer must proxy three protocols:
| Protocol | Typical port on the load balancer | Typical port on the Stash cluster nodes | Notes |
| --- | --- | --- | --- |
| HTTP | 80 | 7990 | HTTP mode. Session affinity ("sticky sessions") should be enabled using the 52-character JSESSIONID cookie. |
| HTTPS | 443 | 7990 | HTTP mode. Terminating SSL at the load balancer and running plain HTTP to the Stash cluster nodes is highly recommended. |
| SSH | 7999 | 7999 | TCP mode. |
For best performance, your load balancer should support session affinity ("sticky sessions") using the JSESSIONID cookie. By default, Stash Data Center assumes that your load balancer always directs each user's requests to the same cluster node. If it does not, users may be unexpectedly logged out or lose other information that may be stored in their HTTP session.
Stash Data Center also provides a property, hazelcast.http.sessions, that can be set in ${STASH_HOME}/shared/stash-config.properties to provide finer control over HTTP session management. This property can be set to one of the following values:

local (the default): HTTP sessions are managed per node. When used in a cluster, the load balancer must have session affinity ("sticky sessions") enabled. If a node fails or is shut down, users that were assigned to that node may need to log in again.

sticky: HTTP sessions are distributed across the cluster with a load balancer configured to use session affinity ("sticky sessions"). If a node fails or is shut down, users should not have to log in again. In this configuration, session management is optimized for sticky sessions and will not perform certain cleanup tasks, for better performance.

replicated: HTTP sessions are distributed across the cluster. If a node fails or is shut down, users should not have to log in again. The load balancer does not need to be configured for session affinity ("sticky sessions"), but performance is likely to be better if it is.

Both the sticky and replicated options come with some performance penalty, which can be substantial if session data is used heavily (for example, in custom plugins). For best performance, local (the default) is recommended.
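For example, to distribute sessions across the cluster while keeping sticky sessions at the load balancer, add this line to ${STASH_HOME}/shared/stash-config.properties:

hazelcast.http.sessions=sticky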
When choosing a load balancer, ensure that it supports the HTTP, HTTPS, and TCP protocols.
If your load balancer supports health checks of the cluster nodes, configure it to perform a periodic HTTP GET of http://<stash>:7990/status, where <stash> is the cluster node's name or IP address. This returns one of two HTTP status codes: 200 OK if the node is running and able to service requests, or 500 Internal Server Error if it is not. If a cluster node does not return 200 OK within a reasonable amount of time, the load balancer should not direct any traffic to it.
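You can also query the status resource manually from the command line, substituting a cluster node's name or IP address:

curl -i http://<stash>:7990/status

A healthy node responds with HTTP 200.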
You should then be able to navigate to http://<load-balancer>/, where <load-balancer> is your load balancer's name or IP address. This should take you to your Stash front page.
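As a quick command-line check, the following should print 200 once the load balancer and at least one cluster node are up (the -L flag follows any redirects, such as to a login page):

curl -s -L -o /dev/null -w "%{http_code}\n" http://<load-balancer>/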
If you don't have a particular preference or policy for load balancers, you can use HAProxy, a popular open source software load balancer.
If you choose HAProxy, you must use a minimum version of 1.5.0. Earlier versions of HAProxy do not support HTTPS.
Here is an example haproxy.cfg configuration file (typically found at /etc/haproxy/haproxy.cfg). This assumes that your SSL certificate and private key are stored in /etc/cert.pem. Review the contents of the haproxy.cfg file carefully, and customize it for your environment. See http://www.haproxy.org/ for more information about installing and configuring HAProxy.
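Below is a minimal sketch of such a configuration, consistent with the ports, health checks, and node addresses (192.168.0.1 and 192.168.0.2) used elsewhere on this page; treat every address, timeout, and the statistics port 8090 as assumptions to review for your environment. The stash02 lines are commented out until you add a second node:

global
    maxconn 4096
    daemon

defaults
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms

# HTTP and HTTPS: terminate SSL here and proxy plain HTTP to the nodes
frontend stash_http_frontend
    mode http
    bind *:80
    bind *:443 ssl crt /etc/cert.pem
    default_backend stash_http_backend

backend stash_http_backend
    mode http
    balance roundrobin
    # Session affinity on the 52-character JSESSIONID cookie
    appsession JSESSIONID len 52 timeout 1h
    # Health check against the status resource described above
    option httpchk GET /status
    server stash01 192.168.0.1:7990 check inter 10000 rise 2 fall 5
    #server stash02 192.168.0.2:7990 check inter 10000 rise 2 fall 5

# SSH: plain TCP passthrough
frontend stash_ssh_frontend
    mode tcp
    bind *:7999
    default_backend stash_ssh_backend

backend stash_ssh_backend
    mode tcp
    balance roundrobin
    server stash01 192.168.0.1:7999 check port 7999
    #server stash02 192.168.0.2:7999 check port 7999

# HAProxy statistics page
listen admin
    mode http
    bind *:8090
    stats enable
    stats uri /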
Once you have configured the haproxy.cfg file, start the haproxy service:
sudo service haproxy start
You can also monitor the health of your cluster by navigating to HAProxy's statistics page at http://<load-balancer>:8090/.
Stash needs to be configured to work with HAProxy. For example:
<Connector port="7990" protocol="HTTP/1.1" connectionTimeout="20000" useBodyEncodingForURI="true" redirectPort="443" compression="on" compressableMimeType="text/html,text/xml,text/plain,text/css,application/json,application/javascript,application/x-javascript" secure="true" scheme="https" proxyName="<load-balancer>" proxyPort="443" />
See Securing Stash behind HAProxy using SSL for more details.
Go to a new cluster node, and start Stash. See Starting and stopping Stash.
Once Stash has started, go to http://<load-balancer>/admin/clustering. You should see the Clustering page, which lists the nodes that have joined the cluster.
Verify that the new node you have started up has successfully joined the cluster. If it has not, check your network configuration and the ${STASH_HOME}/log/atlassian-stash.log files on all nodes. If you are unable to find a reason for the node failing to join successfully, please contact Atlassian Support.
If you are using your own hardware or software load balancer, consult your vendor's documentation on how to add the new Stash cluster node to the load balancer.
If you are using HAProxy, just uncomment the lines

server stash02 192.168.0.2:7990 check inter 10000 rise 2 fall 5
server stash02 192.168.0.2:7999 check port 7999

in your haproxy.cfg file and restart haproxy:
sudo service haproxy restart
Verify that the new node is in the cluster and receiving requests by checking the logs on each node to ensure that both are receiving traffic, and also check that updates made on one node are visible on the other.
You have now set up a clustered instance of Stash Data Center! We are very interested in hearing your feedback on this process – please contact us!
For any issues please raise a support ticket and mention that you are following the 'Installing Stash Data Center' page.
Please see Using Stash in the enterprise for information about using Stash in a production environment.