Stash is now known as Bitbucket Server.
See the Bitbucket Server version of this page, or visit the Bitbucket Server documentation home page.

This page describes the Atlassian Stash Amazon Machine Image (AMI), what's inside it, how to launch it, and how to perform administration tasks on your Stash instance in the Amazon Web Services (AWS) environment.

The Stash AMI

The Atlassian Stash AMI provides a typical deployment of Stash in AWS. It bundles all the components used in a typical Stash deployment (reverse proxy, external database, backup tools, data volume, and temporary storage), pre-configured and ready to launch. 

You can use the Atlassian Stash AMI as a "turnkey" deployment of a Stash instance in AWS, or use it as the starting point for customizing your own, more complex Stash deployments.


Components of the Stash AMI

An instance launched from the Atlassian Stash AMI contains the following components:

  • Stash (either the latest version or a version of your choice),
  • an external PostgreSQL database,
  • nginx as a reverse proxy,
  • the Stash DIY Backup utilities pre-configured for native AWS snapshots,
  • an EBS Volume and Instance Store to hold the data.

Atlassian Stash AMI high level diagram  

Operating system
Amazon Linux 64-bit, 2014.09.1

Stash
Stash (latest public version or a version of your choice) is downloaded and installed on launch.

Administrative tools
atlassian-stash-diy-backup pre-installed and configured for AWS native backup, accessible over SSH.

Reverse proxy
nginx, configured as follows:

  • listens on port 80 and (optionally) 443,
  • (optionally) terminates SSL (HTTPS) and passes through plain HTTP to Stash,
  • displays a static HTML page when the Stash service is not running.

Database
PostgreSQL 9.3

Block devices
  1. An EBS volume (/dev/xvdf, mounted as /media/atl) that stores:
    • the Stash shared home directory, containing all of Stash's repository, attachment, and other data,
    • PostgreSQL's data directory.
  2. An EC2 Instance Store (/dev/xvdb, mounted on /media/ephemeral0) that stores Stash's temporary and cache files.

Launching your Stash instance

The Atlassian Stash AMI can be launched in either of two ways:

  • using a CloudFormation template, which automates creation of the associated Security Group and IAM Role. See Quick Start with Stash and AWS.
  • manually, using the AWS Console, which gives finer control over the optional components to enable in the instance and over AWS-specific network, security, and block device settings. See Launching Stash in AWS manually.

On first boot, the Atlassian Stash AMI reads the file /etc/atl (if it exists), which can override the variables that enable each of the installed components. For example, to enable a self-signed SSL certificate, you can supply user data to the instance at launch time like this:

echo "ATL_SSL_SELF_CERT_ENABLED=true" >>/etc/atl

The following variables can be configured: 

ATL_NGINX_ENABLED (default: true)
Set to false to disable the nginx reverse proxy, and leave Stash's server.xml configured to listen on port 7990 with no proxy.

ATL_POSTGRES_ENABLED (default: true)
Set to false to disable the PostgreSQL service, and leave Stash configured with its internal HSQL database.

ATL_SSL_SELF_CERT_ENABLED (default: false)
Set to true to generate a self-signed SSL certificate at launch time, and to configure Stash's server.xml and nginx's nginx.conf for HTTPS. Requires ATL_NGINX_ENABLED to also be true.
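For example, a user data script might disable the bundled PostgreSQL (say, to use an external database instead) and enable the self-signed certificate. The sketch below writes to /tmp/atl-demo purely so it can be run anywhere; on the instance, user data appends these lines to /etc/atl:

```shell
# Sketch of a first-boot override file. On the instance, user data
# appends these lines to /etc/atl; /tmp/atl-demo is used here only
# so the snippet can run outside the instance.
ATL_FILE=/tmp/atl-demo
: > "$ATL_FILE"
echo "ATL_POSTGRES_ENABLED=false"     >> "$ATL_FILE"  # e.g. when using an external database
echo "ATL_SSL_SELF_CERT_ENABLED=true" >> "$ATL_FILE"  # requires ATL_NGINX_ENABLED=true (the default)
cat "$ATL_FILE"
```
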

See Proxying and securing Stash for more information about Stash's server.xml configuration file. 

Connecting to your Stash instance using SSH

When connecting to your instance over SSH, use ec2-user as the user name, for example:

ssh -i keyfile.pem ec2-user@<public DNS name of your instance>

The ec2-user has sudo access. The Atlassian Stash AMI does not allow SSH access by root.

Installing an SSL certificate in your Stash instance

If launched with a self-signed SSL certificate (that is, you selected SSL Certificate > Generate a self-signed certificate in Quick Start with Stash and AWS, or you set ATL_SSL_SELF_CERT_ENABLED=true in Launching Stash in AWS manually), Stash will be configured to force HTTPS and to redirect all plain HTTP requests to the equivalent https:// URL.

It is highly recommended that you replace this self-signed SSL certificate with a proper certificate for your domain, obtained from a Certification Authority (CA), at the earliest opportunity. See Securing Stash in AWS.

To replace the self-signed SSL certificate with a true SSL certificate

  1. Place your certificate file at (for example) /etc/nginx/ssl/my-ssl.crt
  2. Place your password-less certificate key file at /etc/nginx/ssl/my-ssl.key
  3. Edit /etc/nginx/nginx.conf as follows:
    1. Replace references to /etc/nginx/ssl/self-ssl.crt with /etc/nginx/ssl/my-ssl.crt
    2. Replace references to /etc/nginx/ssl/self-ssl.key with /etc/nginx/ssl/my-ssl.key
  4. Append the contents of /etc/nginx/ssl/my-ssl.crt to the default system PKI bundle (/etc/pki/tls/certs/ca-bundle.crt) to ensure scripts on the instance (such as DIY backup) can curl successfully. 
  5. Restart nginx.
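On the real instance, the substitutions in step 3 are made in /etc/nginx/nginx.conf (with sudo). The sketch below rehearses steps 3a and 3b with sed on a scratch copy, so you can check the result before editing the real file; the my-ssl.* names match the example paths in the steps above:

```shell
# Rehearse the step-3 substitution on a scratch copy of nginx.conf
# before editing the real /etc/nginx/nginx.conf.
mkdir -p /tmp/nginx-ssl-demo
cat > /tmp/nginx-ssl-demo/nginx.conf <<'EOF'
ssl_certificate     /etc/nginx/ssl/self-ssl.crt;
ssl_certificate_key /etc/nginx/ssl/self-ssl.key;
EOF

# Steps 3a/3b: swap the self-signed certificate and key for your own
sed -i \
  -e 's|/etc/nginx/ssl/self-ssl.crt|/etc/nginx/ssl/my-ssl.crt|g' \
  -e 's|/etc/nginx/ssl/self-ssl.key|/etc/nginx/ssl/my-ssl.key|g' \
  /tmp/nginx-ssl-demo/nginx.conf

grep 'ssl_certificate' /tmp/nginx-ssl-demo/nginx.conf
```
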

Backing up your Stash instance

The Atlassian Stash AMI includes a complete set of Stash DIY Backup scripts built specifically for AWS. For instructions on how to back up and restore your instance, refer to Using Stash DIY Backup in AWS.

Upgrading your Stash instance

To upgrade to a later version of Stash in AWS, first connect to your instance using SSH, then follow the steps in the Stash upgrade guide.

Stopping and starting your EC2 instance

An EC2 instance launched from the Atlassian Stash AMI can be stopped and started just as any machine can be powered off and on again.

When stopping your EC2 instance, it is important to first:

  1. Stop the atlstash and postgresql93 services.
  2. Unmount the /media/atl filesystem.

If your EC2 instance becomes unavailable after stopping and restarting

When starting your EC2 instance back up again, if you rely on Amazon's automatically assigned public IP address (rather than a fixed private IP address or Elastic IP address) to access your instance, your IP address may have changed. When this happens, your instance can become inaccessible and display a "The host name for your Atlassian instance has changed" page. To fix this you need to update the hostname for your Stash instance.

To update the hostname for your Stash instance

  1. Connect to your instance over SSH and run
    sudo /opt/atlassian/bin/
  2. Wait for Stash to restart.
  3. If Stash's base URL is set to the public DNS name or IP address, update it in the administration screen to reflect the change.

Migrating your existing Stash instance into AWS

Migrating an existing Stash instance to AWS involves moving consistent backups of your ${STASH_HOME} and your database to the AWS instance.

To migrate your existing Stash instance into AWS

  1. Check for any known migration issues in the Stash Knowledge Base.
  2. Alert users to the forthcoming Stash service outage.
  3. Create a user in the Stash Internal User Directory with SYSADMIN permissions to the instance so you don't get locked out if the new server is unable to connect to your User Directory.
  4. Take a backup of your instance with either the Stash Backup Client or the Stash DIY Backup.
  5. Launch Stash in AWS using the Quick Start instructions, which uses a CloudFormation template.
  6. Connect to your AWS EC2 instance with SSH and upload the backup file.
  7. Restore the backup with the same tool used to generate it.
  8. If necessary, update the JDBC configuration in the file located in ${STASH_HOME}/shared/.
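Step 8 usually amounts to pointing the JDBC URL at the database on the new instance. The property key and filename below (jdbc.url in a scratch stash-demo.properties) are illustrative only; edit the actual file under ${STASH_HOME}/shared/ on your instance:

```shell
# Illustrative only: rewrite a JDBC URL's host with sed on a scratch file.
PROPS=/tmp/stash-demo.properties
echo 'jdbc.url=jdbc:postgresql://old-host:5432/stash' > "$PROPS"

# The AMI runs PostgreSQL on the instance itself, so point the URL at localhost.
sed -i 's|jdbc:postgresql://[^:/]*|jdbc:postgresql://localhost|' "$PROPS"
cat "$PROPS"
```
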

Resizing the data volume in your Stash instance

By default, the application data volume in an instance launched from the Atlassian Stash AMI is a standard Linux ext4 filesystem, and can be resized using the standard Linux command line tools.

To resize the data volume in your Stash instance

  1. Stop the atlstash and postgresql93 services.
  2. Unmount the /media/atl filesystem.
  3. Create a snapshot of the volume to resize.
  4. Create a new volume from the snapshot with the desired size, in the same availability zone as your EC2 instance.
  5. Detach the old volume and attach the newly resized volume as /dev/sdf.
  6. Resize /dev/sdf using resize2fs, verify that its size has changed, and remount it on /media/atl
  7. Start the postgresql93 and atlstash services.

For more information, see Expanding the Storage Space of an EBS Volume on Linux, Expanding a Linux Partition, and the Linux manual pages for resize2fs and related commands. 
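The filesystem-level part of this procedure (step 6) can be rehearsed safely on a throwaway file-backed ext4 image, with no root access or spare EBS volume needed; on the instance the same resize2fs command targets /dev/sdf instead of the image file:

```shell
# Practice the resize2fs step on a throwaway file-backed ext4 image.
IMG=/tmp/resize-demo.img
truncate -s 64M "$IMG"     # stand-in for the old, smaller volume
mkfs.ext4 -q -F "$IMG"     # format it as ext4

truncate -s 128M "$IMG"    # stand-in for the larger volume created from the snapshot
e2fsck -f -p "$IMG"        # resize2fs requires a recently checked filesystem
resize2fs "$IMG"           # grow the filesystem to fill the new size

dumpe2fs -h "$IMG" 2>/dev/null | grep 'Block count'
```
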

Moving your Stash data volume between instances

Occasionally, you may need to move your Stash data volume to another instance, for example, when setting up staging or production instances, or when moving an instance to a different availability zone. 

There are two ways to move your Stash data volume to another instance:

  1. Take a backup of your data volume with Stash DIY Backup, and restore it on your new instance. See Using Stash DIY Backup in AWS for this option. 
  2. Launch a new instance from the Atlassian Stash AMI with a snapshot of your existing data volume.

    A Stash data volume may only be moved to a Stash instance of the same or higher version than the original.

To launch a new instance from the Stash AMI using a snapshot of your existing Stash data volume

  1. Stop the atlstash and postgresql93 services on your existing Stash instance.
  2. Unmount the /media/atl filesystem.
  3. Create a snapshot of the Stash data volume (the one attached to the instance as /dev/sdf).
  4. Once the snapshot generation has completed, launch a new instance from the Atlassian Stash AMI as described in Launching Stash in AWS manually. When adding storage, change the EBS volume device to /dev/sdf and enter the id of the snapshot you created.
  5. If the host name (private or public) that users use to reach your Stash instance has changed as a result of moving availability zones (or of stopping an instance and starting a new one), you will need to SSH in and run
    sudo /opt/atlassian/bin/ <newhostname>
    where <newhostname> is the new host name. 
  6. Once Stash has restarted, your new instance should be fully available. 
  7. If the host name has changed, you should also update the JDBC URL configuration in the file typically located in /var/atlassian/application-data/stash/shared/, as well as Stash's base URL in the administration screen, to reflect this.