Atlassian Performance Testing Framework
The Performance Testing Framework, previously known as the Elastic Experiment Executor (E3), is a framework to set up, execute, and analyze the performance of Atlassian Server and Data Center product instances under load.
Experiments can be useful to compare the product performance of different versions, software configurations, and hardware infrastructure setups under workloads of different sizes and shapes.
While the experiment runs, both system-level and application-level metrics are monitored on all machines involved using the collectd service. This information, together with data on response times and throughput gathered from client- and server-side log files, can be summarized in a number of graphical charts and visualizations.
Currently, the Performance Testing Framework can run experiments on:
- Bitbucket Server
- Bitbucket Data Center
- Confluence Data Center
All compute infrastructure can be provisioned automatically in the Amazon Web Services (AWS) cloud, or run in your own external infrastructure.
Note: Even if your infrastructure is hosted on AWS, it is still considered external infrastructure if it was not provisioned by the Performance Testing Framework. In addition to the setup and configuration overview described below, refer to the Bring your own infrastructure section in each product-specific guide for more specific instructions on how to tailor the framework for your purposes.
Accessing the framework
Everything you need to run experiments in the Performance Testing Framework can be found in the following repository:
The framework requires that experiments be run from a Linux or macOS machine with a number of open-source software packages installed. The easiest way to install these prerequisites is with your operating system's package manager. For example, on the machine which will be used to run the Performance Testing Framework experiments, you might run one of the following:
macOS with Homebrew
$ brew install gnuplot --with-cairo
$ brew install imagemagick python rrdtool
Linux (Ubuntu, Debian, ...)
$ apt-get install gnuplot imagemagick python rrdtool librrd-dev
Linux (Red Hat, CentOS, ...)
$ yum install gnuplot imagemagick python rrdtool
For other systems, refer to your system's documentation on how to install these prerequisites.
The Performance Testing Framework provides the ability to easily provision machines in the AWS Cloud.
You can specify the credentials for your AWS account in any of the places that boto3 looks. See Configuring credentials in the Boto3 documentation.
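For example, one of the standard places boto3 looks is the shared credentials file at ~/.aws/credentials. A minimal sketch (the key values below are placeholders for your own credentials):

```
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```

Boto3 also reads the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, which can be convenient on shared or throwaway machines.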
If your organization uses IAM roles to authenticate with AWS, the framework also includes the ability to acquire AWS credentials automatically. For an example implementation, see AtlassianAwsSecurity.py in the Performance Testing Framework repository.
Provisioning AWS infrastructure can incur significant service charges from Amazon. You are responsible for all Amazon service charges arising from your use of the Performance Testing Framework. You are responsible for securing any proprietary data on your test machines.
If you are running an experiment for Confluence Data Center, you will need a developer license. If you already have a production license, see How to get a Confluence Developer license. Once you have it, look for the confluence.license.key property in your experiment file and replace <confluence_license_key> with your license key. Note that your experiments will be limited to the number of users your developer license supports, which is determined by the production license.
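As a sketch of that substitution step (the surrounding experiment structure and license value here are illustrative placeholders, not the framework's actual file contents), replacing the placeholder is a simple string substitution:

```python
# Illustrative fragment of an experiment definition containing the
# <confluence_license_key> placeholder; the real file lives under
# e3-home/experiments and has more structure than shown here.
experiment = '{"confluence.license.key": "<confluence_license_key>"}'

license_key = "AAAB...your-developer-license..."  # placeholder value

# Substitute the placeholder with the real license key.
experiment = experiment.replace("<confluence_license_key>", license_key)
```

The same replacement can of course be done by hand in a text editor.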
Most of the Performance Testing Framework is written in Python and requires a number of Python libraries. These requirements are described in a standard requirements.txt file, and can be installed using the standard Python virtualenv tools as follows.
From the elastic-experiment-executor repository home directory:
$ sudo pip install virtualenv
$ virtualenv .ve
$ source .ve/bin/activate
$ cd e3; pip install -r requirements.txt
Once all requirements are installed, running the Orchestrate.py script with any one of the example experiments (found in e3-home/experiments) will provision machines in AWS with cluster and client nodes, execute a specified workload, gather data, and analyze the results. In the example here, we execute the cluster-scaling-ssh experiment, which is designed to stress different instances with many parallel Git hosting operations over SSH:
$ ./Orchestrate.py -e cluster-scaling-ssh
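Conceptually, an orchestrated run walks through a provision, run, gather, and analyze lifecycle. The sketch below illustrates that control flow only; the function names and data shapes are illustrative stand-ins, not the framework's actual API:

```python
# Illustrative sketch of the orchestration lifecycle; these functions are
# stand-ins, not the framework's actual API.

def provision(experiment):
    # Spin up AWS cluster and client nodes described by the experiment.
    return {"experiment": experiment, "nodes": ["server-1", "client-1"]}

def run(env):
    # Execute the specified workload against the provisioned nodes.
    return {"env": env, "logs": ["client.log", "server.log"]}

def gather(results):
    # Collect collectd metrics and log files from all machines involved.
    return {"metrics": ["cpu", "memory"], **results}

def analyze(data):
    # Summarize response times and throughput into charts and reports.
    return f"report for {data['env']['experiment']}"

def orchestrate(experiment):
    return analyze(gather(run(provision(experiment))))
```

Each phase feeds its output into the next, which is why a single Orchestrate.py invocation can take an experiment from bare AWS credentials to finished analysis.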
To find out how to run an experiment, refer to our product-specific guides:
The Performance Testing Framework is provided as is, and is not covered by Atlassian support. Our support team won't be able to assist you with interpreting the test results.