Audit log integrations


Bitbucket Data Center and Server writes audit logs to the database and a log file. By itself, the log file saves you the effort of periodically exporting your audit logs from the database for long-term storage. However, the main purpose of the file is to make it easy to integrate Bitbucket with a third-party logging platform.

Selecting which events to log and adjusting data retention

The Audit log settings menu controls the coverage of audit logs in both the database and the log file.

The log file's retention is ultimately controlled by log rotation. We use basic log rotation to manage the volume of logs, automatically archiving the current audit log file when:
  • the node's time reaches 12:00 midnight, or
  • the audit log file reaches 100 MB.

Once a node reaches its log file retention limit, the oldest archive is deleted. By default, the limit is 100 log files (the current audit log file plus 99 archives). Make sure you allocate enough disk space for these log files on each application node; for the default setting of 100 files, you should allow 10 GB.

To customize the log rotation rules (and, ultimately, the retention rules), use the following bitbucket.properties file parameters:

  • com.atlassian.audit.file.max.file.size controls the maximum size (in MB) for the current audit log file before it is archived.
  • com.atlassian.audit.file.max.file.count controls the maximum number of audit log files (counting the current audit log file and all archived log files). 

Both default to 100. If you adjust either of these values, make sure you allocate the right amount of space on each application node. For example, if you set com.atlassian.audit.file.max.file.count=150, you should allocate at least 15GB just for log files on each application node.
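For example, a bitbucket.properties entry like the following (the values here are purely illustrative) would archive the current file at 50 MB and keep at most 150 files, which works out to roughly 7.5 GB of audit logs per node:

# Illustrative values only: archive the current audit log file at 50 MB
com.atlassian.audit.file.max.file.size=50
# Keep at most 150 files (the current file plus 149 archives)
com.atlassian.audit.file.max.file.count=150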

For more information on using the bitbucket.properties file, see the Bitbucket configuration properties documentation.


Log file details

Bitbucket Server writes audit logs in real time to the home directory. Specifically, these logs are written to the audit log file.  On clustered Bitbucket Data Center deployments, each application node will produce its own log file in its local home directory. 

Location

To integrate the audit log file with a third-party logging platform, you'll need to know its exact location. The audit log files are written to the log/audit directory under the local home directory, so the exact path may vary depending on how you've configured your home directory. For more information, see the documentation on the Bitbucket home directory.

On a clustered Bitbucket Data Center deployment, the audit log file's directory should be the same on all nodes.

File name

The audit log file name uses the following naming convention:

YYYYMMDD-XXXXX.audit.log


The XXXXX portion is a 5-digit number (starting at 00000) that tracks the number of audit log files archived on the same day (YYYYMMDD). For example, if there are 5 archived log files today (January 1, 2020), then:

  • the oldest archived log file is 20200101-00000.audit.log
  • the current audit log file is 20200101-00005.audit.log

Format

Each audit event is written as a single-line JSON entry to the audit log file. Every line in the file represents one event, so you can use regular expressions to run simple searches if needed.
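For example, on Linux you could count the events in an archived file and pull out any entries that mention a particular term (the file name and search term below are just placeholders):

# Count the number of audit events (one JSON entry per line)
wc -l 20200101-00000.audit.log
# Find events containing a particular term
grep -i 'permission' 20200101-00000.audit.log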

Integrating with logging agents

Most enterprise environments use a third-party logging platform to aggregate, store, and otherwise manage logs from all hosts. Logging platforms like AWS CloudWatch and Splunk use agents to collect logs from every host in the environment. These agents are installed on each host, collecting local logs and sending them back to a centralized location to be aggregated, analyzed, audited, and/or stored.

If your logging platform uses agents this way, you can configure each node's agent to monitor the audit log file directly. Logging agents from most major platforms (including AWS CloudWatch, Splunk, ELK, and Sumo Logic) are compatible with the audit log file. 

Amazon CloudWatch Agent

We provide a Quick Start for Bitbucket Data Center for easy deployment on AWS. This Quick Start lets you deploy Bitbucket Data Center along with Amazon CloudWatch to monitor it.

To set up Amazon CloudWatch, use the Enable CloudWatch Integration parameter's default setting (namely, Metrics and Logs). The Quick Start will then configure the Amazon CloudWatch Agent to collect the logs from each node's audit log files. The agent will send these logs to a separate log group named bitbucket-<aws-stack-name>-audit.

Our Quick Start also sets up a default dashboard to help you read the collected data, including logs from each audit log file. Refer to Working With Log Groups and Log Streams for related information.


Manual configuration

If needed, you can also manually configure the Amazon CloudWatch agent to collect the audit log files. To do this, set the following parameters in the Agent Configuration File:

  • file: set this to <local home directory>/log/audit/*. Don't forget to use the absolute path to the local home directory.
  • log_group_name and log_stream_name: use these to send Bitbucket Data Center's audit logs to a specific log group or stream.

See the CloudWatch Logs Agent Reference for more information. If you want to see how we automate this via Ansible, check out our deployment playbooks at https://bitbucket.org/atlassian/dc-deployments-automation/src/master/.
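As a rough sketch only (not taken from the Quick Start or the playbooks above), an entry in the CloudWatch Logs agent's configuration file could look like the following. It assumes the agent's INI-style configuration format and a default local home directory; the section name, log group name, and stream name are placeholders:

[bitbucket-audit]
file = /var/atlassian/application-data/bitbucket/log/audit/*
log_group_name = bitbucket-audit-logs
log_stream_name = {hostname}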


Splunk Universal Forwarder

For Splunk Enterprise or Splunk Cloud, you can use the Splunk Universal Forwarder as your logging agent. This will involve installing the universal forwarder on each application node.

You'll also need to define each node's audit log directory as one of the forwarder's inputs. This will set the forwarder to send all logs from the audit log directory to a pre-configured receiver. One way to define the forwarder's inputs is through the Splunk CLI. For Linux systems, use the following command on each application node:

./splunk add monitor <local home directory>/log/audit/

Refer to the Splunk documentation for detailed instructions on configuring the Splunk Universal Forwarder on each node.

Configuring a source type in Splunk

Source types define how Splunk indexers parse your data. That includes how to separate data into events, how to parse the events, and how to extract the timestamp from the events.

For Splunk to interpret your audit logs correctly, you’ll need to add a new source type for Atlassian Audit logs on the indexer(s), and tell the forwarders you set up to tag outgoing data with that source type.

You’ll need to create a source type named atlassian-audit with the following properties:

[atlassian-audit]
pulldown_type = true
SHOULD_LINEMERGE = false
disabled = false
category = Custom
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %s,"nano":%9N
TIME_PREFIX = \"timestamp\":{\"epochSecond\":

For details on how to do this, see how to create source types in Splunk.

After creating a source type, you’ll need to configure your forwarders to label outgoing data with the new source type. This can be done by adding the sourcetype property to the monitor you have configured in an inputs.conf file. For example:

[monitor:///path/to/bitbucket/home/log/audit]
disabled = false
sourcetype = atlassian-audit

For more information, refer to the Splunk documentation on forwarder inputs and source types.

Filebeat (for the ELK stack)

Within the ELK stack, you can use the Filebeat plugin to collect logs from each node's audit log files. Each time a log is written to the current audit log file, Filebeat will forward that log to Elasticsearch or Logstash.

To set this up, install Filebeat on each application node first. Then, set the audit log file directory as a Filebeat input. To do that, add the directory (with a trailing wildcard) as a path in the filebeat.inputs section of each node's filebeat.yml configuration file. For example:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - <local home directory>/log/audit/*
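Filebeat also needs an output to send the collected entries to. As a minimal sketch (the host name and port are placeholders, and you may be sending to Elasticsearch instead), you could point it at a Logstash instance in the same filebeat.yml:

output.logstash:
  hosts: ["logstash.example.com:5044"]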

Sumo Logic installed collectors

If you have a Sumo Logic instance, you can use installed collectors to collect logs from each node's audit log files. To do this, install a collector on each node first. Then, add <local home directory>/log/audit/* as a Local File Source to each node's collector.
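If you manage an installed collector with local configuration files rather than the Sumo Logic UI, the source definition might look something like the sketch below. This assumes Sumo Logic's JSON source format; the source name and category are placeholders, and the path assumes a default local home directory:

{
  "api.version": "v1",
  "sources": [
    {
      "sourceType": "LocalFile",
      "name": "bitbucket-audit",
      "category": "bitbucket/audit",
      "pathExpression": "/var/atlassian/application-data/bitbucket/log/audit/*"
    }
  ]
}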


Deprecated audit log file format

Previous releases of Bitbucket Server also generated an audit log file, but this file used a different format. This format is now deprecated. If you require logs generated in this format, you can configure Bitbucket to also generate the legacy format alongside the current one. However, we recommend that you use the current audit log file, as we will remove the legacy format in Bitbucket 8.0.

The legacy audit log file will have the same set of defaults and settings as Bitbucket Server releases before 7.0, such as:

  • The file will rotate at 25 MB.
  • There will be a 100-file limit on the number of legacy audit log files that Bitbucket keeps. When the limit is reached, the oldest file is deleted each day.

To enable the legacy audit log file, set audit.legacy.log=true in the bitbucket.properties file. For more information on using the bitbucket.properties file, see the Bitbucket configuration properties documentation.
