Audit Log Integrations in Confluence
Confluence Data Center writes audit logs to the database and a log file. By itself, the log file saves you the effort of periodically exporting your audit logs from the database for long-term storage. However, the main purpose of the file is to make it easy to integrate Confluence Data Center with a third-party logging platform.
Event coverage and log retention
The Audit log settings menu controls the coverage of audit logs in both the database and the log file. However, this menu does not control the log file's retention period. Instead, log file retention is handled through rotation: a new audit log file is created whenever:
- the node's time reaches 12:00 midnight, or
- the audit log file reaches 100MB.
Once a node reaches the log file retention limit, the oldest log file is deleted. By default, the limit is 100 log files (the current audit log file + 99 archives). Make sure you allocate enough disk space for these log files on each application node. For the default setting of 100 files, you should allow 10GB.
Log file details
Confluence Data Center writes audit logs in real time to the local home directory. Specifically, these logs are written to the audit log file under <local home directory>/log/audit. On clustered Confluence Data Center deployments, each application node will produce its own audit log file in its local home directory.
To integrate the audit log file with a third-party logging platform, you'll need to know its exact location. This may vary depending on how you configured your home directory. For more information about the local home directory, see Confluence Home and other important directories.
On a clustered Confluence Data Center deployment, the audit log file's directory should be the same on all nodes.
The audit log file name uses the following naming convention:
The XXXXX portion is a 5-digit number (starting with 00000) that tracks the number of audit log files archived on the same day (YYYYMMDD). For example, if there are 5 archived log files today (January 1, 2020), then:
- the oldest archived log file is
- the current audit log file is
Each audit event is written to the audit log file as a JSON entry. Every line in the file represents a single event, so you can use regular expressions to do simple searches if needed.
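For example, a quick search with grep on a node (the username jsmith is just a placeholder, and the path uses the same placeholder as elsewhere on this page):

    # List every audit event that mentions the placeholder username jsmith
    grep "jsmith" <local home directory>/log/audit/*audit.log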
Integrating with logging agents
Most enterprise environments use a third-party logging platform to aggregate, store, and otherwise manage logs from all hosts. Logging platforms like AWS CloudWatch and Splunk use agents to collect logs from every host in the environment. These agents are installed on each host, collecting local logs and sending them back to a centralized location to be aggregated, analyzed, audited, and/or stored.
If your logging platform uses agents this way, you can configure each node's agent to monitor the audit log file directly. Logging agents from most major platforms (including AWS CloudWatch, Splunk, ELK, and Sumo Logic) are compatible with the audit log file.
Amazon CloudWatch Agent
We provide a Quick Start for Confluence Data Center for easy deployment on AWS. This Quick Start lets you deploy Confluence Data Center along with an Amazon CloudWatch instance to monitor it.
To set up Amazon CloudWatch, use the Enable CloudWatch Integration parameter's default setting (namely, Metrics and Logs). The Quick Start will then configure the Amazon CloudWatch Agent to collect the logs from each node's audit log files. The agent will send these logs to a separate log group named
Our Quick Start also sets up a default dashboard to help you read the collected data, including logs from each audit log file. Refer to Working With Log Groups and Log Streams for related information.
If needed, you can also manually configure the Amazon CloudWatch agent to collect the audit log files. To do this, set the following parameters in the Agent Configuration File:
- file: set this to <local home directory>/log/audit/*. Don't forget to use the absolute path to the home directory.
- log_group_name and log_stream_name: use these to send Confluence Data Center's audit logs to a specific log group or stream.
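As a rough sketch only, and assuming the unified CloudWatch agent's JSON configuration format (where the file setting above appears as file_path under logs_collected.files.collect_list), the relevant section might look like the following. The log group and stream names are placeholders, and {hostname} is one of the agent's built-in substitution variables:

    {
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "<local home directory>/log/audit/*",
                "log_group_name": "confluence-audit-logs",
                "log_stream_name": "{hostname}"
              }
            ]
          }
        }
      }
    }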
Splunk Universal Forwarder
For Splunk Enterprise or Splunk Cloud, you can use the Splunk Universal Forwarder as your logging agent. This will involve installing the universal forwarder on each application node.
You'll also need to define each node's audit log directory as one of the forwarder's inputs. This will set the forwarder to send all logs from the audit log directory to a pre-configured receiver. One way to define the forwarder's inputs is through the Splunk CLI. For Linux systems, use the following command on each application node:
./splunk add monitor <local home directory>/log/audit/*audit.log
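Alternatively, you can define the same input in the forwarder's inputs.conf file. The following is only a sketch; _json is Splunk's built-in JSON sourcetype, and you may also want to route these logs to a specific index:

    # Replace <local home directory> with the absolute path to the node's local home directory
    [monitor://<local home directory>/log/audit/*audit.log]
    disabled = false
    sourcetype = _json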
Refer to Splunk's documentation for detailed instructions on configuring the Splunk Universal Forwarder on each node.
Filebeat (for the ELK stack)
Within the ELK stack, you can use the Filebeat plugin to collect logs from each node's audit log files. Each time a log is written to the current audit log file, Filebeat will forward that log to Elasticsearch or Logstash.
To set this up, install Filebeat first on each application node. Then, set the audit log file directory as a Filebeat input. To do that, add its directory as a path in the filebeat.inputs section of each node's filebeat.yml configuration file. For example:
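Here's a minimal sketch, assuming Filebeat's log input type (newer Filebeat versions use the filestream input type instead):

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        # Replace <local home directory> with the absolute path to the node's local home directory
        - <local home directory>/log/audit/*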
Sumo Logic installed collectors
If you have a Sumo Logic instance, you can use installed collectors to collect logs from each node's audit log files. To do this, install a collector on each node first. Then, add <local home directory>/log/audit/* as a Local File Source to each node's collector.
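If you manage your installed collectors with local configuration files, a Local File Source for the audit logs might look roughly like the sketch below; the source name is a placeholder, and pathExpression uses the same placeholder path as above:

    {
      "api.version": "v1",
      "source": {
        "sourceType": "LocalFile",
        "name": "confluence-audit-logs",
        "pathExpression": "<local home directory>/log/audit/*"
      }
    }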