Document-based replication in Jira Data Center

The Document-Based Replication (DBR) feature, introduced in Jira 8.12.0, mitigates the impact of apps on indexing time and prevents index inconsistencies in Jira Data Center, even when an app takes a long time to index data. With DBR enabled, Jira Data Center is much more horizontally scalable: the more nodes there are in the cluster, the better the overall throughput, while search consistency is maintained.

DBR does not introduce any API changes.

Benefits of DBR

We've introduced DBR to lower the time it takes to distribute index updates across a Jira DC cluster. The immediate results are the following:

  • From the index perspective: improved, more stable, and faster index consistency between the nodes in Jira DC.

  • From the user perspective: changes in the index are distributed between nodes faster. This creates less friction for end users because data is more consistent across the nodes, so users can collaborate effectively.

DBR considerations

The architectural changes require each node to communicate directly with the other nodes, which can double the network traffic. When we tested this on an 8-node cluster running a sustained stress load of 400 requests per second, the total network traffic amounted to 25% of the capacity of a 1 Gbps link. Traffic will be lower if the load or the cluster size is smaller.

Metrics without DBR (versions below 8.12.0)

The chart shows the number of operations that the replication process needs to perform to catch up with issue changes and propagate those to all the nodes. The job runs every 5 seconds and makes sure each issue gets indexed in the local Lucene index.

The blue dots show a healthy index and data being consistently replicated between the nodes. The scattered colored dots show the index getting delayed which results in data inconsistency. Ideally there is no or minimal delay in data indexing. 

 

A 0 ms index time delay means Jira DC runs without apps impacting issue indexing. This is what a healthy Jira DC looks like. The dots are frequent, which means that the replication job consumes the queue every 5 seconds and nothing piles up.

The bad state shows up when the index delay is 100, 200, or 500 ms. In this state, the 4-node Jira DC cluster we used for testing can't keep up with installed apps that delay issue indexing by more than 100 ms. The dots get further apart, which means that each run takes more than 5 seconds to catch up. The queue also keeps growing, and the job finds more and more items with each run.

Metrics with DBR (8.12.0+)

 

With DBR on, the “bad state” is almost completely gone. Jira DC can withstand even a 20-second index time delay. In conclusion, DBR can mitigate consistency problems even if apps take a long time to index data.

The test was carried out with the following setup:

  • 4 Jira DC nodes, running on c5.9xlarge instances.

  • Jira version: 8.12.0-SNAPSHOT

  • Test duration: 40 minutes

  • Load: 400 active concurrent users (ramp-up time of 10 minutes, then flat load for 20 minutes).

  • 2M issues

Stress tests

Our stress test on an 8-node cluster running Jira DC 8.5 shows that the index became highly inconsistent once the load exceeded 200 requests per second.

 

Jira DC 8.13 was able to handle a higher flat load of 330 requests per second, with the index remaining consistent throughout. This shows that Jira DC can now handle the index consistency issues caused by high load.


Our tests used apps that introduced a synthetic delay in order to replicate real-world conditions, but actual results can vary depending on the apps in use.

Technical background

Jira DC uses the replicated index operations (RIO) database table to replicate index changes across the cluster. When a user performs an action that changes the index (creates an issue, adds a comment, and so on) on a node, that node creates an entry in the RIO table as the sender node. The other nodes then run the replay process (RP) every 5 seconds to process the RIO table, load the issue from the database, and create Lucene entries for the issue. RP tries to catch up with all changes in a single thread. As long as each run takes less than 5 seconds and is able to process all RIOs created since the last execution, Jira DC is healthy and the index is consistent across the cluster.
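
That flow can be pictured as a small polling loop. The sketch below is illustrative only and does not reflect Jira's actual classes; ReplayProcessSketch, fetchRioEntriesAfter, loadIssueFromDatabase, and writeToLuceneIndex are hypothetical stand-ins for the real internals.

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the replay process (RP): every 5 seconds a single thread
// reads new rows from the RIO table and re-creates the Lucene documents locally.
class ReplayProcessSketch {
    private long lastProcessedId = 0;

    void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(this::replayOnce, 5, 5, TimeUnit.SECONDS);
    }

    private void replayOnce() {
        // Read RIO entries created by other (sender) nodes since the last run.
        List<RioEntry> pending = fetchRioEntriesAfter(lastProcessedId);
        for (RioEntry entry : pending) {
            Issue issue = loadIssueFromDatabase(entry.issueId()); // expensive: full DB read + document build
            writeToLuceneIndex(issue);                            // local index update
            lastProcessedId = entry.id();
        }
        // Healthy cluster: this run finishes well under 5 seconds and the
        // backlog of pending RIO entries does not grow between runs.
    }

    // Stubs so the sketch compiles; the real implementations live inside Jira.
    record RioEntry(long id, long issueId) {}
    record Issue(long id) {}
    private List<RioEntry> fetchRioEntriesAfter(long id) { return List.of(); }
    private Issue loadIssueFromDatabase(long issueId) { return new Issue(issueId); }
    private void writeToLuceneIndex(Issue issue) { /* build and write the Lucene document */ }
}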


Document-Based Replication (DBR) uses the cache replication mechanism to share the Lucene documents generated on the sender node with the other nodes directly. The other nodes don’t have to read the full issue data from the DB or prepare the Lucene document (which is the expensive part). They apply what is being sent directly to their indexes (which is the cheap part).
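
A rough sketch of that path, again with hypothetical names rather than Jira's real API: the sender node builds the Lucene document once and broadcasts it, and receiving nodes apply the pre-built document without touching the database.

// Illustrative sketch of the DBR path (not Jira's actual classes).
class DbrSketch {
    // Sender node: runs when a user action changes an issue.
    void onIssueChanged(long issueId) {
        Document doc = buildLuceneDocument(issueId);               // the expensive part, done once
        broadcastViaCacheReplication(new DbrMessage(issueId, doc)); // shared with the other nodes
    }

    // Receiver node: applies the pre-built document directly (the cheap part).
    void onDbrMessage(DbrMessage message) {
        updateLocalIndex(message.issueId(), message.document());
    }

    // Stubs so the sketch compiles.
    record DbrMessage(long issueId, Document document) {}
    record Document(String serializedFields) {}
    private Document buildLuceneDocument(long issueId) { return new Document("..."); }
    private void broadcastViaCacheReplication(DbrMessage m) { /* cache replication transport */ }
    private void updateLocalIndex(long issueId, Document d) { /* conditional Lucene update */ }
}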

The RIO/RP method is still there, but it uses entity versioning (see below) to figure out if DBR has done all the work, and corrects anything DBR missed. It’s also used as a fallback method in case there are network issues.

DBR keeps the indexing delay between 10 and 100 milliseconds on each node, irrespective of the number of issues being changed.

Entity versioning

To keep the index consistent, we introduced entity versioning. With entity versioning, each update of an issue also bumps its version: a number that goes up with every change. The issue is associated with that number both in the DB and in the index, and the latest version (the one with the highest number) is the one that should exist in the index. This prevents version mismatches in the cluster resulting, for example, from issue status changes that happen at the same time, and it ensures consistency between the database and the nodes.
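
As a minimal sketch of that rule (hypothetical names, not Jira's API), a receiving node would apply an incoming document only if its version is higher than the version already present in the local index:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified, non-atomic sketch of a version-guarded index update: highest version wins.
class VersionedIndexSketch {
    private final Map<Long, Long> indexedVersions = new ConcurrentHashMap<>();

    boolean applyIfNewer(long issueId, long incomingVersion, Runnable writeDocument) {
        Long indexed = indexedVersions.get(issueId);
        if (indexed != null && indexed >= incomingVersion) {
            return false;                               // stale update: the index already holds a newer (or equal) version
        }
        writeDocument.run();                            // write the Lucene document locally
        indexedVersions.put(issueId, incomingVersion);  // remember the version that is now indexed
        return true;
    }
}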

Actions using DBR

The DBR mechanism is generally applied for user actions that trigger reindexing.

Actions using DBR:

  • Background reindexing

  • Project archiving

Actions not using DBR:

  • Full foreground reindexing

  • Issue archiving

  • Reindexing when a node starts

DBR in the logs

To find more data about DBR actions and monitor the cluster behavior, look up the logs in the atlassian-jira.log file.

In the standard atlassian-jira.log you can find DBR-related aggregated logs showing DBR properties on a given node. All DBR logs are prefixed with [DBR].

DBR Sender Stats Logs

The DBR sender logs describe the part responsible for preparing the DBR message with the Lucene document(s) and have the following format:

[DBR] [SENDER] snapshot stats...

[DBR] [SENDER] total stats...

The snapshot stats cover the last ~5min and the total stats cover the time from the start of a node.

The following are the most important stats:

  • createDBRMessageUpdateInMillis / createDBRMessageUpdateWithRelatedInMillis - the time to create a DBR message; note that this is done asynchronously and not in the request thread generating the change in the local index;

  • createDBRMessageUpdateBytes / createDBRMessageUpdateWithRelatedBytes - the size of the DBR message;

  • sendDBRMessage - the number of DBR messages successfully delivered to the transport layer (not the destination nodes); this should be equal to the number of index updates; note that a single DBR message may contain a single issue and some or all related entities (comments, worklogs, change history);

  • sendDBRMessageErrors - the number of DBR messages which weren't delivered to the transport layer.

See an example of a DBR sender log
[DBR] [SENDER] total stats period: PT3H19M53.382S, data: 
{
  "createDBRMessageUpdateIssueIndex": 0,
  "createDBRMessageUpdateCommentIndex": 12,
  "createDBRMessageUpdateWorklogIndex": 7,
  "createDBRMessageUpdateInMillis": {
    "count": 19,
    "min": 0,
    "max": 2,
    "sum": 2,
    "avg": 0,
    "distributionCounter": {
      "0": 18,
      "1": 0,
      "5": 1,
      "10": 0,
      "50": 0,
      "100": 0,
      "500": 0,
      "1000": 0
    }
  },
  "createDBRMessageUpdateBytes": {
    "count": 19,
    "min": 936,
    "max": 1574,
    "sum": 22994,
    "avg": 1210,
    "distributionCounter": {
      "0": 0,
      "1000": 7,
      "50000": 12,
      "100000": 0,
      "500000": 0,
      "1000000": 0,
      "2000000": 0,
      "5000000": 0,
      "10000000": 0
    }
  },
  "createDBRMessageUpdateErrors": 0,
  "createDBRMessageUpdateWithRelatedIssueIndex": {
    "count": 736,
    "min": 1,
    "max": 1,
    "sum": 736,
    "avg": 1,
    "distributionCounter": {
      "0": 0,
      "1": 736
    }
  },
  "createDBRMessageUpdateWithRelatedCommentIndex": {
    "count": 736,
    "min": 0,
    "max": 8,
    "sum": 121,
    "avg": 0,
    "distributionCounter": {
      "0": 713,
      "1": 4,
      "10": 19,
      "100": 0
    }
  },
  "createDBRMessageUpdateWithRelatedWorklogIndex": {
    "count": 736,
    "min": 0,
    "max": 2,
    "sum": 6,
    "avg": 0,
    "distributionCounter": {
      "0": 731,
      "1": 4,
      "10": 1,
      "100": 0
    }
  },
  "createDBRMessageUpdateWithRelatedChangesIndex": {
    "count": 736,
    "min": 1,
    "max": 78,
    "sum": 11609,
    "avg": 15,
    "distributionCounter": {
      "0": 0,
      "1": 66,
      "10": 182,
      "100": 488
    }
  },
  "createDBRMessageUpdateWithRelatedInMillis": {
    "count": 736,
    "min": 0,
    "max": 331,
    "sum": 1677,
    "avg": 2,
    "distributionCounter": {
      "0": 539,
      "1": 145,
      "5": 38,
      "10": 7,
      "50": 2,
      "100": 0,
      "500": 5,
      "1000": 0
    }
  },
  "createDBRMessageUpdateWithRelatedBytes": {
    "count": 736,
    "min": 26405,
    "max": 227050,
    "sum": 42706124,
    "avg": 58024,
    "distributionCounter": {
      "0": 0,
      "1000": 0,
      "50000": 459,
      "100000": 171,
      "500000": 106,
      "1000000": 0,
      "2000000": 0,
      "5000000": 0,
      "10000000": 0
    }
  },
  "createDBRMessageUpdateWithRelatedErrors": 0,
  "sendDBRMessage": 754,
  "sendDBRMessageErrors": 0,
  "maxErrorsSample": 10,
  "createDBRMessageUpdateErrorsSample": {},
  "createDBRMessageUpdateWithRelatedErrorsSample": {},
  "sendDBRMessageErrorsSample": {}
}
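
As a quick health check, such a payload can be parsed and a few indicators compared: sendDBRMessageErrors should stay at 0, and the average message size should be reasonable for your network. A minimal sketch, assuming the JSON has been saved to a file with the log prefix stripped and Jackson (com.fasterxml.jackson.databind) is available:

import java.nio.file.Files;
import java.nio.file.Path;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Reads the JSON payload of a "[DBR] [SENDER] ... stats" entry and prints a few
// health indicators. The field names match the example log above.
public class DbrSenderStatsCheck {
    public static void main(String[] args) throws Exception {
        JsonNode stats = new ObjectMapper().readTree(Files.readString(Path.of(args[0])));

        long sent = stats.path("sendDBRMessage").asLong();
        long errors = stats.path("sendDBRMessageErrors").asLong();
        long avgBytes = stats.path("createDBRMessageUpdateWithRelatedBytes").path("avg").asLong();

        System.out.printf("DBR messages sent: %d, send errors: %d, avg message size: %d bytes%n",
                sent, errors, avgBytes);
        if (errors > 0) {
            System.out.println("WARNING: some DBR messages never reached the transport layer");
        }
    }
}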

DBR Receiver Stats Logs

The DBR receiver logs describe the part responsible for receiving DBR messages with the Lucene document(s) from other nodes and have the following format:

[DBR] [RECEIVER] snapshot stats...

[DBR] [RECEIVER] total stats...

The snapshot stats cover the last ~5min and the total stats cover the time from the start of a node.

The following are the most important stats:

  • receiveDBRMessage - the number of received DBR messages;

  • receiveDBRMessageDelayedInMillis - the delay between creating a DBR message on the source node and accepting the DBR message on the destination node; note that this is based on comparing local times from two different nodes;

  • processDBRMessageUpdateSerializeInMillis / processDBRMessageUpdateWithRelatedSerializeInMillis - the partial time to process the DBR message locally - the time to deserialise the Lucene document(s) from the DBR message;

  • processDBRMessageUpdateIndexInMillis / processDBRMessageUpdateWithRelatedIndexInMillis - the partial time to process the DBR message locally - the time to perform a conditional update of the local index with the documents from the DBR message.

See an example of a DBR receiver log
[DBR] [RECEIVER] total stats period: PT2H10M25.204S, data: 
{
  "receiveDBRMessage": 1879,
  "receiveDBRMessageUpdate": 5,
  "receiveDBRMessageUpdateWithRelated": 1874,
  "receiveDBRMessageDelayedInMillis": {
    "count": 1879,
    "min": 4,
    "max": 3019,
    "sum": 61452,
    "avg": 32,
    "distributionCounter": {
      "500": 1861,
      "1000": 4,
      "1500": 1,
      "2000": 1,
      "3000": 11,
      "4000": 1,
      "5000": 0
    }
  },
  "skipDBRMessageWhenIndexNotAvailable": 0,
  "skipDBRMessageWhenIndexReplicationPaused": 0,
  "processDBRMessageUpdateIssueIndex": 0,
  "processDBRMessageUpdateCommentIndex": 3,
  "processDBRMessageUpdateWorklogIndex": 2,
  "processDBRMessageUpdateBytes": {
    "count": 5,
    "min": 932,
    "max": 2567,
    "sum": 7075,
    "avg": 1415,
    "distributionCounter": {
      "0": 0,
      "1000": 2,
      "50000": 3,
      "100000": 0,
      "500000": 0,
      "1000000": 0,
      "2000000": 0,
      "5000000": 0,
      "10000000": 0
    }
  },
  "processDBRMessageUpdateSerializeInMillis": {
    "count": 5,
    "min": 0,
    "max": 0,
    "sum": 0,
    "avg": 0,
    "distributionCounter": {
      "0": 5,
      "1": 0,
      "5": 0,
      "10": 0,
      "50": 0,
      "100": 0,
      "500": 0,
      "1000": 0
    }
  },
  "processDBRMessageUpdateIndexInMillis": {
    "count": 5,
    "min": 0,
    "max": 2,
    "sum": 2,
    "avg": 0,
    "distributionCounter": {
      "0": 4,
      "1": 0,
      "5": 1,
      "10": 0,
      "50": 0,
      "100": 0,
      "500": 0,
      "1000": 0
    }
  },
  "processDBRMessageUpdateErrors": 0,
  "processDBRMessageUpdateWithRelatedIssueIndex": {
    "count": 1873,
    "min": 1,
    "max": 1,
    "sum": 1873,
    "avg": 1,
    "distributionCounter": {
      "0": 0,
      "1": 1873
    }
  },
  "processDBRMessageUpdateWithRelatedCommentIndex": {
    "count": 1873,
    "min": 0,
    "max": 4,
    "sum": 9,
    "avg": 0,
    "distributionCounter": {
      "0": 1868,
      "1": 3,
      "10": 2,
      "100": 0
    }
  },
  "processDBRMessageUpdateWithRelatedWorklogIndex": {
    "count": 1873,
    "min": 0,
    "max": 0,
    "sum": 0,
    "avg": 0,
    "distributionCounter": {
      "0": 1873,
      "1": 0,
      "10": 0,
      "100": 0
    }
  },
  "processDBRMessageUpdateWithRelatedChangesIndex": {
    "count": 1873,
    "min": 1,
    "max": 363,
    "sum": 35490,
    "avg": 18,
    "distributionCounter": {
      "0": 0,
      "1": 15,
      "10": 280,
      "100": 1568,
      "9223372036854775807": 10
    }
  },
  "processDBRMessageUpdateWithRelatedBytes": {
    "count": 1873,
    "min": 26311,
    "max": 333676,
    "sum": 113819327,
    "avg": 60768,
    "distributionCounter": {
      "0": 0,
      "1000": 0,
      "50000": 1357,
      "100000": 290,
      "500000": 226,
      "1000000": 0,
      "2000000": 0,
      "5000000": 0,
      "10000000": 0
    }
  },
  "processDBRMessageUpdateWithRelatedSerializeInMillis": {
    "count": 1873,
    "min": 0,
    "max": 464,
    "sum": 1842,
    "avg": 0,
    "distributionCounter": {
      "0": 1589,
      "1": 206,
      "5": 58,
      "10": 10,
      "50": 4,
      "100": 0,
      "500": 6,
      "1000": 0
    }
  },
  "processDBRMessageUpdateWithRelatedIndexInMillis": {
    "count": 1873,
    "min": 1,
    "max": 2699,
    "sum": 65728,
    "avg": 35,
    "distributionCounter": {
      "0": 0,
      "1": 12,
      "5": 580,
      "10": 99,
      "50": 886,
      "100": 140,
      "500": 153,
      "1000": 0,
      "9223372036854775807": 3
    }
  },
  "processDBRMessageUpdateWithRelatedErrors": 0,
  "maxErrorsSample": 10,
  "processDBRMessageUpdateErrorsSample": {},
  "processDBRMessageUpdateWithRelatedErrorsSample": {}
}
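
A similar sketch for the receiver side estimates how many messages arrived within ~500 ms. This assumes the distributionCounter keys are upper bounds in milliseconds (as the example above suggests) and that the JSON payload has been saved to a file with the log prefix stripped:

import java.nio.file.Files;
import java.nio.file.Path;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Summarizes the receive delay from a "[DBR] [RECEIVER] ... stats" JSON payload.
public class DbrReceiverDelayCheck {
    public static void main(String[] args) throws Exception {
        JsonNode stats = new ObjectMapper().readTree(Files.readString(Path.of(args[0])));

        JsonNode delay = stats.path("receiveDBRMessageDelayedInMillis");
        long total = delay.path("count").asLong();
        long within500ms = delay.path("distributionCounter").path("500").asLong(); // assumed bucket: <= 500 ms
        double share = total == 0 ? 0 : 100.0 * within500ms / total;

        System.out.printf("messages received: %d, delivered within ~500 ms: %d (%.1f%%), avg delay: %d ms%n",
                total, within500ms, share, delay.path("avg").asLong());
    }
}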


Cache Replication Stats Logs

These are not specific to the DBR feature but contain stats for all remote caches replicated by value and have the following format:

[LOCALQ] [VIA-INVALIDATION] Cache replication queue stats per node: ... snapshot stats ...

[LOCALQ] [VIA-INVALIDATION] Cache replication queue stats per node: ... total stats ...

The snapshot stats cover the last ~5min and the total stats cover the time from the start of a node.

The following are the most important stats:

  • timeToAddMillis - the time to store a cache replication message in the local store before sending it to the other node;

  • sendCounter - the number of cache replication messages sent from the current node to the destination node;

  • timeToSendMillis - the time to deliver the cache replication message from the current node to the destination node.

See an example of a cache replication log
[LOCALQ] [VIA-INVALIDATION] Cache replication queue stats per node: node2 total stats: 
{
  "timestampMillis": 1597321617576,
  "nodeId": "node2",
  "queueSize": 0,
  "startQueueSize": 0,
  "startTimestampMillis": 1597313148439,
  "startMillisAgo": 8469137,
  "closeCounter": 0,
  "addCounter": 28568,
  "droppedOnAddCounter": 0,
  "criticalAddCounter": 0,
  "criticalPeekCounter": 0,
  "criticalRemoveCounter": 0,
  "peekCounter": 0,
  "peekOrBlockCounter": 28837,
  "removeCounter": 28568,
  "backupQueueCounter": 0,
  "closeErrorsCounter": 0,
  "addErrorsCounter": 0,
  "peekErrorsCounter": 0,
  "peekOrBlockErrorsCounter": 0,
  "removeErrorsCounter": 0,
  "backupQueueErrorsCounter": 0,
  "lastAddTimestampMillis": 1597321617072,
  "lastAddMillisAgo": 504,
  "lastPeekTimestampMillis": 0,
  "lastPeekMillisAgo": 0,
  "lastPeekOrBlockTimestampMillis": 1597321617072,
  "lastPeekOrBlockMillisAgo": 504,
  "lastRemoveTimestampMillis": 1597321617074,
  "lastRemoveMillisAgo": 502,
  "lastBackupQueueTimestampMillis": 0,
  "lastBackupQueueMillisAgo": 0,
  "timeToAddMillis": {
    "count": 28568,
    "min": 1,
    "max": 116,
    "sum": 50334,
    "avg": 1,
    "distributionCounter": {}
  },
  "timeToPeekMillis": {
    "count": 0,
    "min": 0,
    "max": 0,
    "sum": 0,
    "avg": 0,
    "distributionCounter": {}
  },
  "timeToPeekOrBlockMillis": {
    "count": 28837,
    "min": 0,
    "max": 60007,
    "sum": 84373765,
    "avg": 2925,
    "distributionCounter": {}
  },
  "timeToRemoveMillis": {
    "count": 28568,
    "min": 0,
    "max": 132,
    "sum": 27294,
    "avg": 0,
    "distributionCounter": {}
  },
  "timeToBackupQueueMillis": {
    "count": 0,
    "min": 0,
    "max": 0,
    "sum": 0,
    "avg": 0,
    "distributionCounter": {}
  },
  "staleCounter": 0,
  "sendCounter": 28300,
  "droppedOnSendCounter": 0,
  "timeToSendMillis": {
    "count": 28300,
    "min": 0,
    "max": 4153,
    "sum": 200252,
    "avg": 7,
    "distributionCounter": {}
  },
  "sendRuntimeExceptionCounter": 1,
  "sendCheckedExceptionCounter": 0,
  "sendNotBoundExceptionCounter": 536
}

Node Index Replay Stats Logs

Node Index Replay is the process responsible for replaying index operations from other nodes. As of Jira 8.12, the replay is done conditionally: if an index operation to replay has already been delivered by DBR, the reindexing is skipped. This can be used to measure DBR effectiveness. When all index replay operations are skipped, it means all indexing updates are propagated through the cluster via DBR (100% DBR effectiveness).

The logs have the following format:

[INDEX-REPLAY] [STATS] Node replay index operations stats (total)...

[INDEX-REPLAY] [STATS] Node replay index operations stats (snapshot)...

The snapshot stats cover the last ~5min and the total stats cover the time from the start of a node.

The following are the most important stats:

  • numberOfRemoteOperations - the number of processed index operations from other nodes;

  • numberOfLocalOperations - the number of processed index operations from the current node;

  • timeInMillis - the time to process a batch of index operations; the replay process runs every 5 seconds, so timeInMillis should be much lower than 5 seconds;

  • filterOutAlreadyIndexedBeforeCounter - the number of index replay operations that are candidates for local processing (before filtering out operations already indexed via DBR);

  • filterOutAlreadyIndexedAfterCounter - the number of index replay operations that still need to be processed locally (after the filtering); if filterOutAlreadyIndexedAfterCounter equals 0, DBR effectiveness is 100% (see the sketch after the example log below);

See an example of an index replay log
[INDEX-REPLAY] [STATS] Node replay index operations stats (total): nodeId=node1, 
{
  "numberOfZeroOperations": 815,
  "numberOfRemoteOperations": {
    "count": 828,
    "min": 1,
    "max": 76,
    "sum": 1582,
    "avg": 1,
    "distributionCounter": {
      "10": 823,
      "100": 5,
      "1000": 0,
      "10000": 0
    }
  },
  "numberOfLocalOperations": {
    "count": 655,
    "min": 1,
    "max": 9,
    "sum": 1212,
    "avg": 1,
    "distributionCounter": {
      "10": 655,
      "100": 0,
      "1000": 0,
      "10000": 0
    }
  },
  "timeInMillis": {
    "count": 1145,
    "min": 4,
    "max": 55449,
    "sum": 85451,
    "avg": 74,
    "distributionCounter": {
      "100": 1115,
      "500": 27,
      "1000": 0,
      "5000": 2,
      "10000": 0,
      "30000": 0,
      "60000": 1
    }
  },
  "errors": 0,
  "period": "5.084 min",
  "compactInMillis": {
    "WORKLOG": {
      "count": 1145,
      "min": 0,
      "max": 0,
      "sum": 0,
      "avg": 0,
      "distributionCounter": {
        "0": 1145,
        "1": 0,
        "10": 0
      }
    },
    "ISSUE": {
      "count": 1145,
      "min": 0,
      "max": 13,
      "sum": 15,
      "avg": 0,
      "distributionCounter": {
        "0": 1143,
        "1": 0,
        "10": 1,
        "9223372036854775807": 1
      }
    },
    "COMMENT": {
      "count": 1145,
      "min": 0,
      "max": 0,
      "sum": 0,
      "avg": 0,
      "distributionCounter": {
        "0": 1145,
        "1": 0,
        "10": 0
      }
    }
  },
  "compactBeforeCounter": {
    "WORKLOG": {
      "count": 1145,
      "min": 0,
      "max": 1,
      "sum": 2,
      "avg": 0,
      "distributionCounter": {}
    },
    "ISSUE": {
      "count": 1145,
      "min": 0,
      "max": 70,
      "sum": 2732,
      "avg": 2,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 1145,
      "min": 0,
      "max": 6,
      "sum": 54,
      "avg": 0,
      "distributionCounter": {}
    }
  },
  "compactAfterCounter": {
    "WORKLOG": {
      "count": 1145,
      "min": 0,
      "max": 1,
      "sum": 2,
      "avg": 0,
      "distributionCounter": {}
    },
    "ISSUE": {
      "count": 1145,
      "min": 0,
      "max": 57,
      "sum": 5532,
      "avg": 4,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 1145,
      "min": 0,
      "max": 5,
      "sum": 62,
      "avg": 0,
      "distributionCounter": {}
    }
  },
  "compactVersionedCounter": {
    "WORKLOG": {
      "count": 1145,
      "min": 0,
      "max": 1,
      "sum": 2,
      "avg": 0,
      "distributionCounter": {}
    },
    "ISSUE": {
      "count": 1145,
      "min": 0,
      "max": 143,
      "sum": 6598,
      "avg": 5,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 1145,
      "min": 0,
      "max": 6,
      "sum": 65,
      "avg": 0,
      "distributionCounter": {}
    }
  },
  "compactUnVersionedCounter": {
    "WORKLOG": {
      "count": 1145,
      "min": 0,
      "max": 0,
      "sum": 0,
      "avg": 0,
      "distributionCounter": {}
    },
    "ISSUE": {
      "count": 1145,
      "min": 0,
      "max": 0,
      "sum": 0,
      "avg": 0,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 1145,
      "min": 0,
      "max": 1,
      "sum": 1,
      "avg": 0,
      "distributionCounter": {}
    }
  },
  "filterOutAlreadyIndexedInMillis": {
    "WORKLOG": {
      "count": 2,
      "min": 4,
      "max": 6,
      "sum": 10,
      "avg": 5,
      "distributionCounter": {
        "0": 0,
        "1": 0,
        "10": 2,
        "50": 0,
        "100": 0,
        "500": 0,
        "1000": 0
      }
    },
    "ISSUE": {
      "count": 1144,
      "min": 0,
      "max": 3149,
      "sum": 21845,
      "avg": 19,
      "distributionCounter": {
        "0": 616,
        "1": 73,
        "10": 48,
        "50": 342,
        "100": 38,
        "500": 24,
        "1000": 0,
        "9223372036854775807": 3
      }
    },
    "COMMENT": {
      "count": 44,
      "min": 0,
      "max": 19,
      "sum": 127,
      "avg": 2,
      "distributionCounter": {
        "0": 10,
        "1": 9,
        "10": 23,
        "50": 2,
        "100": 0,
        "500": 0,
        "1000": 0
      }
    }
  },
  "filterOutAlreadyIndexedBeforeCounter": {
    "WORKLOG": {
      "count": 2,
      "min": 1,
      "max": 1,
      "sum": 2,
      "avg": 1,
      "distributionCounter": {}
    },
    "ISSUE": {
      "count": 1144,
      "min": 1,
      "max": 57,
      "sum": 5532,
      "avg": 4,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 44,
      "min": 1,
      "max": 5,
      "sum": 62,
      "avg": 1,
      "distributionCounter": {}
    }
  },
  "filterOutAlreadyIndexedAfterCounter": {
    "WORKLOG": {
      "count": 2,
      "min": 0,
      "max": 0,
      "sum": 0,
      "avg": 0,
      "distributionCounter": {}
    },
    "ISSUE": {
      "count": 1144,
      "min": 0,
      "max": 29,
      "sum": 31,
      "avg": 0,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 44,
      "min": 0,
      "max": 3,
      "sum": 5,
      "avg": 0,
      "distributionCounter": {}
    }
  },
  "updateIndexInMillis": {
    "ISSUE": {
      "count": 3,
      "min": 5,
      "max": 54174,
      "sum": 54195,
      "avg": 18065,
      "distributionCounter": {
        "0": 0,
        "1": 0,
        "10": 1,
        "50": 1,
        "100": 0,
        "500": 0,
        "1000": 0,
        "5000": 0,
        "10000": 0,
        "30000": 0,
        "60000": 1,
        "300000": 0
      }
    },
    "COMMENT": {
      "count": 3,
      "min": 3,
      "max": 113,
      "sum": 137,
      "avg": 45,
      "distributionCounter": {
        "0": 0,
        "1": 0,
        "10": 1,
        "50": 1,
        "100": 0,
        "500": 1,
        "1000": 0,
        "5000": 0,
        "10000": 0,
        "30000": 0,
        "60000": 0,
        "300000": 0
      }
    }
  },
  "updateIndexCounter": {
    "ISSUE": {
      "count": 3,
      "min": 1,
      "max": 29,
      "sum": 31,
      "avg": 10,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 3,
      "min": 1,
      "max": 3,
      "sum": 5,
      "avg": 1,
      "distributionCounter": {}
    }
  },
  "updateIndexBatchCounter": {
    "ISSUE": {
      "count": 3,
      "min": 1,
      "max": 1,
      "sum": 3,
      "avg": 1,
      "distributionCounter": {}
    },
    "COMMENT": {
      "count": 3,
      "min": 1,
      "max": 1,
      "sum": 3,
      "avg": 1,
      "distributionCounter": {}
    }
  }
}
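
Putting the last two counters together gives a rough DBR effectiveness figure. The sketch below sums the per-entity counters from such a payload (saved to a file with the log prefix stripped) and reports the share of operations that did not need local reindexing; with the example above this comes out at roughly 99%.

import java.nio.file.Files;
import java.nio.file.Path;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Estimates DBR effectiveness from an "[INDEX-REPLAY] [STATS]" JSON payload:
// operations the replay process still had to index locally (filterOutAlreadyIndexedAfterCounter)
// versus the candidates it inspected (filterOutAlreadyIndexedBeforeCounter).
public class DbrEffectivenessCheck {
    public static void main(String[] args) throws Exception {
        JsonNode stats = new ObjectMapper().readTree(Files.readString(Path.of(args[0])));

        long before = 0, after = 0;
        for (String entity : new String[] {"ISSUE", "COMMENT", "WORKLOG"}) {
            before += stats.path("filterOutAlreadyIndexedBeforeCounter").path(entity).path("sum").asLong();
            after += stats.path("filterOutAlreadyIndexedAfterCounter").path(entity).path("sum").asLong();
        }
        double effectiveness = before == 0 ? 100.0 : 100.0 * (before - after) / before;

        System.out.printf("replay candidates: %d, still reindexed locally: %d, DBR effectiveness: %.1f%%%n",
                before, after, effectiveness);
    }
}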


