Comparing sprint template data in Atlassian Analytics and Jira
If you compare an Atlassian Analytics dashboard containing sprint data to a Jira JQL filter or sprint report and notice the data doesn't match, this article can help you determine the cause. The Single sprint overview and Multiple sprints overview dashboard templates are two pre-built ways to view your sprint data in Atlassian Analytics. This article covers common reasons the data in either sprint template can differ from the data in Jira, along with items you can investigate to determine the cause of the differences.
Start your investigation here
These are some common reasons that your data on a sprint dashboard in Atlassian Analytics might differ from a Jira report:
| Atlassian Analytics sprint dashboards | Jira reports |
| --- | --- |
| Five tables within the Atlassian Data Lake experience a 3-6 hour delay between when data is first created or updated in Jira and when it is available in Atlassian Analytics. | Jira reports show data in near real time. |
| The sprint templates in Atlassian Analytics always use the latest story point value and do not take into account whether the story point value has changed over time. This may cause the Atlassian Analytics chart to differ from any estimation changes that appear in the Scope changes log of the sprint burndown report. | The Scope changes log of the sprint burndown report shows issues that were added to the sprint, removed from the sprint, or had estimation changes while the sprint was in progress. |
| Issues completed before the start of the sprint are always included in the sprint templates. | The burnup report in team-managed projects includes issues completed outside the sprint in the “Work Scope” but not in the “Completed work”. The burndown and burnup charts for company-managed projects do not include issues completed outside the sprint in their calculations. |
| The sprint templates treat any status in the Done status category as marking the time the issue was completed. | Jira reports treat the issue status in the rightmost column of the sprint board as the “Done” status. |
| Sub-tasks and cloned issues are not displayed in the sprint templates. Our team is actively working on surfacing inherited sprint values in the Atlassian Data Lake: ANALYTICS-99, ANALYTICS-177. | Issues that inherit sprint info (like cloned issues) show up in Jira reports. The sprint burndown chart does not include sub-task issues or Story Points data for sub-tasks. |
| In the sprint templates, the commitment is defined as the workload in the sprint. This workload includes work added at any time before the end of a sprint, including before the sprint starts. The same is true for Completed work. Anything after the sprint end is not counted. | In Jira, commitment refers to the amount of work that was in the sprint at the start of the sprint. |
If the differences in the table above don’t explain why your sprint template charts in Atlassian Analytics differ from your Jira report, work through these questions:
Is the Jira project that the sprint is created in included in the Data Lake connection?
If the Jira project is not included in the Data Lake connection, ask an organization admin to edit the connection to include the desired project(s).
Was the issue created less than 3-6 hours ago?
If the issue was created recently, then the table materialization delay might be affecting the data. Check the dashboard again after 6 hours to see if the issues appear then.
Was the issue actually added to the sprint?
By checking the History tab of the issue in Jira, you can see if the issue was added to the sprint or if it inherited the sprint value. If the issue has inherited the sprint value then you may encounter issues with the data in the Issue sprint history table (see the two questions immediately below this one).
Was the issue cloned?
If the issue was cloned from another issue, and the user chooses to Clone sprint value during the cloning, then the sprint value isn’t recorded properly for the cloned issue in the Atlassian Data Lake due to ANALYTICS-177. This could impact your data in either sprint template.
A workaround to avoid the sprint value being improperly recorded in the sprint tables is to first clone the issue without cloning the sprint value. After the cloned issue is created, add it to your sprint or sprints of choice so the sprint value appears in the Atlassian Data Lake properly.
What is the issue type of the issue?
Sub-task issues can also experience inconsistent data in the sprint templates due to ANALYTICS-99.
Is your Jira report filtering by a specific board?
If the Jira report you are comparing your Atlassian Analytics sprint template against filters by board, then the Analytics dashboard will show higher counts until ANALYTICS-2 is implemented and board data is available in the Atlassian Data Lake.
What is the story points value in Jira and Atlassian Analytics?
If you are using a Jira report specific to your sprint, look at the initial story point value assigned to an issue in Jira. The Atlassian Analytics charts always show the current story points value, not the initial value set when the sprint started, which can contribute to a discrepancy between the two charts.
Are the charts using the same filter values?
Where applicable, make sure that any dashboard control values in Atlassian Analytics or any individual chart filters match any filter values in the JQL search or Jira report. Double-check that the date ranges used by the two charts are the same, that they are looking at the same project, the same sprint name, etc.
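It can help to pin down the exact scope in JQL before comparing. As an illustrative sketch (the project key and sprint name below are placeholders, not values from this article), a filter scoped to a single project and sprint might look like:

```
project = "PROJ" AND sprint = "PROJ Sprint 12" ORDER BY created ASC
```

Any dashboard controls or individual chart filters in Atlassian Analytics should then use the same project key and sprint name; if the JQL also restricts by status or date, mirror those conditions in the chart filters as well.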
Have you compared your chart data?
Open your chart in Atlassian Analytics and the Jira report you are comparing this chart to. After confirming the two are filtering by the same values, you can compare the two charts side by side to see if a specific issue has incorrect data, if an issue has an incorrect story point value, etc. The Sprint actions and issue status changes table on the “Single sprint overview” template can also be used for comparison. This table will track the entire status and sprint history for each issue that is added to the sprint and can make debugging at the issue level easier.
You can also query the “Issue” table under the Jira family of products section of tables and filter by your specific issue key to confirm that this issue has data in the Atlassian Data Lake. You can repeat this process with the “Issue sprint history” table under the Jira section of tables to confirm that the issue has data in this table. If your Atlassian Analytics instance does not have data in the “Issue” or “Issue sprint history” tables, please contact our support team for further assistance.
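As a sketch of that check, assuming your workspace exposes these tables in SQL mode under names like `jira_issue` and `jira_issue_sprint_history` (the exact table and column names may differ; confirm them in the Atlassian Analytics schema browser), a custom SQL chart could confirm the issue exists in both tables:

```sql
-- Confirm the issue exists in the "Issue" table
-- ('PROJ-123' is a placeholder issue key).
SELECT issue_key, summary, status
FROM jira_issue
WHERE issue_key = 'PROJ-123';

-- Confirm the same issue has rows in the "Issue sprint history" table.
SELECT h.*
FROM jira_issue_sprint_history AS h
JOIN jira_issue AS i
  ON i.issue_id = h.issue_id
WHERE i.issue_key = 'PROJ-123';
```

If either query returns no rows for an issue that exists in Jira, that points back to one of the gaps described above: the project is outside the connection, the materialization delay hasn’t elapsed, or the issue inherited its sprint value.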
The History tab of the issue in Jira may also help you determine where the discrepancy comes from. Take note of any of the following that might be relevant to your issue:
The date the issue was created
The date the issue was added to the sprint
The date the issue was removed from the sprint (if applicable)
The initial story points value when the issue was added to the sprint
If the story points have changed, make note of the dates of the change and what the value changed to
If the issue is cloned, a sub-task, or otherwise inherits its sprint value
If you’re still unable to find the cause of the discrepancy, please open a support ticket with the Atlassian Analytics support team. Please include a link to your sprint dashboard in Atlassian Analytics, a link to the Jira report that you are comparing, and any relevant information about what specific data is inaccurate.