How to find the Issues with most data in Jira Data Center
Platform Notice: Data Center Only - This article only applies to Atlassian products on the Data Center platform.
Note that this KB was created for the Data Center version of the product. Data Center KBs for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Summary
Issues with a lot of data can slow down page load times and REST API response times whenever that data is fetched. They are also known as "expensive issues" or "heavy issues".
Data sets that are known to be a common cause of this slowness include, but are not limited to: comments, change history (changelog) records, worklog entries, Issue links, and large amounts of text in fields such as the Description.
Third-party apps may also provide custom fields that are dynamically calculated or formatted, which can also impact response times. This article doesn't address third-party-provided fields or features.
While these issues alone may only impact the performance of the requests that handle them, enough concurrent requests can load too much data into memory and potentially cause an outage (long Full GC cycles or even an OutOfMemoryError).
Environment
All versions of Jira Data Center and Server.
This includes both Jira Software and Jira Service Management.
Solution
The queries shared below use an arbitrary data count threshold of 100 and fetch the first 100 rows. Both values can be adjusted as needed for your troubleshooting.
You can also uncomment either of the commented-out WHERE clauses to target a particular Issue key or a set of Issue IDs; an adjusted example is shown at the end of the first section below.
Once identified, you can assess whether these Issues are degrading the instance's performance and proceed to Archive issues. If archiving is not an option, evaluate whether a very restrictive Security Level can be created and applied to those Issues to prevent general audience access.
Issues with most comments
Postgres and MySQL syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Comments"
FROM jiraaction a
JOIN jiraissue i ON i.id = a.issueid
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10001, 10002, 10003) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
LIMIT 100;
Oracle syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Comments"
FROM jiraaction a
JOIN jiraissue i ON i.id = a.issueid
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10001, 10002, 10003) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
FETCH FIRST 100 ROWS ONLY;
MSSQL syntax
SELECT TOP 100 i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Comments"
FROM jiraaction a
JOIN jiraissue i ON i.id = a.issueid
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10001, 10002, 10003) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC;
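For example, here's the Postgres/MySQL comments query adjusted as described above: the single-Issue filter is uncommented (PROJ-123 is a placeholder key, replace it with your own) and the threshold is lowered to 50.
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Comments"
FROM jiraaction a
JOIN jiraissue i ON i.id = a.issueid
JOIN project p ON p.id = i.project
WHERE p.pkey = 'PROJ' AND i.issuenum = 123 -- placeholder for Issue PROJ-123
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 50 -- lowered threshold
ORDER BY 3 DESC
LIMIT 100;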
Issues with most change history records (changelog)
Postgres and MySQL syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Change records"
FROM changeitem ci
JOIN changegroup cg ON cg.id = ci.groupid
JOIN jiraissue i ON cg.issueid = i.id
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10001, 10002, 10003) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
LIMIT 100;
Oracle syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Change records"
FROM changeitem ci
JOIN changegroup cg ON cg.id = ci.groupid
JOIN jiraissue i ON cg.issueid = i.id
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10001, 10002, 10003) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
FETCH FIRST 100 ROWS ONLY;
MSSQL syntax
SELECT TOP 100 i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Change records"
FROM changeitem ci
JOIN changegroup cg ON cg.id = ci.groupid
JOIN jiraissue i ON cg.issueid = i.id
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10001, 10002, 10003) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC;
Issues with most worklog entries
Postgres and MySQL syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Worklog entries"
FROM worklog w
JOIN jiraissue i ON i.id = w.issueid
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10012, 12010, 12344) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
LIMIT 100;
Oracle syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Worklog entries"
FROM worklog w
JOIN jiraissue i ON i.id = w.issueid
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10012, 12010, 12344) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
FETCH FIRST 100 ROWS ONLY;
MSSQL syntax
SELECT TOP 100 i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Worklog entries"
FROM worklog w
JOIN jiraissue i ON i.id = w.issueid
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10012, 12010, 12344) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC;
Issues with most Issue links
Postgres and MySQL syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Issue links"
FROM issuelink il
JOIN jiraissue i ON (il.source = i.id OR il.destination = i.id)
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10012, 12010, 12344) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
LIMIT 100;
Oracle syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Issue links"
FROM issuelink il
JOIN jiraissue i ON (il.source = i.id OR il.destination = i.id)
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10012, 12010, 12344) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC
FETCH FIRST 100 ROWS ONLY;
MSSQL syntax
SELECT i.id AS "Issue Id", concat(p.pkey, concat('-', i.issuenum)) AS "Issue key", count(i.id) AS "Issue links"
FROM issuelink il
JOIN jiraissue i ON (il.source = i.id OR il.destination = i.id)
JOIN project p ON p.id = i.project
-- WHERE p.pkey = 'JIRA' AND i.issuenum = 12345 -- For Issue JIRA-12345
-- WHERE i.id in (10012, 12010, 12344) -- for Issues Ids
GROUP BY i.id, p.pkey, i.issuenum
HAVING count(i.id) >= 100
ORDER BY 3 DESC;
Issues with too much data in text fields
Issues with a lot of text in fields like the Description or Comments are known to impact performance.
The character limit for such fields is defined through the Advanced Setting jira.text.field.character.limit in Jira's Admin UI. On Jira 8 it defaults to 32767, but admins can change it at any time.
Top Description lengths (Postgres, MySQL, and Oracle syntax)
select
i.id as "Issue Id",
concat(p.pkey, concat('-', i.issuenum)) as "Issue Key",
length(i.description) as "Description length"
from
jiraissue i
join
project p on p.id = i.project
where
length(i.description) >= 10000
order by
"Description length" desc
;
Top Comment lengths (Postgres, MySQL, and Oracle syntax)
select
i.id as "Issue Id",
concat(p.pkey, concat('-', i.issuenum)) as "Issue Key",
length(a.actionbody) as "Comment length"
from
jiraaction a
left join
jiraissue i on i.id = a.issueid
left join
project p on p.id = i.project
where
a.actiontype = 'comment'
and length(a.actionbody) >= 10000
order by
"Comment length" desc
;
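If you also need to pinpoint the individual comments rather than just the Issues, a small variation of the query above (a sketch relying on the standard jiraaction.id and jiraaction.author columns) also returns each comment's ID and author:
select
    i.id as "Issue Id",
    concat(p.pkey, concat('-', i.issuenum)) as "Issue Key",
    a.id as "Comment Id",
    a.author as "Comment author",
    length(a.actionbody) as "Comment length"
from
    jiraaction a
left join
    jiraissue i on i.id = a.issueid
left join
    project p on p.id = i.project
where
    a.actiontype = 'comment'
    and length(a.actionbody) >= 10000
order by
    "Comment length" desc
;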
MSSQL syntax
select
i.id as "Issue Id",
concat(p.pkey, concat('-', i.issuenum)) as "Issue Key",
DATALENGTH(i.description) as "Description length"
from
jiraissue i
join
project p on p.id = i.project
where
DATALENGTH(i.description) >= 10000
order by
"Description length" desc;
select
i.id as "Issue Id",
concat(p.pkey, concat('-', i.issuenum)) as "Issue Key",
datalength(a.actionbody) as "Comment length"
from
jiraaction a
left join
jiraissue i on i.id = a.issueid
left join
project p on p.id = i.project
where
a.actiontype = 'comment'
and datalength(a.actionbody) >= 10000
order by
"Comment length" desc;
Issues with fields too big to be indexed
Jira limits fields to 32766 characters in the Lucene index (JSWSERVER-20133 - Fix the Lucene immense field indexing failure). If a field exceeds this limit, it won't be indexed; instead, every time the Issue is fetched or searched through the browser or the REST API, Jira loads the data from the database.
With enough fields in this condition and enough concurrent search or fetch/browse requests, Jira may quickly run into an OutOfMemoryError.
The query below uses length(), which applies to Postgres, MySQL, and Oracle; on MSSQL, replace it with DATALENGTH() as in the previous section.
select
i.id as "Issue Id",
concat(p.pkey, concat('-', i.issuenum)) as "Issue Key",
cf.id as "Custom field Id",
cf.cfname as "Custom field Name",
cf.customfieldtypekey as "Custom field type",
cfv.id as "CF value id",
cfv.updated "CF value updated",
length(cfv.textvalue) as "CF textvalue length",
cfv.valuetype as "CF value type"
from customfieldvalue cfv
join customfield cf on cf.id = cfv.customfield
join jiraissue i on i.id = cfv.issue
join project p on p.id = i.project
where length(cfv.textvalue) >= 32766
order by 8 desc, 4 asc, 2 asc;
These Issues should be archived as soon as possible, at least until admins can revisit the usage of such large fields. If these fields are the cause, restricting their context to only the Projects where they are actually needed may also work around the memory shortages.
If these fields are provided by 3rd party apps, they may not follow Jira's jira.text.field.character.limit restriction.
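To identify which fields (and which apps) are producing these oversized values, a grouped variant of the query above can help. This is a sketch assuming the same customfieldvalue and customfield schema used earlier:
select
    cf.id as "Custom field Id",
    cf.cfname as "Custom field Name",
    cf.customfieldtypekey as "Custom field type",
    count(*) as "Oversized values",
    max(length(cfv.textvalue)) as "Largest value length"
from customfieldvalue cfv
join customfield cf on cf.id = cfv.customfield
where length(cfv.textvalue) >= 32766
group by cf.id, cf.cfname, cf.customfieldtypekey
order by 4 desc, 2 asc;
The "Custom field type" value starts with the providing plugin's key, which helps tell built-in fields (com.atlassian.jira.plugin.system.customfieldtypes) apart from third-party ones.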