Getting Duplicate Attachment Errors: "findSingleObject Uh oh - found more than one object when single object requested" in the Logs
Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.
Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Purpose
The purpose of this documentation is to consolidate the following bug reports:
- CONF-7882 - Getting issue details... STATUS
- CONF-18970 - Getting issue details... STATUS
This was a known bug in earlier versions of Confluence (3.0 and below). However, Confluence users still occasionally find duplicated attachments in the database for unknown reasons. This documentation provides a workaround for deleting one of the duplicated attachments from the database.
Problem
The following error appears in atlassian-confluence.log:
2016-02-17 14:32:44,338 ERROR [http-nio-8090-exec-2] [com.atlassian.hibernate.HibernateObjectDao] findSingleObject Uh oh - found more than one object when single object requested: [Attachment: charlie.png v.1 (9995208) charlie, Attachment: charlie.png v.1 (9995209) charlie]
-- referer: http://confluence_base_url:8090/pages/viewpage.action?pageId=1278210 | url: /rest/likes/1.0/content/1278210/likes | userName: charlie
Diagnosis
Find all occurrences of the duplicated attachments:
- Open atlassian-confluence.log, located in <confluence_home_folder>/logs.
- Search for all occurrences of the following stack trace:
findSingleObject Uh oh - found more than one object when single object requested: [Attachment:
- Each matching stack trace includes the duplicated attachment's name, as in the example at the beginning of this documentation.
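The log scan above can be automated. The following is a minimal sketch (not an Atlassian-provided tool) that searches log lines for the findSingleObject error and extracts each duplicated attachment's title, version, and content ID; the helper name and regular expression are this article's own illustration, not part of Confluence.

```python
import re

# Marker text and entry pattern taken from the error line shown above.
ERROR_MARK = "found more than one object when single object requested"
ENTRY = re.compile(r"Attachment: (\S+) v\.(\d+) \((\d+)\)")

def duplicated_attachments(lines):
    """Yield (title, version, content_id) for each attachment entry
    found in every findSingleObject error line."""
    for line in lines:
        if ERROR_MARK in line:
            for title, version, content_id in ENTRY.findall(line):
                yield title, int(version), int(content_id)

# Demo with the sample log line from this article; against a real
# instance you would pass open('atlassian-confluence.log') instead.
sample = (
    "2016-02-17 14:32:44,338 ERROR [http-nio-8090-exec-2] "
    "[com.atlassian.hibernate.HibernateObjectDao] findSingleObject Uh oh - "
    "found more than one object when single object requested: "
    "[Attachment: charlie.png v.1 (9995208) charlie, "
    "Attachment: charlie.png v.1 (9995209) charlie]"
)
print(list(duplicated_attachments([sample])))
# -> [('charlie.png', 1, 9995208), ('charlie.png', 1, 9995209)]
```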
Confirm the existence of duplicated attachments in the database:
From the stack trace obtained, identify and list all the duplicated attachments in the database using the following SQL queries.
For Confluence 5.7 and above, attachments are stored in the CONTENT table:
SELECT * FROM CONTENT WHERE TITLE = '<DUPLICATED ATTACHMENT TITLE>';
If the query above returns too many results, either of the following queries narrows them down to the duplicated rows:
select title, version, pageid, count(pageid)
from CONTENT
where title='<DUPLICATED ATTACHMENT TITLE>'
and contenttype='ATTACHMENT'
group by title, version, pageid having count(pageid) > 1;
SELECT contentid, title, version, pageid, tableWithCount.count
FROM (
    SELECT *, count(*) OVER (PARTITION BY version, pageid) AS count
    FROM CONTENT
) tableWithCount
WHERE title = '<DUPLICATED ATTACHMENT TITLE>'
  AND contenttype = 'ATTACHMENT'
  AND tableWithCount.count > 1
ORDER BY pageid;
For Confluence 5.6 and below, query the ATTACHMENTS table instead:
SELECT * FROM ATTACHMENTS WHERE TITLE = '<DUPLICATED ATTACHMENT TITLE>';
Take note of the attachment names. For example, in the logs above, the attachment in question is 'charlie.png', so replace '<DUPLICATED ATTACHMENT TITLE>' with 'charlie.png'.
The SQL queries above list the details of the duplicated attachments. From there, decide which attachment should be kept and which should be deleted. Take note of the CONTENTID (Confluence 5.7 and above) or the ATTACHMENTID (5.6 and below) of the attachment to be deleted.
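To illustrate how the duplicate-detection query behaves, here is a self-contained sketch against a simplified, mock CONTENT table in SQLite. The column names mirror those used in the queries above; the schema and data are this article's own illustration, not Confluence's real DDL.

```python
import sqlite3

# Mock CONTENT table with one duplicated attachment pair and one unique row.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE CONTENT (
    contentid INTEGER PRIMARY KEY,
    title TEXT, version INTEGER, pageid INTEGER, contenttype TEXT)""")
conn.executemany(
    "INSERT INTO CONTENT VALUES (?, ?, ?, ?, ?)",
    [(9995208, 'charlie.png', 1, 1278210, 'ATTACHMENT'),  # duplicate pair
     (9995209, 'charlie.png', 1, 1278210, 'ATTACHMENT'),
     (9995210, 'other.png',   1, 1278210, 'ATTACHMENT')])  # unique row

# Same GROUP BY ... HAVING query as in the Diagnosis step: only
# title/version/pageid combinations that appear more than once survive.
rows = conn.execute("""
    SELECT title, version, pageid, count(pageid)
    FROM CONTENT
    WHERE title = 'charlie.png' AND contenttype = 'ATTACHMENT'
    GROUP BY title, version, pageid HAVING count(pageid) > 1""").fetchall()
print(rows)
# -> [('charlie.png', 1, 1278210, 2)]
```

The unique row is filtered out by the HAVING clause, so only the genuinely duplicated combination is reported.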
Workaround
The log messages are harmless and the attachments will work as expected. However, if you need to eliminate these messages, the workaround is to delete one of the duplicated attachments from the database using the following DELETE statements:
For Confluence 5.7 and above:
DELETE FROM CONTENT WHERE CONTENTID = <CONTENT_ID>;
If the statement above fails with a foreign key constraint error, delete the referencing rows first:
DELETE FROM IMAGEDETAILS WHERE ATTACHMENTID = <CONTENT_ID>;
DELETE FROM CONTENTPROPERTIES WHERE CONTENTID = <CONTENT_ID>;
DELETE FROM CONTENT WHERE CONTENTID = <CONTENT_ID>;
For Confluence 5.6 and below:
DELETE FROM ATTACHMENTS WHERE ATTACHMENTID = <ATTACHMENT_ID>;
Replace <CONTENT_ID> and <ATTACHMENT_ID> with the CONTENTID or ATTACHMENTID obtained in the Diagnosis step.
Always back up your data before performing any modifications to the database. If possible, test any alter, insert, update, or delete SQL commands on a staging server first.
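The foreign-key-safe delete order above can be sketched end to end. The following uses SQLite in memory with a simplified mock schema (table and column names follow this article; the DDL is illustrative, not Confluence's real schema) and wraps all three deletes in one transaction so they either all commit or all roll back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce FK constraints in SQLite
conn.executescript("""
    CREATE TABLE CONTENT (contentid INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE IMAGEDETAILS (
        attachmentid INTEGER REFERENCES CONTENT(contentid));
    CREATE TABLE CONTENTPROPERTIES (
        contentid INTEGER REFERENCES CONTENT(contentid));
    INSERT INTO CONTENT VALUES (9995209, 'charlie.png');
    INSERT INTO IMAGEDETAILS VALUES (9995209);
    INSERT INTO CONTENTPROPERTIES VALUES (9995209);
""")

content_id = 9995209  # the duplicate chosen for removal during Diagnosis
with conn:  # one transaction: child rows first, then the parent CONTENT row
    conn.execute("DELETE FROM IMAGEDETAILS WHERE attachmentid = ?",
                 (content_id,))
    conn.execute("DELETE FROM CONTENTPROPERTIES WHERE contentid = ?",
                 (content_id,))
    conn.execute("DELETE FROM CONTENT WHERE contentid = ?", (content_id,))

print(conn.execute("SELECT count(*) FROM CONTENT").fetchone()[0])
# -> 0
```

Deleting the CONTENT row first would raise a foreign key error here, which is exactly why the workaround removes IMAGEDETAILS and CONTENTPROPERTIES rows before the parent row.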