Troubleshooting XML backups that fail on restore

Since Confluence 8.3, we have changed the way we do backup and restore. Learn more about these changes in the Confluence 8.3 Release Notes.

As a result of this change, many issues with the old system were resolved, which means most of the recommendations listed below no longer apply to backup and restore.

XML site backups are only necessary for migrating to a new database. Upgrading Confluence, Setting up a test server, or implementing a Production Backup Strategy are better done with an SQL dump.

Seeing an error when creating or importing a site or space backup?

  • Exception while creating a backup: see Troubleshooting failed XML site backups.
  • Exception while importing a backup: see the instructions below.

Common problems

You can't restore a backup into an earlier version of Confluence. 

For example, if your XML backup was generated from Confluence 8.3, you can't import it into Confluence 7.19.


To check whether your backup can be successfully restored:

  • Check which Confluence version you are using in Administration > General Configuration > System Information. The version will be listed next to Confluence Version.
  • Check which Confluence version your XML backup was generated from. See How to Determine XML Backup Confluence Version.

If you are restoring a backup to a later version, it can be restored successfully.

If you are restoring a backup to an earlier version, this is not supported and your import may fail.

Resolve errors when attempting to restore an XML backup

The errors may be caused by a slightly corrupt database. You will need to find the entry that violates the database constraints, correct it in the database the backup was created from, and recreate the XML backup:

  1. On the instance being restored, follow the instructions at Enabling Detailed SQL Logging to disable batched updates (for simpler debugging), log SQL queries, and log SQL queries with parameters.
  2. Once all three changes have been made, restart Confluence.
  3. Attempt another restore.
  4. Once the restore fails, check your application log files and the catalina.<datestamp>.log (in your installation directory) to find out which object could not be imported. 
  5. Scroll to the bottom of the file and identify the last error relating to a violation of the database constraint. For example:

    2006-07-13 09:32:33,372 ERROR [confluence.importexport.impl.ReverseDatabinder] endElement net.sf.hibernate.exception.ConstraintViolationException:
      could not insert: [com.atlassian.confluence.pages.Attachment#38]
    net.sf.hibernate.exception.ConstraintViolationException: could not insert: [com.atlassian.confluence.pages.Attachment#38]
    ...
    Caused by: java.sql.SQLException: ORA-01400: cannot insert NULL into ("CONFUSER"."ATTACHMENTS"."TITLE")
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
    

    This example indicates a row in your attachment table with ID = 38 that has a null title.

  6. Go to the server that the backup was created on. You must have a copy of the database from which the backup was created. If you do not have this, use a DBA tool to restore a manual backup of the database.
  7. Open a DBA tool, connect to the original database instance, and scan the table names in the schema. You will have to modify a row in one of these tables.
  8. To work out which table, open the log files and check the first line of the exception. As a rough guide to what table an object maps to in the database:
    • Pages, blogposts, comments --> CONTENT table.
    • attachments --> ATTACHMENTS table.
  9. To correct the example error, go to the ATTACHMENTS table and find the attachment object with ID 38. It will have a null title. Give it a title, using the other attachments' titles as a guide (a SQL sketch follows this list). If your error is different, modify the database accordingly.
  10. Once the entry has been corrected, create the XML backup again.
  11. Import the backup into the new version.
  12. If the import succeeds, revert the changes made for SQL logging: re-enable batched updates, and turn off log SQL queries and log SQL queries with parameters.
  13. Restart Confluence.
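
For step 9 in the example above, the fix might look like the following SQL. This is a minimal sketch: it assumes the older Confluence schema in which the ATTACHMENTS table has ATTACHMENTID and TITLE columns (consistent with the ORA-01400 error shown earlier), and the replacement title is just a placeholder. Verify the table and column names against your own database, and back it up before changing anything.

-- Confirm the row from the error message really has a null title
SELECT * FROM ATTACHMENTS WHERE ATTACHMENTID = 38;

-- Give it a sensible title; 'recovered-attachment-38' is a placeholder modelled on the other attachments' titles
UPDATE ATTACHMENTS SET TITLE = 'recovered-attachment-38' WHERE ATTACHMENTID = 38;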

Troubleshooting "Duplicate Entry" for key "cp_" or "cps_"

If you are encountering an error message such as:

com.atlassian.confluence.importexport.ImportExportException: Unable to complete import because the data does not match the constraints in the Confluence schema. Cause: MySQLIntegrityConstraintViolationException: Duplicate entry '1475804-Edit' for key 'cps_unique_type'

This indicates that the XML export came from a version of Confluence with a corrupt permissions database, caused by a third-party plugin. The issue was fixed by CONF-22123 in Confluence 3.5.2. The simplest workaround is to upgrade the source instance to 3.5.2 or later and export the space again. If that is not an option, either edit the export manually to remove the duplicate permission entries, or remove the offending entries from the source instance. The following SQL queries can be used to look for such entries:

-- Permission entries that specify neither a user nor a group
SELECT * FROM CONTENT_PERM WHERE USERNAME IS NULL AND GROUPNAME IS NULL;

-- Permission entries that specify both a user and a group
SELECT cp.ID, cp.CP_TYPE, cp.USERNAME, cp.GROUPNAME, cp.CPS_ID, cp.CREATOR,
cp.CREATIONDATE, cp.LASTMODIFIER, cp.LASTMODDATE
FROM CONTENT_PERM cp
WHERE cp.USERNAME IS NOT NULL AND cp.GROUPNAME IS NOT NULL;

-- Duplicate permission sets: different CONTENT_PERM_SET rows for the same content and permission type
SELECT cps1.ID, cps1.CONTENT_ID, cps1.CONT_PERM_TYPE FROM CONTENT_PERM_SET cps1, CONTENT_PERM_SET cps2
WHERE cps1.ID <> cps2.ID AND
cps1.CONTENT_ID = cps2.CONTENT_ID AND
cps1.CONT_PERM_TYPE = cps2.CONT_PERM_TYPE
ORDER BY cps1.CONTENT_ID, cps1.CONT_PERM_TYPE, cps1.CREATIONDATE ASC;

-- Permission entries whose type does not match their permission set's type,
-- with the ID of the set they should belong to
SELECT cp.ID, cp.CP_TYPE, cps.CONTENT_ID,
(SELECT scps.ID FROM CONTENT_PERM_SET scps WHERE scps.CONTENT_ID = cps.CONTENT_ID AND scps.CONT_PERM_TYPE = cp.CP_TYPE) AS suggested_cps_id
FROM CONTENT_PERM cp, CONTENT_PERM_SET cps
WHERE cp.CPS_ID = cps.ID AND
cp.CP_TYPE <> cps.CONT_PERM_TYPE;

-- Duplicate user entries: the same user granted the same permission type on the same content more than once
SELECT DISTINCT cp1.ID, cp1.CP_TYPE, cp1.USERNAME, cp1.GROUPNAME, cp1.CPS_ID,
cp1.CREATOR, cp1.CREATIONDATE, cp1.LASTMODIFIER, cp1.LASTMODDATE
FROM CONTENT_PERM cp1, CONTENT_PERM_SET cps1, CONTENT_PERM cp2, CONTENT_PERM_SET cps2
WHERE
cp1.CPS_ID = cps1.ID AND
cp2.CPS_ID = cps2.ID AND
cp1.ID <> cp2.ID AND
cps1.CONTENT_ID = cps2.CONTENT_ID AND
cp1.CP_TYPE = cp2.CP_TYPE AND
cp1.USERNAME = cp2.USERNAME
ORDER BY cp1.CPS_ID, cp1.CP_TYPE, cp1.USERNAME, cp1.CREATIONDATE;

-- Duplicate group entries: the same group granted the same permission type on the same content more than once
SELECT DISTINCT cp1.ID, cp1.CP_TYPE, cp1.USERNAME, cp1.GROUPNAME, cp1.CPS_ID,
cp1.CREATOR, cp1.CREATIONDATE, cp1.LASTMODIFIER, cp1.LASTMODDATE
FROM CONTENT_PERM cp1, CONTENT_PERM_SET cps1, CONTENT_PERM cp2, CONTENT_PERM_SET cps2
WHERE
cp1.CPS_ID = cps1.ID AND
cp2.CPS_ID = cps2.ID AND
cp1.ID <> cp2.ID AND
cps1.CONTENT_ID = cps2.CONTENT_ID AND
cp1.CP_TYPE = cp2.CP_TYPE AND
cp1.GROUPNAME = cp2.GROUPNAME
ORDER BY cp1.CPS_ID, cp1.CP_TYPE, cp1.GROUPNAME, cp1.CREATIONDATE;

-- Permission sets that contain no permission entries at all
SELECT * FROM CONTENT_PERM_SET
WHERE ID NOT IN (SELECT DISTINCT CPS_ID FROM CONTENT_PERM);

Remove all matching entries and perform the export again.
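
If you have confirmed that the rows returned above are genuinely orphaned, a minimal cleanup sketch is shown below. It only covers the two simplest cases (entries that name neither a user nor a group, and permission sets with no entries); the table names are the standard Confluence ones, but treat this as an illustration, verify it against your schema, and back up the database before running any deletes.

-- Remove permission entries that name neither a user nor a group
DELETE FROM CONTENT_PERM WHERE USERNAME IS NULL AND GROUPNAME IS NULL;

-- Remove permission sets that no longer contain any permission entries
DELETE FROM CONTENT_PERM_SET
WHERE ID NOT IN (SELECT DISTINCT CPS_ID FROM CONTENT_PERM);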

Troubleshooting "Duplicate Key" related problems

If you are encountering an error message such as:

could not insert: [bucket.user.propertyset.BucketPropertySetItem#bucket.user.propertyset.BucketPropertySetItem@a70067d3]; SQL []; Violation of PRIMARY KEY constraint 'PK_OS_PROPERTYENTRY314D4EA8'. Cannot insert duplicate key in object 'OS_PROPERTYENTRY'.; nested exception is java.sql.SQLException: Violation of PRIMARY KEY constraint 'PKOS_PROPERTYENTRY_314D4EA8'. Cannot insert duplicate key in object 'OS_PROPERTYENTRY'.

This indicates that the primary key constraint 'PK_OS_PROPERTYENTRY_314D4EA8' on the 'OS_PROPERTYENTRY' table has duplicate entries.
Locate the duplicate values in the table and remove them so that the primary key remains unique. An example query to list duplicate entries in the 'OS_PROPERTYENTRY' table is:

SELECT ENTITY_NAME,ENTITY_ID,ENTITY_KEY,COUNT(*) FROM OS_PROPERTYENTRY GROUP BY ENTITY_NAME,ENTITY_ID,ENTITY_KEY HAVING COUNT(*)>1
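
Once that query reports a duplicated key, you can inspect the offending rows before deciding which one to remove. The values below are placeholders; substitute the ENTITY_NAME, ENTITY_ID and ENTITY_KEY reported by the query above.

-- Placeholder values: replace with the duplicated key reported by the grouping query
SELECT * FROM OS_PROPERTYENTRY
WHERE ENTITY_NAME = 'someEntityName'
AND ENTITY_ID = 12345
AND ENTITY_KEY = 'someKey';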

Troubleshooting "net.sf.hibernate.PropertyValueException: not-null" related problems

If you're receiving a message like:

ERROR [Importing data task] [confluence.importexport.impl.ReverseDatabinder] endElement net.sf.hibernate.PropertyValueException: not-null property references a null or transient value: com.atlassian.user.impl.hibernate.DefaultHibernateUser.name

This means there is an unexpected null value in a table. In the above example, the error is in the name column of the USERS table. We have also seen this in the ATTACHMENTS table.

Remove the row with the null value, redo the XML export, and reimport.
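
A minimal sketch for locating such rows, assuming the default USERS and ATTACHMENTS table and column names mentioned above (verify against your own schema, and back up the database before changing anything):

-- User rows with a null name (the example error above)
SELECT * FROM USERS WHERE NAME IS NULL;

-- Attachment rows with a null title
SELECT * FROM ATTACHMENTS WHERE TITLE IS NULL;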

To help prevent this issue from recurring

  1. If you are using the embedded database, be aware that it is bundled for evaluation purposes and does not offer full transactional integrity in the event of sudden power loss, which is why an external database is recommended for production use. Migrate to an external database.
  2. If you are running an older version of Confluence, consider upgrading to the latest version at this point.

Note that case sensitivity settings vary between databases and are usually determined by the collation the database uses. Please vote on the existing issue
