Confluence Fails with ORA-01555 Error
Platform Notice: Data Center - This article applies to Atlassian products on the Data Center platform.
Note that this knowledge base article was created for the Data Center version of the product. Data Center knowledge base articles for non-Data Center-specific features may also work for Server versions of the product, however they have not been tested. Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Symptoms
The following appears in the atlassian-confluence.log:
ORA-01555: snapshot too old: rollback segment number with name "" too small
You may also see ORA-22924 appear as part of the error message:
ORA-01555: snapshot too old: rollback segment number with name "" too small
ORA-22924: snapshot too old
Cause
The root cause is unknown at this time. This may be caused by:
- An unclean shutdown or other problems in the Oracle Database.
- Certain third-party plugins that attempt to update values in a table while spaces containing large or long pages are being viewed or edited.
Resolution
Always back up your data before performing any modifications to the database. If possible, test any alter, insert, update, or delete SQL commands on a staging server first.
First, we must track down where the problem is localized. Exporting the data shows which table (or tables) fail:
- Shut down Confluence, and perform a full database backup
- Using the expdp command, perform a full export of the Confluence database schema. An example command could be:
expdp confuser/confpass DATA_PUMP_DIR=oracle_dir DUMPFILE=filename
Note that the Confluence user must have full access to this directory, and that the directory is an Oracle Directory Object rather than a file system directory.
- The export process should produce an error for one or more tables, similar to the one present in the Confluence logs.
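If a suitable Data Pump directory object does not already exist, one can be created and granted to the Confluence database user first. This is an illustrative sketch only: the directory name oracle_dir matches the example command above, but the file system path and the confuser account are placeholders that must be adapted to your environment.

```sql
-- Run as a DBA user (e.g. SYSTEM). The path is an example only and
-- must already exist on the database server's file system.
CREATE OR REPLACE DIRECTORY oracle_dir AS '/u01/app/oracle/dpdump';

-- Allow the Confluence schema owner (confuser in this example) to use it.
GRANT READ, WRITE ON DIRECTORY oracle_dir TO confuser;
```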
We'll use the BANDANA table as an example for this resolution. Your affected table(s) may vary.
- Create a new version of the affected table that has the same columns, data types, constraints, and indexes. You can retrieve the original table's DDL with something like this:
select dbms_metadata.get_ddl( 'TABLE', 'BANDANA', 'confuser' ) from dual
Here, confuser refers to the schema that the table is located in. Usually that will be the same as the username Confluence uses to access the database, although it may need to be adjusted to suit your environment.
- Use the CREATE TABLE statement returned to create a new table, such as BANDANA_NEW.
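For illustration, the DDL returned by dbms_metadata.get_ddl is edited to rename the table, along with any named constraints and indexes (which must be unique schema-wide), before it is run. The column list below is a simplified sketch, not the real BANDANA definition; always work from the DDL your own database returns.

```sql
-- Simplified example only: the actual statement returned by get_ddl
-- will include your real column list, storage clauses, and indexes.
-- Note the table name AND the constraint name are both renamed.
CREATE TABLE BANDANA_NEW (
  BANDANAID    NUMBER NOT NULL,
  BANDANAKEY   VARCHAR2(100),
  BANDANAVALUE CLOB,
  CONSTRAINT PK_BANDANA_NEW PRIMARY KEY (BANDANAID)
);
```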
Now that we have a duplicate of the affected table, we'll insert rows into that table until we find the corrupt rows. Execute the following SQL:
SELECT CONCAT('INSERT INTO BANDANA_NEW (SELECT * FROM BANDANA WHERE BANDANAID=', CONCAT(BANDANAID, ');')) FROM BANDANA;
This produces a list of insert statements from the original table. Execute all of the statements. One or more of those statements should fail, at which point you can identify the failing row and delete it from the original table. Assuming that the row with the ID 12345 fails to insert into the duplicate table, you would use the following query:
DELETE FROM BANDANA WHERE BANDANAID = 12345;
Repeat this process for all rows that failed to insert.
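As an alternative to generating and executing one INSERT per row by hand, the same row-by-row copy can be sketched as a PL/SQL block that reports failing IDs automatically. This is an illustrative sketch, not an Atlassian-supplied script; it assumes the BANDANA and BANDANA_NEW tables from the example above and should be run in SQL*Plus or a similar client.

```sql
-- Copy rows one at a time; rows that raise ORA-01555/ORA-22924 are
-- skipped and their IDs printed so they can be deleted afterwards.
SET SERVEROUTPUT ON
BEGIN
  FOR r IN (SELECT BANDANAID FROM BANDANA) LOOP
    BEGIN
      INSERT INTO BANDANA_NEW
        SELECT * FROM BANDANA WHERE BANDANAID = r.BANDANAID;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Failed row: ' || r.BANDANAID || ' (' || SQLERRM || ')');
    END;
  END LOOP;
  COMMIT;
END;
/
```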
Cleaning Up
Delete the duplicate tables you created. In this example:
DROP TABLE BANDANA_NEW;
Then, start Confluence again.