Database Corruption - Bamboo Import fails - AUTH_ATTEMPT_INFO
This KB article applies to Bamboo versions up to 5.10 only, due to changes in the backup structure (BAM-15093).
Symptoms
Bamboo fails while restoring from a backup zip file with the following errors:
Import failed. Please contact Atlassian at https://support.atlassian.com/ and attach your export file.
Import has failed. Errors encountered while importing. This Bamboo instance may be corrupt.
org.springframework.dao.DataIntegrityViolationException: Hibernate operation: Could not execute JDBC batch update; SQL []; Violation of UNIQUE KEY constraint 'UQ__AUTH_ATT__E0B75F6F0FB750B3'. Cannot insert duplicate key in object 'dbo.AUTH_ATTEMPT_INFO'.; nested exception is java.sql.BatchUpdateException: Violation of UNIQUE KEY constraint 'UQ__AUTH_ATT__E0B75F6F0FB750B3'. Cannot insert duplicate key in object 'dbo.AUTH_ATTEMPT_INFO'.
Caused by: java.sql.BatchUpdateException: Violation of UNIQUE KEY constraint 'UQ__AUTH_ATT__E0B75F6F0FB750B3'. Cannot insert duplicate key in object 'dbo.AUTH_ATTEMPT_INFO'.
at net.sourceforge.jtds.jdbc.JtdsStatement.executeBatch(JtdsStatement.java:944)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeBatch(NewProxyPreparedStatement.java:1723)
at net.sf.hibernate.impl.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:54)
at net.sf.hibernate.impl.BatcherImpl.executeBatch(BatcherImpl.java:128)
at net.sf.hibernate.impl.BatcherImpl.prepareStatement(BatcherImpl.java:61)
at net.sf.hibernate.impl.BatcherImpl.prepareStatement(BatcherImpl.java:58)
at net.sf.hibernate.impl.BatcherImpl.prepareBatchStatement(BatcherImpl.java:111)
at net.sf.hibernate.persister.EntityPersister.insert(EntityPersister.java:454)
...
Cause
Extract the exported .zip file and check whether the loginfos.xml file in the db-export directory of the exported archive contains the same username with different capitalization (for example, 'username1' vs 'Username1') for the failing user record:
<?xml version='1.1' encoding='UTF-8' standalone='yes'?>
<bamboo>
  <date>Mon Jul 16 10:34:18 EDT 2012</date>
  <version>4.1.2</version>
  <buildDate>21-Jun-2012</buildDate>
  <build>3103</build>
  <serverID>BAXF-BAXF-BAXF-BAXF</serverID>
  <loginInfos>
    <loginInfo>
      <userName>username1</userName>
      <authCount>0</authCount>
      <authTimestamp>1341288000000</authTimestamp>
    </loginInfo>
    <loginInfo>
      <userName>Username1</userName>
      <authCount>3</authCount>
      <authTimestamp>1341288000000</authTimestamp>
    </loginInfo>
    <loginInfo>
      <userName>user2</userName>
      <authCount>0</authCount>
      <authTimestamp>1341892800000</authTimestamp>
    </loginInfo>
  </loginInfos>
</bamboo>
The source database was case-sensitive when the records were inserted, so 'username1' and 'Username1' were stored as separate rows. The import fails because the target database is case-insensitive and treats the two values as duplicates, violating the unique constraint on AUTH_ATTEMPT_INFO.
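If you still have access to the original (case-sensitive) source database, you can also look for the colliding rows there directly. A minimal sketch, assuming the username is stored in a USER_NAME column (check the actual AUTH_ATTEMPT_INFO schema for your Bamboo version):
-- List usernames that differ only by case and would collide on a case-insensitive database
SELECT LOWER(USER_NAME) AS normalized_user_name, COUNT(*) AS colliding_rows
FROM AUTH_ATTEMPT_INFO
GROUP BY LOWER(USER_NAME)
HAVING COUNT(*) > 1;
Any username returned by this query appears in more than one capitalization and will trigger the unique-constraint violation during import.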
Resolution
The AUTH_ATTEMPT_INFO table keeps track of login attempts to protect against brute-force attacks, so its contents can safely be deleted (especially when the content of that table causes problems). There are three options:
- Shut down Bamboo. Make sure that you have a backup of your Bamboo database, then run this query to manually clean the table:
delete from AUTH_ATTEMPT_INFO;
Start the Bamboo instance and get a new export.
- Extract the data from the exported archive. Manually edit the db-export/loginfos.xml file and remove the "duplicate" <loginInfo> section(s):
<?xml version='1.1' encoding='UTF-8' standalone='yes'?>
<bamboo>
  <date>Mon Jul 16 10:34:18 EDT 2012</date>
  <version>4.1.2</version>
  <buildDate>21-Jun-2012</buildDate>
  <build>3103</build>
  <serverID>BAXF-BAXF-BAXF-BAXF</serverID>
  <loginInfos>
    <loginInfo>
      <userName>Username1</userName>
      <authCount>3</authCount>
      <authTimestamp>1341288000000</authTimestamp>
    </loginInfo>
    <loginInfo>
      <userName>user2</userName>
      <authCount>0</authCount>
      <authTimestamp>1341892800000</authTimestamp>
    </loginInfo>
  </loginInfos>
</bamboo>
After saving the changes to db-export/loginfos.xml, re-archive the extracted data. Make sure to archive the three directories (builds, configuration, db-export) that you extracted - DO NOT archive a parent directory that contains those three directories. Import the .zip file again.
- Use a case-sensitive database, as described in our documentation.
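For SQL Server (the database shown in the stack trace above), case sensitivity is controlled by the database collation. A minimal sketch, assuming the Bamboo database is named 'bamboo' and using SQL_Latin1_General_CP1_CS_AS as an example case-sensitive collation (check the Bamboo installation documentation for the exact collation recommended for your version):
-- Check the collation of the existing target database
SELECT DATABASEPROPERTYEX('bamboo', 'Collation') AS current_collation;
-- Create a new database with a case-sensitive (CS) collation before importing
CREATE DATABASE bamboo COLLATE SQL_Latin1_General_CP1_CS_AS;
A collation name containing 'CS' is case-sensitive; one containing 'CI' is case-insensitive and can hit the duplicate-key error described above.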