After database migration to Amazon Aurora Postgres, running builds throws internal server errors

Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Problem

After migrating the database to Amazon Aurora PostgreSQL, an internal server error is returned when running a build plan.

The following appears in the atlassian-bamboo.log:

com.atlassian.activeobjects.internal.ActiveObjectsInitException: bundle [com.atlassian.bamboo.plugins.brokenbuildtracker.atlassian-bamboo-plugin-brokenbuildtracker]
	at com.atlassian.activeobjects.osgi.TenantAwareActiveObjects$1$1.call(TenantAwareActiveObjects.java:95)
	at com.atlassian.activeobjects.osgi.TenantAwareActiveObjects$1$1.call(TenantAwareActiveObjects.java:86)
	at com.atlassian.sal.core.executor.ThreadLocalDelegateCallable.call(ThreadLocalDelegateCallable.java:38)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: TABLE: AO_7A45FB_AOTRACKING_ENTRY: ACTIVE - PLAN_ID - TRACKING_ID - can't find type 5 (precision=5) in field ACTIVE
	at net.java.ao.schema.helper.DatabaseMetaDataReaderImpl.getFields(DatabaseMetaDataReaderImpl.java:86)
	at net.java.ao.schema.ddl.SchemaReader.readFields(SchemaReader.java:122)
	at net.java.ao.schema.ddl.SchemaReader.readTable(SchemaReader.java:107)
	at net.java.ao.schema.ddl.SchemaReader.access$000(SchemaReader.java:59)
	at net.java.ao.schema.ddl.SchemaReader$1.apply(SchemaReader.java:96)
	at net.java.ao.schema.ddl.SchemaReader$1.apply(SchemaReader.java:94)
	at com.google.common.collect.Iterators$8.transform(Iterators.java:799)
	at com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
	at com.google.common.collect.Iterators.addAll(Iterators.java:332)
	at com.google.common.collect.Lists.newArrayList(Lists.java:160)
	at com.google.common.collect.Lists.newArrayList(Lists.java:144)
	at net.java.ao.schema.ddl.SchemaReader.readSchema(SchemaReader.java:94)
	at net.java.ao.schema.ddl.SchemaReader.readSchema(SchemaReader.java:85)
	at net.java.ao.schema.ddl.SchemaReader.readSchema(SchemaReader.java:78)
	at net.java.ao.schema.SchemaGenerator.generateImpl(SchemaGenerator.java:107)
	at net.java.ao.schema.SchemaGenerator.migrate(SchemaGenerator.java:84)
	at net.java.ao.EntityManager.migrate(EntityManager.java:128)
	at com.atlassian.activeobjects.internal.EntityManagedActiveObjects.migrate(EntityManagedActiveObjects.java:45)
	at com.atlassian.activeobjects.internal.AbstractActiveObjectsFactory$1.doInTransaction(AbstractActiveObjectsFactory.java:77)
	at com.atlassian.activeobjects.internal.AbstractActiveObjectsFactory$1.doInTransaction(AbstractActiveObjectsFactory.java:72)
	at com.atlassian.sal.core.transaction.HostContextTransactionTemplate$1.doInTransaction(HostContextTransactionTemplate.java:21)
	at com.atlassian.sal.spring.component.SpringHostContextAccessor$1.doInTransaction(SpringHostContextAccessor.java:71)
	at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:133)
	at com.atlassian.sal.spring.component.SpringHostContextAccessor.doInTransaction(SpringHostContextAccessor.java:68)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.atlassian.plugin.util.ContextClassLoaderSettingInvocationHandler.invoke(ContextClassLoaderSettingInvocationHandler.java:26)

Cause

After the migration to Amazon Aurora PostgreSQL, the database contains two schemas: the default public schema, which holds copies of Bamboo's tables with uppercase names, and Bamboo's own schema, which holds the tables with lowercase names. When Active Objects reads the table metadata it picks up the duplicated uppercase tables in the public schema and cannot resolve their column types, which produces the error above.
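To confirm this, you can check which schemas contain a copy of the Active Objects table named in the error. This is a minimal sketch only; it assumes you are connected to the Bamboo database with psql or a similar client, and it simply reports where the duplicate tables live:

-- Show every schema that holds a copy of the table from the error message
SELECT table_schema, table_name
FROM information_schema.tables
WHERE lower(table_name) = lower('AO_7A45FB_AOTRACKING_ENTRY')
ORDER BY table_schema;

If the query returns a row for the public schema (uppercase table name) as well as for Bamboo's schema (lowercase table name), the database contains the duplicate set of tables described above.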

Resolution

Drop the Amazon Aurora public schema that contains the uppercase tables, then restart Bamboo.
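Take a full database backup before dropping anything, and confirm that nothing other than the duplicated uppercase tables lives in the public schema. As a hedged sketch, assuming you are connected to the Bamboo database as a superuser or the schema owner, and with bamboo standing in for your actual Bamboo database user, the cleanup could look like this:

-- List what the public schema contains before dropping it
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';

-- Drop the public schema and everything in it, then recreate it empty
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;

-- Restore the usual privileges (bamboo is a placeholder role name)
GRANT ALL ON SCHEMA public TO bamboo;
GRANT ALL ON SCHEMA public TO public;

Once the schema has been dropped, restart Bamboo so that Active Objects re-reads the table metadata from Bamboo's own schema.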

 

Last modified on Mar 9, 2017
