Crowd locks down during concurrent directory sync

Symptoms

The Crowd UI becomes unresponsive, and the Tomcat log shows a massive number of concurrent directory synchronization operations running at once, such as:

2011-03-04 13:20:50,037 scheduler_Worker-6 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 6 ] of [ 7 ] [ 85.7% ]
2011-03-04 13:20:50,044 scheduler_Worker-2 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 2 ] of [ 7 ] [ 28.6% ]
2011-03-04 13:20:50,050 scheduler_Worker-6 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 7 ] of [ 7 ] [ 100.0 % ]
2011-03-04 13:20:50,050 scheduler_Worker-9 INFO [atlassian.crowd.directory.DbCachingRemoteDirectoryCache] added [ 133 ] user members to [ C-group-3k-50 ] in [ 103ms ]
2011-03-04 13:20:50,051 scheduler_Worker-5 INFO [atlassian.crowd.directory.DbCachingRemoteDirectoryCache] scanned and compared [ 0 ] group members from [ C-group-3k-50 ] in [ 1ms ]
2011-03-04 13:20:50,051 scheduler_Worker-6 INFO [directory.ldap.cache.UsnChangedCacheRefresher] found [ 124 ] remote user-group memberships in [ 0ms ]
2011-03-04 13:20:50,051 scheduler_Worker-6 INFO [directory.ldap.cache.UsnChangedCacheRefresher] found [ 0 ] remote group-group memberships in [ 0ms ]
2011-03-04 13:20:50,053 scheduler_Worker-3 INFO [atlassian.crowd.directory.DbCachingRemoteDirectoryCache] scanned and compared [ 124 ] user members from [ C-group-3k-51 ] in [ 1ms ]
2011-03-04 13:20:50,056 scheduler_Worker-2 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 3 ] of [ 7 ] [ 42.9% ]
2011-03-04 13:20:50,068 scheduler_Worker-2 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 4 ] of [ 7 ] [ 57.1% ]
2011-03-04 13:20:50,074 scheduler_Worker-9 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 1 ] of [ 7 ] [ 14.3% ]
2011-03-04 13:20:50,080 scheduler_Worker-5 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 5 ] of [ 7 ] [ 71.4% ]
2011-03-04 13:20:50,107 scheduler_Worker-2 INFO [directory.ldap.cache.UsnChangedCacheRefresher] found [ 110 ] remote user-group memberships in [ 1ms ]
2011-03-04 13:20:50,107 scheduler_Worker-5 INFO [directory.ldap.cache.UsnChangedCacheRefresher] found [ 0 ] remote group-group memberships in [ 0ms ]
2011-03-04 13:20:50,108 scheduler_Worker-2 INFO [atlassian.crowd.directory.DbCachingRemoteDirectoryCache] scanned and compared [ 110 ] user members from [ C-group-3k-52 ] in [ 1ms ]
2011-03-04 13:20:50,111 scheduler_Worker-7 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 4 ] of [ 7 ] [ 57.1% ]
2011-03-04 13:20:50,123 scheduler_Worker-6 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 5 ] of [ 7 ] [ 71.4% ]
2011-03-04 13:20:50,130 scheduler_Worker-4 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 1 ] of [ 6 ] [ 16.7% ]
2011-03-04 13:20:50,136 scheduler_Worker-6 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 6 ] of [ 7 ] [ 85.7% ]
2011-03-04 13:20:50,142 scheduler_Worker-1 INFO [persistence.hibernate.batch.AbstractBatchProcessor] processed batch [ 2 ] of [ 6 ] [ 33.3% ]

Cause

Concurrent directory synchronization executions take up all of the connections available in Crowd's database connection pool, leaving none for other operations such as serving the UI.

Workaround

  1. Increase the database connection pool size by editing crowd.cfg.xml, which should be located at <CROWD_HOME_DIRECTORY>/crowd.cfg.xml:

    <property name="hibernate.c3p0.max_size">50</property>
    

    The value should be as high as possible while still remaining under the maximum number of connections your database server allows. With the default configuration of PostgreSQL 8+ and MySQL 5+, 30 connections should be sufficient. (A fuller configuration sketch follows these steps.)

  2. Restart Crowd so the new pool size takes effect.
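
For reference, here is a minimal sketch of how the pool setting sits inside crowd.cfg.xml. The surrounding structure and property values (the JDBC URL in particular) are illustrative placeholders based on the standard Hibernate c3p0 property names, not values taken from any specific installation:

    <application-configuration>
      <properties>
        <!-- Illustrative placeholder; your JDBC URL will differ -->
        <property name="hibernate.connection.url">jdbc:postgresql://localhost:5432/crowd</property>
        <!-- Minimum number of pooled connections kept open when idle -->
        <property name="hibernate.c3p0.min_size">0</property>
        <!-- Maximum pool size: keep this below the database server's
             own connection limit (e.g. max_connections in PostgreSQL) -->
        <property name="hibernate.c3p0.max_size">50</property>
      </properties>
    </application-configuration>

If other applications share the same database server, remember that their connection pools count against the server's connection limit as well.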