Crowd Data Center displays duplicate nodes in Clustering information.
Platform Notice: Data Center - This article applies to Atlassian products on the Data Center platform.
Note that this knowledge base article was created for the Data Center version of the product. Data Center knowledge base articles for non-Data-Center-specific features may also work for Server versions of the product; however, they have not been tested. Support for Server* products ended on February 15, 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Summary
A single-node Crowd Data Center instance shows duplicate nodes in the Clustering information.
Environment
Crowd 4.1.0
Diagnosis
The following entries are found in the Crowd startup log:
2020-08-14 13:10:41,750 localhost-startStop-1 INFO [ContainerBase.[Catalina].[localhost].[/crowd03]] Initializing Spring root WebApplicationContext
2020-08-14 13:11:11,107 localhost-startStop-1 INFO [ContainerBase.[Catalina].[localhost].[/crowd]] Initializing Spring root WebApplicationContext
Two different context paths appear in the node's startup log:
- /crowd03
- /crowd
Cause
The /crowd context path present in the server.xml file has been modified to a custom path:
<Context path="/crowd03" docBase="../../crowd-webapp">
<Manager pathname=""/>
</Context>
If the /crowd context path is altered in or added to server.xml while the default context descriptor is still present, Tomcat loads two different context roots, which essentially starts two instances of Crowd.
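For reference, the default /crowd context comes from the context descriptor at <Crowd-Install>/apache-tomcat/conf/Catalina/localhost/crowd.xml. A minimal sketch of what this file typically contains is shown below; the exact contents may vary between Crowd versions. Note that for descriptors in conf/Catalina/localhost, Tomcat derives the context path from the file name (crowd.xml maps to /crowd), so this file deploys the web application under /crowd regardless of any path attribute.

<!-- Illustrative content of conf/Catalina/localhost/crowd.xml; exact contents may differ per Crowd version -->
<!-- The context path is taken from the file name (crowd.xml -> /crowd); any path attribute here is ignored -->
<Context docBase="../../crowd-webapp">
    <Manager pathname=""/>
</Context>

With both this descriptor and the custom <Context path="/crowd03" ...> entry in server.xml in place, Tomcat deploys crowd-webapp twice, once under each path.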
Solution
Remove the default context path set by Crowd in the <Crowd-Install>/apache-tomcat/conf/Catalina/localhost/crowd.xml file.
Example steps
1. First, take a backup of the crowd.xml file in <Crowd-Install>/apache-tomcat/conf/Catalina/localhost to another directory.
2. From <Crowd-Install>/apache-tomcat/conf/Catalina/localhost, remove the crowd.xml file to prevent Tomcat from loading the /crowd context.
3. Restart Crowd.
If you have more than one Data Center node, please make sure these steps are run on each node in your setup.
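For illustration, once crowd.xml has been removed, the only Crowd context left for Tomcat to deploy on each node is the one defined in server.xml (using the custom /crowd03 path from this example), so a single Crowd instance starts per node:

<!-- Remaining Crowd context in server.xml after the fix (example custom path from this article) -->
<Context path="/crowd03" docBase="../../crowd-webapp">
    <Manager pathname=""/>
</Context>

After the restart, the startup log should show only one "Initializing Spring root WebApplicationContext" entry for the Crowd web application.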
Please note that you are probably being impacted by the following bug: