'Unexpected bytes from remote node' error after upgrading Confluence Data Center to 7.18.1 or later
Platform Notice: Data Center - This article applies to Atlassian products on the Data Center platform.
Note that this knowledge base article was created for the Data Center version of the product. Data Center knowledge base articles for non-Data Center-specific features may also work for Server versions of the product, however they have not been tested. Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.
*Except Fisheye and Crucible
Problem
Confluence fails to start with an "Unexpected bytes from remote node, closing socket" error after:
Upgrading Confluence Data Center to one of the following versions:
7.18.1 or later
7.17.4 or later 7.17.x version
7.16.4 or later 7.16.x version
7.15.2 or later 7.15.x version
7.14.3 or later 7.14.x version
7.13.7 or later 7.13.x version
7.4.17 or later 7.4.x version
Alternatively, if you have a Confluence cluster deployed using Kubernetes and Helm charts, the cluster may start successfully but the logs are filled with the same warning messages.
The following error appears in the atlassian-confluence.log:
2022-06-03 09:42:44,473 WARN [hz.confluence.cached.thread-2] [hazelcast.nio.tcp.TcpIpAcceptor] log [10.9.32.224]:5801 [confcluster1] [3.12.11] com.atlassian.confluence.impl.cluster.hazelcast.interceptor.authenticator.NodeConnectionException: Unexpected bytes from remote node, closing socket
com.atlassian.confluence.impl.cluster.hazelcast.interceptor.authenticator.NodeConnectionException: Unexpected bytes from remote node, closing socket
at com.atlassian.confluence.impl.cluster.hazelcast.interceptor.authenticator.DefaultClusterJoinManager.checkNodeAuthenticationEnabled(DefaultClusterJoinManager.java:68)
at com.atlassian.confluence.impl.cluster.hazelcast.interceptor.authenticator.DefaultClusterJoinManager.accept(DefaultClusterJoinManager.java:47)
at com.atlassian.confluence.impl.cluster.hazelcast.interceptor.ClusterJoinSocketInterceptor.onAccept(ClusterJoinSocketInterceptor.java:49)
at com.hazelcast.nio.NodeIOService.interceptSocket(NodeIOService.java:300)
at com.hazelcast.nio.tcp.TcpIpAcceptor$AcceptorIOThread.configureAndAssignSocket(TcpIpAcceptor.java:316)
at com.hazelcast.nio.tcp.TcpIpAcceptor$AcceptorIOThread.access$1400(TcpIpAcceptor.java:138)
at com.hazelcast.nio.tcp.TcpIpAcceptor$AcceptorIOThread$1.run(TcpIpAcceptor.java:305)
at com.hazelcast.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
Cause
Confluence fails to start
This problem occurs when you start the second node before the first node has fully started up after the upgrade. You must wait for node 1 to start completely before starting the next node. This particular issue only happens the first time Confluence is started after upgrading to one of the versions listed above.
Confluence logs in Kubernetes cluster with warning messages
If the Confluence logs in your Kubernetes cluster are filled with the same "Unexpected bytes from remote node" warning messages, and the IP address mentioned in the warning does not belong to any of the Confluence pods in the cluster, the service type in your YAML file may be set to LoadBalancer without the additional configuration that service type requires.
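For illustration, the fragment below is a minimal sketch of the kind of configuration that can trigger this behaviour, assuming the key layout of the Atlassian Data Center Helm chart for Confluence (confluence.service.type in values.yaml); verify the exact keys and default port against your chart version:

# Hypothetical values.yaml fragment (assumed Atlassian Data Center Helm chart layout)
confluence:
  service:
    # LoadBalancer exposes the service outside the cluster; per this article,
    # using it without the additional configuration it requires can result in
    # non-cluster connections that produce the warning above.
    type: LoadBalancer
    port: 80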
Resolution
Confluence fails to start
To resolve this issue, restart the first Confluence node.
We always recommend starting your nodes one at a time and waiting for the 'Confluence is ready to serve' message in the atlassian-confluence.log before attempting to start the next node.
Confluence logs in Kubernetes cluster with warning messages
If you do not need to make Confluence available from outside the Kubernetes cluster, or you expose it through an ingress controller, consider using the default service type from the Confluence Helm chart, ClusterIP.
Otherwise, if you need to make Confluence available from outside the Kubernetes cluster without an ingress controller, review the service configuration options in the Confluence Helm chart documentation for the service type you choose.
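For reference, here is a minimal values.yaml sketch with the chart's default ClusterIP service type (again assuming the Atlassian Data Center Helm chart layout; verify against your chart version):

# Minimal sketch (assumed Atlassian Data Center Helm chart layout)
confluence:
  service:
    # ClusterIP, the chart default, keeps the service reachable only from
    # inside the cluster; expose Confluence externally via an ingress instead.
    type: ClusterIP
    port: 80

After updating the values file, apply the change with helm upgrade using your existing release name and values file.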