SSH Fails Intermittently due to upload-pack: Resource temporarily unavailable

Platform notice: Server and Data Center only. This article only applies to Atlassian products on the Server and Data Center platforms.

Support for Server* products ended on February 15th 2024. If you are running a Server product, you can visit the Atlassian Server end of support announcement to review your migration options.

*Except Fisheye and Crucible

Problem

  • SSH git clone fails intermittently while HTTP(S) works as normal.
  • This is noticed after an upgrade to Stash 3.6.0 - 3.10.x.

The following appears in catalina.log:

30-Jun-2015 04:38:43.453 SEVERE [ajp-nio-127.0.0.1-8009-exec-11] org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun 
 java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:713)

The following appears in atlassian-stash.log:

2015-06-29 05:17:36,139 ERROR [ajp-nio-127.0.0.1-8009-exec-31] xxxx @xxxx 127.0.0.1 "GET /scm/something/something.git/info/refs HTTP/1.1" c.a.s.i.s.g.p.h.GitSmartExitHandler something/something_shared[292]: Read request from 127.0.0.1 failed: com.atlassian.utils.process.ProcessException: Non-zero exit code: 255
The following was written to stderr:
error: cannot fork() for git-http-backend: Resource temporarily unavailable 

Diagnosis

Environment

  • Large enterprise server with an environment similar to the following:

     <cpus>64</cpus>

Diagnostic Steps

  • Enabling Git debug tracing as below shows the logging getting stuck while Git tries the identity file (SSH key) under .ssh:

    export GIT_TRACE_PACKET=1
    export GIT_TRACE=1
    git clone ssh://admin@<mystash.com>/something/something.git
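Because catalina.log reports java.lang.OutOfMemoryError: unable to create new native thread, it also helps to check thread usage on the server while the problem is occurring. The following is a minimal sketch, assuming a Linux host; the pgrep pattern used to find the Stash JVM is an assumption, so adjust it to match your install:

    # Find the Stash JVM and count its native threads (pgrep pattern is an assumption)
    STASH_PID=$(pgrep -f catalina | head -n 1)
    grep Threads /proc/${STASH_PID}/status

    # Per-user process/thread limit for the user running Stash
    ulimit -u

If the thread count is near the default pool size computed in the Cause section below (or near the ulimit -u value), the diagnosis matches this article.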

Cause

The default maximum thread count in Stash 3.6.0 - 3.10.x is too high for an enterprise server with many hyper-threaded cores. The change was made in BSERV-6868 (Default executor.max.threads may be too small for larger instances) to fix instances having insufficient threads. However, on a 64-CPU server the internal thread pool can now grow up to 100 + 20 * 64 = 1380 threads, at which point the server can no longer create new threads because it has run out of (virtual) memory.

The following are the defaults per Stash version:

3.11.0: executor.max.threads=${scaling.concurrency}
3.6.0 - 3.10.x: executor.max.threads=100+20*${scaling.concurrency} (where ${scaling.concurrency} is the number of CPUs)
2.11.x: executor.max.threads=100
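To see what the 3.6.0 - 3.10.x default resolves to on a given host, evaluate the formula against the number of logical (hyper-threaded) cores. A minimal sketch, assuming a Linux host where getconf reports the online processor count:

    # What executor.max.threads defaults to on this host under Stash 3.6.0 - 3.10.x
    CPUS=$(getconf _NPROCESSORS_ONLN)
    echo "100 + 20 * ${CPUS} = $((100 + 20 * CPUS)) threads"

On the 64-CPU server described above, this evaluates to the 1380 threads noted in the Cause.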

 

Workaround

This is a known bug; as a workaround, override the default value of executor.max.threads:

  • Shut down Stash
  • Add the following to stash-config.properties, which caps the pool at twice the number of CPUs (see the verification sketch after these steps):

    executor.max.threads=2*cpu

  • Start Stash
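After Stash is back up, you can confirm the thread pool stays bounded by re-checking the JVM's native thread count from the diagnostic step above. A minimal sketch, again assuming a Linux host and the same (assumed) pgrep pattern:

    # Re-check the JVM's native thread count after applying the workaround
    STASH_PID=$(pgrep -f catalina | head -n 1)
    watch -n 5 "grep Threads /proc/${STASH_PID}/status"

On the 64-CPU server above, 2*cpu resolves to 2 * 64 = 128 threads, well below the 1380-thread default that exhausted memory.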

Resolution

This bug has been fixed in Stash 3.11.0; see BSERV-7616 (executor.max.threads is too big for instances with too many CPUs).
