'receive-pack' timed out on server.fatal: The remote end hung up unexpectedly

Symptoms

The following appears after a git push:

git push origin --all
Counting objects: 9554, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (5064/5064), done.
Writing objects: 100% (9554/9554), 2.79 GiB | 444.00 KiB/s, done.
Total 9554 (delta 4382), reused 9554 (delta 4382)
'receive-pack' timed out on server.fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly


Cause

The push is large (for example, the initial push of a very big repository) and takes longer than Bitbucket Server's process execution timeout, so the server terminates the receive-pack process mid-push.
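
To gauge how much data an initial push would transfer (a diagnostic step that isn't part of this article, but uses standard git), you can inspect your local repository's packed size:

git count-objects -vH

The size-pack figure approximates what git push origin --all will have to send on the first push.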

Resolution

Git is a very fast and efficient SCM when it comes to text-based content such as source files. It uses delta compression to efficiently compress the contents and history of the repository, which results in a compact repository. You'll often find that a git repository will be half the size of the equivalent Subversion repository or even smaller. This compression does come at a cost: calculating the delta is CPU and memory intensive, especially for big files. It also affects some of the consistency checks git does: instead of dealing with simple objects, it has to consider a chain of deltas.

For a code repository, this works really well. Changes and pushes are usually fairly small so the added overhead of delta compression is small. The result is a small repository that is faster to clone (and fetch). Unfortunately, these optimisations make git less than ideal for repositories with large binary contents. Here the overhead of delta compression and the consistency checks are really noticeable. The individual objects are large, which means that the internal caches that git uses are often not big enough and the consistency checks take a long time to complete because git has to re-read objects from disk all the time. There are a couple of things you can do, but each has its own tradeoffs:

Option 1 - Increase the process timeout so the initial push can complete

The initial push will be particularly slow. Subsequent pushes won't be nearly as bad, but will still be fairly slow. If you go with this option, set the process timeout large enough to be sure that the initial push will complete (set it to a day if you want). After the initial push completes, it's best to set it back to a lower value: the timeout is there purely as a safety mechanism to ensure that runaway processes get cleaned up, should they occur. To do this, set the plugin.bitbucket-git.backend.timeout.execution property in Bitbucket Server's configuration properties.
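
As a minimal sketch, assuming the property lives in bitbucket.properties and takes a value in seconds (86400 below is an illustrative one-day value, not a recommendation):

# bitbucket.properties
# Allow git processes such as receive-pack to run for up to one day
plugin.bitbucket-git.backend.timeout.execution=86400

Bitbucket Server needs to be restarted for the change to take effect, and the value should be lowered again once the initial push has gone through.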

Option 2 - Configure the repository to not compress particular file types

This will ensure that the large binary files in your repository won't be compressed, which will make pushing faster. However, it will also make the repository significantly larger and therefore slower to clone, and it'll make all local clones of that repository larger, so it's a tradeoff. If you want to try this option, you can add a .gitattributes file to your repository (on the master branch). As an example, the following .gitattributes file would disable delta compression for .psd files and mark them as binary so git won't try to perform textual diffs on them:

# Treat Photoshop files as binary (no textual diffs) and skip delta compression
*.psd binary
*.psd -delta

Commit the file and have git repack the repository before pushing it to Bitbucket Server (this will probably be slow):

# Repack all objects (-a), drop redundant packs (-d) and recompute deltas (-f)
# so the new .gitattributes settings are applied to existing objects
git repack -a -d -f

Option 3 - Remove large objects from your repository

http://blogs.atlassian.com/2014/05/handle-big-repositories-git/ describes a number of techniques you can use: strip large objects from your history, split your repository into multiple repositories, or use git extensions such as git-annex to manage large binary assets.
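
As an illustration of one such technique (the file path below is a placeholder, and this exact invocation is not prescribed by the linked article), git filter-branch can strip a large file from every commit in your history. Note that this rewrites history, so every existing clone will need to be re-created:

git filter-branch --index-filter \
  'git rm --cached --ignore-unmatch path/to/large-asset.psd' \
  --tag-name-filter cat -- --all
# Then expire old references and repack before pushing:
git reflog expire --expire=now --all
git gc --prune=now --aggressive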
