Users getting "Packet corrupted" from SSH sessions (BB-15701)

Issue #12314 open
Jesse Yowell (staff) created an issue

We're getting numerous reports of connections failing with "Packet corrupted" when SSHing to our servers. This is still under investigation by our network engineering team and one of our upstream providers (Level3). For now, this ticket will serve as a placeholder for notifications on any updates we come across.

Comments (54)

  1. Claire Knight

    This has been open since 2nd February and it's now the 22nd... Nothing found out? All I've been told so far is to use another ISP. You do realise that one signs up to a contract with these people. And in the UK at least, even if I changed company, the actual line and local network would be the same?

  2. Tim Burt

    I've tried with the altssh link and get the same error on my affected repo.

    Received disconnect from 104.192.143.16: 2: Packet corrupt
    fatal: The remote end hung up unexpectedly
    error: pack-objects died of signal 13

    For me at least, this isn't a global issue. It only affects one of the repositories I have. I was pointed to this ticket by support, but I'll also update my specific ticket with the above information.

  3. David Peck

    I couldn't switch back to HTTPS. After disabling Multi-factor Authentication, it still wouldn't accept anything pushed to HTTPS. Separate issue I guess.

  4. David Peck

    I was able to resolve my issue by VPNing to another location (TunnelBear have a free service), and after pushing the troublesome commit all works fine.

  5. Jim Britton

    I'm seeing the same thing - pushing a 33 MB commit (from the UK). SSH fails with 'type 2 error, packet corrupt'. HTTPS fails with error 10054 (connection closed by remote host).

    Edit... after retrying lots of times I've been able to complete this push over SSH

  6. Peter Keszthelyi

    Similar issues from Belgium.

    Edit: Just got home from the office and successfully pushed the same commit that didn't work from the office. Same ISP though. Could be some network configuration issue.

  7. Toby Stokes

    Also been having problems pushing to Bitbucket from the UK (ISP: TalkTalk). The workaround is to tunnel to a different location (yes, TunnelBear to Canada works for me too!).

    From my experience, this seems to affect connections via both HTTPS and SSH, and about 0.5 MB appears to be the maximum size before it fails.

  8. Stuart Ellis

    FWIW, we have consistently had this problem with one repository over the past few days, and I'm happy to give details to a BitBucket engineer and run diagnostics to help isolate this issue.

  9. Jesse Yowell (staff, reporter)

    If you're still experiencing this issue, we recommend trying to power cycle your modem then attempt to push. We're aware that there is some issue with TCP tx checksumming, but the power cycle appears to fix this in some cases.
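
    If you're comfortable at the command line and are on Linux, it may also be worth checking whether checksum offloading on your network card is part of the problem. This is only a diagnostic sketch; the interface name eth0 below is just an example, so substitute your own:

    # show the current checksum offload settings
    ethtool -k eth0 | grep -i checksum
    # temporarily turn TX/RX checksum offload off, then retry the push
    sudo ethtool -K eth0 tx off rx off

    Remember to turn offloading back on (tx on rx on) once you've tested.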

  10. Joe Mosley

    This had gone away, so I was quietly watching to see what the fix was. However, I'd been working all weekend on new code for a major project, went to push it this morning and, lo and behold:

    fatal: The remote end hung up unexpectedly
    fatal: sha1 file '<stdout>' write error: Broken pipe

  11. Jack Chen

    I added some files to my repo today. Each individual file is less than 800 KB. However, I cannot git push to Bitbucket, even after trying to increase the HTTP post buffer size.

    Counting objects: 47, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (36/36), done.
    packet_write_poll: Connection to 104.192.143.2: Broken pipe
    Writing objects: 53% (25/47), 2.45 MiB | 284.00 KiB/s
    fatal: The remote end hung up unexpectedly
    fatal: sha1 file '<stdout>' write error: Broken pipe
    error: failed to push some refs to ....

  12. stíobhart matulevicz

    Another "me too", I'm afraid. Also from UK. The repo in question has only successfully pushed once, in about 20+ attempts. Same error every time:

    Delta compression using up to 4 threads.
    Compressing objects: 100% (273/273), done.
    Received disconnect from 104.192.143.2: 2: Packet corrupt
    Disconnected from 104.192.143.2
    fatal: The remote end hung up unexpectedly
    error: failed to push some refs to <repo name>
    
  13. James Britton

    No update on this? It's making it very hard to work with BitBucket. Last post on this from Atlassian was over a month ago. The suggestion to reboot the router doesn't make any difference (unsurprisingly...)

  14. Rob Gilton

    I too have been experiencing this problem. I tried some rate-limiting, but that didn't seem to help. Pushing via a remote machine helped.

    For those who have a remote machine they can shell into, which is on an internet connection that can push to Bitbucket successfully, you can stick something like this in your ~/.ssh/config:

    Host bitbucket.org
        ProxyCommand ssh $REMOTE_HOST nc bitbucket.org 22
    

    (Replacing $REMOTE_HOST with the host name of your remote machine.)
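
    A quick way to double-check that ssh is actually going through the proxy (hedged; the exact debug wording can vary between OpenSSH versions):

    ssh -v -T git@bitbucket.org 2>&1 | grep -i "proxy command"

    You should see a line like "Executing proxy command: exec ssh ... nc bitbucket.org 22" before authentication starts.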

    The bandwidth limiting approach I tried successfully limited the bandwidth, but didn't fix the push. In case it's useful to anyone:

    Host bitbucket.org
        ProxyCommand pv -q -L 100k | nc bitbucket.org 22 | pv -q -L 100k
    
  15. stíobhart matulevicz

    @Claire Knight "This has been open since 2nd February and it's now the 22nd..."

    Well, in a day or two it'll be 2nd June. So this ticket will have been open for 4 months with no progress.

    I've tried the bandwidth limiting trick, which seems to have worked for a few folks, using the OS X equivalent, Network Link Conditioner [direct download link], but no joy. Even dropping bandwidth down through 500, 400, 300, 200, 100 to 50 kb/s, I still got the same error every time.

  16. Marv NA

    Have just experienced this issue (also on BT). After several attempts I ended up running aggressive garbage collection and using git gui's compress-database option, which took the number of objects down from 216 to 208 and then 205. The second attempt with 205 objects to push went through; all the others failed with either packet corrupt or broken pipe. (Rough command-line equivalents are sketched below.)
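
    For anyone who'd rather do this from the terminal, this is roughly what the GUI steps correspond to (I believe git gui's "compress database" runs a garbage collection under the hood, but treat that as an assumption; the branch name is just an example):

    # check how many loose objects are in the repo
    git count-objects -v
    # aggressively repack and garbage-collect, then retry the push
    git gc --aggressive --prune=now
    git push origin master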

  17. Flo

    My colleagues and I are currently facing the same issue on different workstations (Mac OS X and Windows) using Sourcetree. How to reproduce the error: it's simple. I just change a binary file larger than 1 MB (e.g. a PNG image, recolouring some pixels), then commit the changes and try to push afterwards (see the sketch at the end of this comment).

    We receive the following responses...

    ...on push via SSH:

    git -c diff.mnemonicprefix=false -c core.quotepath=false push -v --tags origin master:master

    Pushing to ssh://....@bitbucket.org/.....git

    FATAL ERROR: Server sent disconnect message type 2 (protocol error): "Packet corrupt"

    fatal: sha1 file '<stdout>' write error: Broken pipe
    fatal: The remote end hung up unexpectedly

    error: failed to push some refs to 'ssh://....@bitbucket.org/.....git'

    ...on push via HTTPS:

    git -c diff.mnemonicprefix=false -c core.quotepath=false push -v --tags origin master:master

    POST git-receive-pack (2129696 bytes)

    fatal: The remote end hung up unexpectedly
    fatal: The remote end hung up unexpectedly

    error: RPC failed; result=55, HTTP code = 0

    Pushing to 'https://...@bitbucket.org/.....git'

    It looks like some of my colleagues received this response for the first time on 2016-05-19, while at that time it was still working for other repositories. Since 2016-06-01 it is definitely the same behaviour for all our currently running projects and on all workstations. We are based in Austria, if that's of any relevance.

    I also tried reconfiguring git with the following commands:

    git config http.postBuffer 524288000

    git config https.postBuffer 524288000

    git config ssh.postBuffer 524288000

    and

    git config --global http.postBuffer 524288000

    git config --global https.postBuffer 524288000

    git config --global ssh.postBuffer 524288000

    By using a mobile internet connection (tethering to a phone) it is possible to push such a local commit. I also tried NetBalancer (https://netbalancer.com/) to slow down my upload and download speed (to simulate a mobile internet connection), but the push still fails with the same response.

    Hope this information helps you find a solution.
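
    To make the reproduction steps above concrete, something like this triggers it for us (the file path and size are only illustrative, not the exact file from our project):

    # create or overwrite a binary file larger than 1 MB (2 MB of random data)
    dd if=/dev/urandom of=assets/header.png bs=1048576 count=2
    git add assets/header.png
    git commit -m "Change binary file larger than 1 MB"
    git push origin master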

  18. stíobhart matulevicz

    I'm starting to think this is an issue with git itself, rather than BitBucket per se.

    After trying unsuccessfully many times to push the repo I was having problems with [and trying the fixes outlined here, such as using HTTPS, increasing postBuffer, etc.], I tried pushing the repo to Gitlab instead and got exactly the same error.

    The repo in question was a website, and one of the directories contained a couple of Photoshop files – not huge, but a few MB in size. When I added this folder to my .gitignore and used git rm -r --cached . to untrack those files and then pushed again, it worked first time – on both Bitbucket and Gitlab.
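
    In case it helps anyone else, the steps were roughly as follows (the folder name is just a placeholder for wherever your large binaries live):

    # stop tracking the folder with the Photoshop files
    echo "assets/psd/" >> .gitignore
    git rm -r --cached .
    git add .
    git commit -m "Stop tracking large binary assets"
    git push

    Note this only untracks the files going forward; it doesn't rewrite earlier commits.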

    The odd thing is that the vast majority of my repos contain the source for various websites, including Photoshop docs for the graphical assets, and I've never run into this problem before.

    I wonder whether there has been a recent update to git itself which might have broken something when it comes to handling files over a couple of MB in size? And, given that most of us experiencing this problem seem to be in Europe, whether this putative bug is somehow triggered geographically?

  19. Uldis Pirags

    Same problem here with my repo:

    cmd$ git count-objects -Hv
    count: 2629
    size: 82.48 MiB
    in-pack: 0
    packs: 0
    size-pack: 0 bytes
    prune-packable: 0
    garbage: 0
    size-garbage: 0 bytes
    
    cmd$ git push origin develop
    Counting objects: 2625, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (2456/2456), done.
    packet_write_wait: Connection to 104.192.143.3: Broken pipe
    fatal: The remote end hung up unexpectedly
    error: pack-objects died of signal 13
    error: failed to push some refs to 'git@bitbucket.org:team/repo.git'
    

    Increasing the postBuffer did not help, but connecting my MacBook to my iPhone via hotspot and using Network Link Conditioner with the 3G preset made it possible to push (a command-line alternative for throttling is sketched at the end of this comment).

    ...
    Writing objects:  36% (945/2625), 34.82 MiB | 25.00 KiB/s
    
    ...
    
    Writing objects: 100% (2625/2625), 70.64 MiB | 26.00 KiB/s, done.
    Total 2625 (delta 564), reused 0 (delta 0)
    

    This is obviously not an acceptable solution in the long term.
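
    If you're on a Linux box and want to try the same throttling trick without Network Link Conditioner, traffic shaping with tc should approximate it. The interface name and rate below are only examples:

    # throttle outgoing traffic on eth0 to roughly 3G-like speeds
    sudo tc qdisc add dev eth0 root tbf rate 256kbit burst 32kbit latency 400ms
    git push origin develop
    # remove the shaping again afterwards
    sudo tc qdisc del dev eth0 root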

  20. Simon Olsen

    I'm having issues like this as well in Australia. Both my Macs have issues pulling and pushing with Bitbucket. I get Packet corrupt or Packet integrity error. Yet, when I tether my iPhone it works without issue. I even got my ISP (Telstra) to send out a new modem (not just for this problem, for multiple problems), but this didn't fix it either.

  21. stíobhart matulevicz

    Can some of you do what I did [mentioned in my previous post] and try pushing one of your affected repos to GitLab as well, to see if the problem persists?

    It would be helpful to know if this is a Git problem generally, rather than a BitBucket problem specifically.

  22. Flo

    We could solve the problem by switching to another firewall. Now it is possible for us to push without any problems.

    @stíobhart matulevicz During the time we had this problem, we also tested pushing something to our old GitLab installation. It worked without any problems! But the installation is quite old, so I am not sure how representative it is: GitLab 8.4.4, GitLab Shell 2.6.10, GitLab API v3, Git 2.6.2, Ruby 2.1.8p440, Rails 4.2.5.1.

  23. Chad Monroe

    I have a connection directly to the Level3 backbone (500 Mbps uplink) and a backup Cox cable connection. I see the above issue primarily on pull, using either connection. Have also tried altssh to no avail. It happens randomly: sometimes the clone works, other times it fails. If I put the same repo on GitHub I have no issues.

  24. stíobhart matulevicz

    Is this ever going to get fixed? Or, if the problem lies not with Bitbucket, but with Git or SSH, are we any nearer a solution? It's making Bitbucket nigh on unusable for me.

    Any time I work on a project involving files over a few kb in size [which is the majority of my projects, seeing as they are mostly websites], I'm getting this problem almost every time I try and push changes.

    We're not talking huge files here, either. Maybe a handful of web-optimised images in a new commit; 1 or 2 MB at most. But I get the fatal: The remote end hung up unexpectedly every time. The only way round it I've found is to remove all the new images from the project and then add them back one at a time, doing a push after each individual one – which is a major PITA. And even that doesn't always work.

    Here's the result of running git push in verbose mode [GIT_SSH_COMMAND="ssh -v" git push]:

    <snip authentication stuff>
    
    debug1: Authentication succeeded (publickey).
    Authenticated to bitbucket.org ([104.192.143.2]:22).
    debug2: fd 7 setting O_NONBLOCK
    debug2: fd 8 setting O_NONBLOCK
    debug1: channel 0: new [client-session]
    debug3: ssh_session2_open: channel_new: 0
    debug2: channel 0: send open
    debug1: Entering interactive session.
    debug2: callback start
    debug2: fd 5 setting TCP_NODELAY
    debug3: ssh_packet_set_tos: set IP_TOS 0x08
    debug2: client_session2_setup: id 0
    debug1: Sending environment.
    debug3: Ignored env TERM_PROGRAM
    debug3: Ignored env TERM
    debug3: Ignored env SHELL
    debug3: Ignored env TMPDIR
    debug3: Ignored env Apple_PubSub_Socket_Render
    debug3: Ignored env TERM_PROGRAM_VERSION
    debug3: Ignored env TERM_SESSION_ID
    debug1: Sending env LC_ALL = en_GB.UTF-8
    debug2: channel 0: request env confirm 0
    debug3: Ignored env ZSH
    debug3: Ignored env USER
    debug3: Ignored env SSH_AUTH_SOCK
    debug3: Ignored env __CF_USER_TEXT_ENCODING
    debug3: Ignored env PAGER
    debug3: Ignored env LSCOLORS
    debug3: Ignored env PATH
    debug3: Ignored env _
    debug3: Ignored env PWD
    debug3: Ignored env ITERM_PROFILE
    debug3: Ignored env XPC_FLAGS
    debug3: Ignored env XPC_SERVICE_NAME
    debug3: Ignored env SHLVL
    debug3: Ignored env HOME
    debug3: Ignored env COLORFGBG
    debug3: Ignored env LANGUAGE
    debug3: Ignored env GIT_SSH_COMMAND
    debug3: Ignored env ITERM_SESSION_ID
    debug3: Ignored env LESS
    debug3: Ignored env LOGNAME
    debug1: Sending env LC_CTYPE = UTF-8
    debug2: channel 0: request env confirm 0
    debug3: Ignored env GOPATH
    debug3: Ignored env SECURITYSESSIONID
    debug1: Sending command: git-receive-pack 'madra/cometocomino.com.git'
    debug2: channel 0: request exec confirm 1
    debug2: callback done
    debug2: channel 0: open confirm rwindow 2097152 rmax 32768
    debug2: channel_input_status_confirm: type 99 id 0
    debug2: exec request accepted on channel 0
    Counting objects: 160, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (145/145), done.
    debug2: channel 0: rcvd adjust 153
    Writing objects: 100% (160/160), 1.90 MiB | 0 bytes/s, done.
    Total 160 (delta 88), reused 0 (delta 0)
    debug2: channel 0: read<=0 rfd 7 len 0
    debug2: channel 0: read failed
    debug2: channel 0: close_read
    debug2: channel 0: input open -> drain
    debug2: channel 0: rcvd adjust 8192
    debug2: channel 0: rcvd adjust 8192
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 16384
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    debug2: channel 0: rcvd adjust 32768
    packet_write_wait: Connection to 104.192.143.2: Broken pipe
    fatal: The remote end hung up unexpectedly
    fatal: The remote end hung up unexpectedly
    

    Does that mean anything to anybody?

    The debug2: channel 0: open confirm rwindow 2097152 rmax 32768 line looks suspect to me, as 'something' [the upload?] seems to be exceeding its max allowed size. But I know little, if anything about the inner workings of SSH, so am probably mis-interpreting that.

    BTW. I've tried all the fixes mentioned previously in the thread. Nothing seems to make any difference.

  25. Jim Britton

    Why is this now less important than before? Has it been fixed?

    The above post from Kaleb isn't very clear. An issue "should have" been fixed by adding SSH keep-alives, but I don't know which issue that is. Is it the issue where pushing via SSH fails with 'type 2 error, packet corrupt' and pushing via HTTPS fails with error 10054 (connection closed by remote host)? Or something else?

    If the specific issue on this ticket is still not fixed, what's changed to make it "major" rather than "critical"?

  26. Kaleb Elwert

    Hi @Jim Britton. My above comment was related to a different issue. The remote end hung up unexpectedly was an issue where we were hanging up on users while a pack operation was running, either on the server or the client, because our load balancer thought the connection was idle and closed idle connections after 5 minutes. This was fixed by adding SSH keepalives on the server side to ensure that there's activity on the connection even when no other data is being transmitted.
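
    For reference, a client-side analogue of that keepalive looks like this in ~/.ssh/config (this is just a sketch of the idea; the actual fix was applied on our servers, so you shouldn't need it):

    Host bitbucket.org
        # send a keepalive probe every 60 seconds; give up after 3 unanswered probes
        ServerAliveInterval 60
        ServerAliveCountMax 3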

    As far as we've been able to track it down, this packet corrupt issue is happening in the transport layer, since in other instances it has been cleared up by rebooting the router on the client network. However, this one seems more specific: packet corrupt generally means that the SSH packet was mangled, so the SSH checksum failed, but the TCP checksum was still valid.

    One possibility is that the server is sending invalid data in a number of corner cases. However, we have not been able to reproduce this issue in a number of different scenarios over a long period of time, trying specific instances users have reported as not working. Because of this, we think the problem lies elsewhere.

    Another possibility is that it's happening at the ISP or network level: something between the Bitbucket servers and some clients is mangling traffic, and whether that's one of our ISPs or something between them and everyone else is yet to be determined. Our current guess is that something may be handling TCP checksumming improperly (recalculating the checksum when a bit has flipped in the packet itself), but as we haven't been able to reproduce this when trying from multiple different networks, we haven't been able to verify this hypothesis. (A capture suggestion for affected users is sketched below.)
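
    If anyone affected wants to help us check this, a packet capture taken while a push fails would be useful. A rough sketch (the interface name is just an example):

    # -vvv makes tcpdump verify and report IP/TCP checksums;
    # add -w capture.pcap if you'd rather save the raw packets for support
    sudo tcpdump -i eth0 -vvv host bitbucket.org and port 22

    Note that with checksum offloading enabled, locally generated outgoing packets are often reported as having bad checksums; that's expected and not the corruption we're looking for.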

    I hope this clears up a bit of the information around this issue and why it's taking so long for us to resolve it. We are still tracking this, but because there isn't much more we can do at this time, we're downgrading it from critical to major. If you still have additional questions or any new information, please feel free to contact support.

  27. stíobhart matulevicz

    Still no takers for my comment above?

    https://bitbucket.org/site/master/issues/12314/users-getting-packet-corrupted-from-ssh#comment-28460201

    I've been getting this behaviour, pushing repos to both Bitbucket.org and Gitlab.com, which is what makes me think the problem may lie with git itself, rather than relate specifically to Bitbucket or Gitlab --unless you've both [Bitbucket & Gitlab] got the same problem with your server config.

    It might be useful if someone else could try pushing a problematic repo to Gitlab as well [and maybe even Github]. If you get the same errors [as I've been doing], then it looks like the problem is more widespread than just a Bitbucket issue.
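
    If you've not pushed the same repo to a second host before, adding an extra remote is all it takes (the remote name and URL here are placeholders, not my actual repo):

    git remote add gitlab git@gitlab.com:<user>/<repo>.git
    git push gitlab master

    If that push fails with the same "Packet corrupt" / "remote end hung up" errors, then the problem is probably not specific to Bitbucket.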

    [As luck would have it, I'm actually finding that Bitbucket has been behaving itself of late. Fingers crossed, I've not had any repos fail to push over the past few months. Unfortunately I can't say the same about pushing them to Gitlab which, if anything, has got even worse!]
