The LFS fetch is very fast compared to doing it locally (we are using Bitbucket LFS)
I'm assuming this needs to be a public repository to work?
Do you mind expanding on how you managed to get it to work? I'm just getting spammed with "warning: current Git remote contains credentials".
Your workaround does not seem to work anymore. In the build log I can see that it finds the objects, but it's unable to download them. Git LFS does not support ssh:// as a protocol, so I'm not sure how you got it to work that way.
Also interesting that not being able to download any objects has an exit status of 1; in my mind that is a pretty massive failure and should terminate the build. It took some time for me to actually see that it was failing. You also need to enable tracing to see what it's actually failing on: GIT_TRACE=1 git lfs pull. In short, it seems to fall back on basic authentication using the tokens, which does not work: you get a 401.
For others who manage to find their way in here through the jungle of SEO that makes everything related to Atlassian products impossible to debug and Google: in the current state you should only consider using Pipelines and Git LFS if your repositories are public. Don't waste your time until Atlassian addresses these somewhat fundamental issues directly or provides documentation that shows you how to make it work. Not worth your time.
Ah, I see the confusion. We are using a custom docker image that has our ssh credentials burned in. It's a version of the image we deploy, so of course it must be able to pull the private repo (and does).
The documentation on Git LFS is so contradictory. On the one hand, they say only http(s) endpoints with basic authentication work. On the other hand, I have never set that up locally, and it works perfectly. Looking at the issue tracker, they seem to use the terms ssh and http interchangeably. But it looks like the actual download is done over http(s). It's just very confusing.
Looking at git lfs env, it's pretty obvious why it's failing: they are adding an endpoint that doesn't exist or doesn't work with Git LFS. Normally, when you use an SSH endpoint/remote, it should invoke git-lfs-authenticate, which returns a token that is valid for a certain time. This is then in turn used to download the files over http(s). This does not currently work, since the endpoint is set to a non-functioning URL. I don't even understand where that URL comes from.
It's in a form like this; if you look locally, you will see that you have two endpoints, one for https and one for ssh: Endpoint=https://x-token-auth:%7Baccess_token%7D@bitbucket.org/foo/bar.git/info/lfs (auth=none)
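For anyone who wants to poke at this themselves, the endpoint can be inspected with git lfs env and overridden by hand; a minimal sketch of the config mechanics (the foo/bar repo path is a placeholder, and this assumes you have some other way to authenticate against the HTTPS endpoint):

```shell
# Work in a throwaway repo just to demonstrate the config mechanics.
cd "$(mktemp -d)"
git init -q .

# Override the endpoint git-lfs would otherwise derive from the remote;
# `git lfs env` would then report this URL as the Endpoint.
git config lfs.url "https://bitbucket.org/foo/bar.git/info/lfs"
git config lfs.url   # prints the endpoint now in effect
```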
To clarify my position on this issue: I did not offer my workaround as justification for the bug being marked "minor"; it was more to share a solution with fellow travelers, having experienced Atlassian's legendary customer support on previous cloud offerings (living and EOL'd).
I completely understand, a little odd that we get to assign the priority when reporting issues.
I'm just keeping my findings in this thread in case someone finds them, and to let the staff have it all in the same place.
Same problem here. Is there a known workaround for private repositories? I'd rather not add SSH credentials or deployment keys to my Docker container.
Issue #13541 was marked as a duplicate of this issue.
Same problem here. I disagree that this is a 'minor' issue.
To add my 50 cents: I managed to get LFS working in this way:
- created a public base image with git LFS binary installed (in addition to tools I needed for the build): see example here: https://hub.docker.com/r/huippujanne/python-27-awscli-git-lfs/~/dockerfile/
- in my bitbucket-pipelines.yml, I'm setting up read-only SSH access to the same repository (my user is a read-only user), using a secret environment variable I added that holds the base64-encoded private key. Assuming the variable name is MY_PEM_BASE64 and you clone your repo using the URL firstname.lastname@example.org:my-repository.git, you could do it like this in the pipelines:
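A sketch of those pipeline steps (MY_PEM_BASE64 is the secured variable described above; git@bitbucket.org:foo/bar.git stands in for the real remote URL):

```shell
# Recreate the read-only user's private key from the secured variable.
mkdir -p ~/.ssh
echo "$MY_PEM_BASE64" | base64 -d > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
ssh-keyscan -t rsa bitbucket.org >> ~/.ssh/known_hosts

# Point origin at the SSH remote so git-lfs can negotiate auth, then pull.
git remote set-url origin git@bitbucket.org:foo/bar.git
git lfs pull
```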
Do you really think it's a minor bug? How can we build an LFS-enabled repo in this case? Bitbucket heavily supports/pushes the usage of LFS, but running Pipelines with such a repo is currently very dirty!?
Maybe it's documented elsewhere, but this issue was the only official recognition of the limitation I've found.
This was a real stumper for me after I found out that deployment keys don't work with LFS. Thanks Janne for the suggestion to create a read-only user. Works great for my small team, but that's sadly not a good option for teams that already have 5/10/25/50/100 users and would need to pay more to upgrade just to create a read-only user to work around this limitation. If BB could add deployment key support to LFS, that would at least solve the extra-user problem. I know I could use an "App password", but the fact that it links my user to automated processes puts me off.
I too would like to see this elevated above a minor bug. It's a real pain that this doesn't just work. I also agree with Matt about adding it to the "Current Limitations" page -- it's confusing that this thread is essentially the only place online where you can find out that LFS and BB pipelines don't play well together.
Sounds like a great feature to have out of the box down the line!
For those who want a solve now: I've just extended my node image with git-lfs and it works like a charm.
Last but not least add it to your pipeline.yml file: image: rebelpoidoctor/node-4-6-0_git-lfs
Hope it helps anyone!
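For reference, the "extend your image with git-lfs" approach generally comes down to installing the binary at image-build time; a sketch, assuming a Debian-based image (like the official node images) and the standard packagecloud install script:

```shell
# Typically run in a Dockerfile RUN step.
# Add the git-lfs package repository, install the binary, and
# register the LFS filters so `git lfs pull` works in the build.
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash
apt-get install -y git-lfs
git lfs install
```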
I second the notion that this is not a minor bug. Given Atlassian is pushing Bitbucket with lfs, then please support git-lfs pull...please.
I just want to say that Janne Nykänen's answer works in the interim while Atlassian gets their act together. It is quite annoying that using LFS breaks Pipelines without a hacky workaround. I hope they do something to fix this soon.
I have also made a custom Docker image with git-lfs already installed, and added this in the script:

```shell
# Rewrite the SSH-style remote to an HTTPS one carrying credentials,
# then pull the LFS objects.
git config remote.origin.url "$(git config remote.origin.url | sed -n "s,.*\?:,https://$GIT_CRED@bitbucket.org/,p")"
git lfs pull
```
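That sed rewrite turns an scp-style SSH remote into an HTTPS URL with embedded credentials; a standalone check of the idea, using the simpler pattern `.*:` (which behaves the same on scp-style URLs) and x:y as placeholder credentials in place of $GIT_CRED:

```shell
# scp-style SSH remote, as Bitbucket typically configures it.
url="git@bitbucket.org:foo/bar.git"

# Replace everything up to the last colon with the HTTPS prefix.
printf '%s\n' "$url" | sed -n "s,.*:,https://x:y@bitbucket.org/,p"
# -> https://x:y@bitbucket.org/foo/bar.git
```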
This issue has been open for about a year now, and I fail to see the complexity of adding this.
Can you please take action?
I've just enabled LFS and encountered the same issue - it breaks my build.
Don't want to do a hacky workaround; it would be nice to have Git LFS support in the pipeline.
The real "fun fact" is that this ticket has been open for 16 months. I will have fun reverting my LFS changes...
Will it be fixed? 16 months for a simple git lfs pull at checkout during Pipelines builds?
I think I have found a workaround that should work with any Docker image.
One needs to add the Pipelines SSH public key as an access key in Bitbucket.
The key is in Settings -> Pipelines -> SSH keys; there is no need to add a known host.
The script then does this:
install the git-lfs package using the standard command
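A sketch of that build script, assuming a Debian-based image (the access key added above handles authentication over SSH):

```shell
# git-lfs is in the default repositories on recent Debian/Ubuntu;
# older images may need the packagecloud repository added first.
apt-get update && apt-get install -y git-lfs
git lfs install   # register the LFS filters
git lfs pull      # fetch the large files over the SSH remote
```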
OK, after all the suggestions mentioned in this issue, I managed to build my repository that has some LFS files.
Steps to do this:
Prepare a custom Docker image with Git LFS built in (alpine-based Dockerfile example, lines 18-25); in the case of an alpine distribution, I also added the openssh package in order to be able to use SSH from within the alpine image.
Allow SSH access from Pipelines to your repository:
go to '<Repository> \ Settings \ Pipelines \ SSH keys' and copy public key (click 'Copy public key' button)
go to '<Repository> \ Settings \ General \ Access keys' and add this copied pipelines' public key to the list of allowed SSH keys on your repository (click 'Add key' button and insert copied public key, give it a reasonable label)
add an additional Git LFS pull at the beginning of your build script in order to pull the large files from the repository.
In my case bitbucket-pipelines.yml build script looks like this:
```yaml
image: trustypanda/maven:alpine-3.5.2-openjdk8-lfs

pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          # Modify the commands below to build your repository.
          - git lfs pull
          - cd src        # change to project directory
          - mvn -B verify # -B batch mode makes Maven less verbose
```
Enjoy successful builds on your LFS repo!
Of course, you can avoid building a custom Docker image with pre-built Git LFS and instead do the Git LFS install step during your build (as @emmanuellange mentioned), but then you'll spend your Pipelines build-time allowance on downloading/installing Git LFS every time you run your builds, and time is money in terms of Pipelines :)
Thanks for sharing your steps to get this working, @pure-apricot.
We realise the steps required for this are far from ideal, and it seems relatively straightforward to fix this on our side. So we're now planning to add built-in LFS support to Pipelines in the near future.
The clone command above has the skip smudge flag, which indicates the feature is working as expected.
What are the differences between your default and custom pipelines?
And have you tried running the clone command with the flag locally at those commits?
The flag ensures missing LFS objects are not downloaded, but files that are still in git history and exist in your .git/lfs/objects will still be downloaded.
If you think it is still an issue with Pipelines, please raise a support ticket here so the Atlassian team can investigate further.
My pipeline is using microsoft/dotnet2.1 as the base image.
I am adding the config to enable LFS during clone, as stated in the documentation. However, no files get downloaded from LFS during the clone.
If I explicitly download git-lfs and run git lfs install before cloning, then it works.
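That fix can be sketched as follows (assuming a Debian-based image; the repo URL is a placeholder). The key point is that git lfs install must run before the clone, so the smudge filter materialises LFS files during checkout:

```shell
# Install git-lfs and register its smudge/clean filters first...
apt-get update && apt-get install -y git-lfs
git lfs install

# ...then clone: LFS pointers are smudged into real files at checkout.
git clone git@bitbucket.org:foo/bar.git
```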