Pipelines randomly fail with: "Error occurred whilst uploading artifact". (BP-1174)

Issue #15062 resolved
Pierre Barrau created an issue

Our Bitbucket Pipelines builds often fail with a "System error" saying "Error occurred whilst uploading artifact":

(Screenshot attached: Capture d’écran 2017-10-20 à 10.59.31.png)

Here is the bitbucket-pipelines.yml section for the step where the system error occurred:

- step:
    name: Generate APIBlueprint documentation
    image:
      name: <custom docker image hosted on aws ecr>
      aws:
        access-key: $AWS_ACCESS_KEY_ID
        secret-key: $AWS_SECRET_ACCESS_KEY
    script:
      - <generate documentation.apib file>
      - cp documentation.apib dist/documentation.apib
    artifacts:
      - dist/documentation.apib

Re-running the pipeline usually solves the problem, which leads me to think it's not an issue in our bitbucket-pipelines.yml file.

Comments (28)

  1. Assaf Aloni

    Same here... I just started using Bitbucket Pipelines, and unexpected behavior like this is very problematic...

    (Screenshot attached: bitbucket-pipelines-artifact-error.png)

  2. Samuel Tannous staff

    Hi all, we've identified the issue as being caused by a bug in a third-party library. A fix has been developed and we're waiting on a release. The ETA on the fix is one week. We're investigating potential workarounds in the meantime.

  3. Marco Pfeiffer

    Even an automatic retry of the upload (something like 3 attempts) would already be a huge help.
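
    In the meantime, for uploads we run ourselves from a script step, something along the lines of the bash sketch below is the kind of behaviour I mean (the `upload_with_retry` wrapper and the commented-out curl call are placeholders of mine, not anything Pipelines provides):

    # Sketch of the retry behaviour being asked for, applied to a user-run upload:
    # re-run the given command up to 3 times before giving up.
    upload_with_retry() {
      local attempts=3
      local i
      for i in 1 2 3; do
        "$@" && return 0                      # stop as soon as the command succeeds
        echo "Attempt $i of $attempts failed" >&2
        [ "$i" -lt "$attempts" ] && sleep 5   # short pause before retrying
      done
      return 1
    }

    # Example usage (placeholder upload command):
    # upload_with_retry curl -sf -X POST "$UPLOAD_URL" -F files=@dist/documentation.apib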

  4. Samuel Tannous staff

    We've deployed a workaround for this issue which should prevent all occurrences until the proper fix is released. Please let us know if you encounter any further issues.

  5. Pierre Barrau reporter

    @Gary Sackett We're still getting random System Errors; the message just no longer says it occurred while uploading an artifact. Should I reopen this issue or file a new report?

  6. Martin

    Same thing still happens for us. It has never complained about uploading artifacts though, just a system error with no other details.

    (Screenshot attached: Screen Shot 2017-12-15 at 09.29.54.png)

    We have a support ticket open (BBS-68093) if you need further details, but they have pointed us here.

  7. Gary Sackett staff

    Hi everyone, we've confirmed a new bug causing this issue. Our team is currently investigating and will add an update once we are a bit further along with the fix. Gary

  8. Seppe Stas

    For me this issue happens all the time. Note that I'm using Pipelines to build a fairly big (112 MB) Linux image. Pipelines does not seem to have any issues with the large caches I have set up, though.

    I'm a bit confused as to why the artifacts need to be "uploaded" anywhere. Wouldn't something like Docker Volumes make more sense for sharing artifacts between build steps?

    My pipelines config:

    default:
        - step:
            image: productize/oe-build-repo
            name: Download sources and build image
            script:
              - mkdir -p ~/.ssh
              - (umask  077 ; echo $SSH_KEY | base64 --decode -i > ~/.ssh/id_rsa)
              - echo "bitbucket.org $BITBUCKET_HOST_KEY" >> ~/.ssh/known_hosts
              - mkdir -p yocto
              - cd yocto
              - repo init -q -u ${BITBUCKET_CLONE_DIR} -b ${BITBUCKET_BRANCH}
              - repo sync -q
              - rm -rf <a bunch of stuff>
              - MACHINE=imx7-var-som DISTRO=fsl-framebuffer EULA=1 . setup-environment build
              - cat conf/local.conf
              - bitbake core-image-minimal
            caches:
              - sources
              - downloads
              - yocto-cache
            artifacts:
              - yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.sdcard
              - yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.tar.bz2
    
        - step:
            name: Store image in Bitbucket downloads
            script:
              - curl -s -X POST https://${BITBUCKET_AUTH}@api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG/downloads \
                -F files=@yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.sdcard \
                -F files=@yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.tar.bz2
    
    definitions:
      caches:
        sources: ~/yocto/sources
        downloads: ~/yocto/downloads
        yocto-cache: ~/yocto/build/sstate-cache
    
  9. Matt Ryall

    @Seppe Stas - sorry to hear you're still having trouble with this. We actually fixed the root cause of this bug back in December, and verified the fix with the affected customers at the time.

    Can you please raise a support ticket, so we can investigate your problem in a bit more detail? Please include a link to an affected build, so one of our engineers can jump straight in and take a look at it.

    Thanks!

  10. Seppe Stas

    @Matt Ryall Last time I saw the issue was 18/01. I'll rerun a build and report the issue if it occurs again.

    Note that I'm caching quite a lot of stuff (sources is ~86 MB, downloads ~6 GB, and the cache ~3 GB on my local machine, but the CI build should be smaller). Could it be that I'm hitting some sort of upload quota that causes the artifact upload to fail?

  11. Samuel Tannous staff

    Hi Seppe,

    Artifacts and caches are both restricted to 1 GB. There are several options if you are hitting this limit. One is to reconsider whether you need to pass your entire build directory as an artifact, since Pipelines clones the source in every step and dependencies can be downloaded again if needed. Another is to use your own storage solution, as recommended in our documentation:

    "If you need artifact storage for longer than 7 days (or more than 1 GB), we recommend using your own storage solution, like Amazon S3 or a hosted artifact repository like JFrog Artifactory." - https://confluence.atlassian.com/bitbucket/using-artifacts-in-steps-935389074.html

  12. Seppe Stas

    Hmm, in that case it would be nice to have a more useful error like “failed to upload cache: cache exceeds maximum size of 1GB” instead of the generic “SYSTEM ERROR”.

    Using an external storage solution does not make a lot of sense to me. I thought the whole purpose of caching was to reduce internet traffic. Using the Bitbucket Pipelines cache allows everything to stay in the same data center, making it quicker and cheaper to retrieve. Uploading caches to an external storage solution kind of negates this.

    Also note that the only reason for me to use artifacts is to be able to use an image that can actually upload them; the image I use for building can’t do this. For my use cases, having a more short-lived artifact that is efficiently passed between containers with different images makes way more sense.

    Using a Docker volume that gets removed after the pipeline completes sounds like a no-brainer solution to me...

  13. Samuel Tannous staff

    Hi Seppe,

    Indeed, the error message should be more meaningful. I will look into improving that.

    I don't fully understand your use case, but caches and artifacts are separate things. Caches are designed to persist dependencies between pipelines so that subsequent builds are faster, whereas artifacts are designed to pass files between steps so that they are available later in the same pipeline (i.e. not necessarily as a performance improvement, but to achieve build-once semantics). You can use both artifacts and caches in a single pipeline to achieve different goals depending on your requirements, and each artifact/cache has a separate 1 GB limit, so defining several smaller artifacts is a possible solution to your problem.
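
    For example (reusing the paths from your config above purely as an illustration), listing the files you actually need as separate entries, rather than one broad glob over the whole deploy directory, keeps each artifact under its own limit:

    artifacts:
      # a single glob over the whole build tree can easily exceed the limit:
      # - yocto/build/tmp/deploy/**
      # separate, smaller artifacts each get their own 1 GB limit:
      - yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.sdcard
      - yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.tar.bz2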

    Also note that storing artifacts in your own S3 bucket should not come with a significant performance impact compared to the built-in artifacts, as our build cluster runs in AWS (us-east-1) and this is similar to the solution we employ for the built-in artifacts.

    We explored using Docker volumes for persistent state but it comes with added complexity as different steps are not necessarily run on the same underlying host instance.

  14. Seppe Stas

    To explain my use case a bit: I use the caches to cache git repos containing build configurations (the sources), downloaded source files (downloads) and re-usable build artifacts (yocto-cache). These are used by the Yocto build system to build embedded Linux systems. I enabled pipelines on the “root” repository, making sure the build always works from a clean state (without changes to local configuration).

    I also want to store these images so they can be retrieved later. I did this in a separate step because my build image did not have cURL installed (but now it does) and because I wanted this step to be optional in the future. This is why I need the artifacts.

    Using my own storage solution requires me to manage it, and using AWS in the same region sounds a bit like a leaky abstraction. Either way, the data would still be sent past the edge routers, which is not great from a performance and security perspective.

    Wouldn’t it be possible to enforce that some steps are run on the same instance? I think the main thing missing to accomplish this is some sort of “required-artifacts” property for subsequent steps.
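
    Something like this, say (purely hypothetical syntax to illustrate the idea, not anything Pipelines supports today):

    - step:
        name: Store image in Bitbucket downloads
        # hypothetical property: only these artifacts would need to be available,
        # so the step could be scheduled on the instance that produced them
        required-artifacts:
          - yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.sdcard
          - yocto/build/tmp/deploy/fsl-image-gui-imx7-var-som.tar.bz2
        script:
          - <upload the image>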
