Cache Docker layers between builds

Issue #14144 open
Joshua Tjhin
staff created an issue

Builds that build and push Docker images could be faster if the image layers were cached between runs.
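Until built-in layer caching lands, a common workaround is to persist the built image in a custom cache directory with `docker save` / `docker load`. A minimal sketch, assuming a custom cache is defined for the `docker-cache` directory in `bitbucket-pipelines.yml` (the directory, image name, and registry here are illustrative, not part of the original report):

```shell
# Restore previously saved image layers, if the cache was populated
# on an earlier build.
if [ -f docker-cache/built-image.tar ]; then
  docker load -i docker-cache/built-image.tar
fi

# Build the image; --cache-from lets the builder reuse layers from
# the image loaded above instead of rebuilding every step.
docker build --cache-from my-registry/my-app:latest \
  -t my-registry/my-app:latest .

# Save the freshly built image so the next run can reuse its layers.
mkdir -p docker-cache
docker save -o docker-cache/built-image.tar my-registry/my-app:latest
```

The trade-off is that `docker save`/`docker load` serialize the whole image, so for large images the save/load time can eat into the savings.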

Comments (36)

  1. Marc-Andre Roy

Same here! That would save us a lot of time, since our private Docker repo is rather slow on upload and our builder image is huge (~900 MB) with all the requirements we need.

  2. Matt Ryall staff

    A quick update on this ticket. We investigated whether we could do this, but the caching introduced some bugs in correctly building docker images when running in our infrastructure.

    So unfortunately this turned out not to be a quick fix, and we had to postpone further work until we get a few other higher priority tickets finished off. We hope to get back to fixing this before the end of the year, and will keep you posted here with any progress.

    We're aware that this is a high priority issue for many people building Docker images on Pipelines, so this is very close to the top of our list of priorities.

  3. Paul Carter-Brown

Really need this feature. Build-and-push jobs that took 30 s on our Jenkins take 4 minutes in Bitbucket Pipelines. Given that we pay for build time, and considering the productivity impact, this is really a must-have feature.

  4. Nick Boultbee

Yep, for us this is the difference between builds of 17 to 60 minutes and less than a minute locally (with the base image cached, i.e. almost always). It's particularly bad because we have heavy (>3 GB) builder images with a cache of precompiled (Haskell) binaries, to save (CPU) time, ironically...

  5. Kyle Cordes

This really seems like a showstopper for anything other than occasional, minor use of the Docker build feature. I was really surprised to see it rebuild everything from scratch on the second build; it sent me researching for quite a while to figure out whether I was somehow using Docker wrong.

  6. Matt Ryall staff

    Thanks for all the interest in this issue. We're well aware that building Docker images on Pipelines is not as fast as it could be.

    As mentioned above, we've been closing off a few higher priority improvements, so this ticket now is close to the top of our development queue. We aim to start work on it in January next year, and will have a further update around that time.

  7. Nicolás Martínez

This happened for me 15 hours ago, but I was not building a Docker image. The initial "Pulling images" stage, which pulls the pipeline's base image, had its time reduced to 20–25% of what it was. So I don't think it's exactly this issue that was fixed.

  8. Matt Ryall staff

    Yes, there's been some good news for Docker users on Pipelines lately.

    Last week, we changed the filesystem driver used by the Pipelines Docker-in-Docker daemon (the one used for Docker commands within Pipelines), switching from vfs (the default for DinD) to overlay2. This has the benefits of matching how our hosts are configured and also dramatically improving the performance of disk operations during Docker builds.
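You can confirm which storage driver a Docker daemon is using from inside a build step. A one-line check (this queries whatever daemon the Docker CLI is pointed at):

```shell
# Prints the daemon's storage driver, e.g. "overlay2" or "vfs".
docker info --format '{{.Driver}}'
```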

    We're also starting work on Docker image caching this week, which we hope to have available to customers in February. This is planned to work automatically, creating a docker cache behind the scenes that works the same as our other build caches, and will store up to 1 GB of image layers across builds.
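If it works the same as the other build caches, enabling it would presumably be a per-step setting in `bitbucket-pipelines.yml`. A sketch, assuming the predefined cache keyword is `docker` (the image name is illustrative):

```yaml
pipelines:
  default:
    - step:
        services:
          - docker
        caches:
          - docker   # predefined cache holding up to 1 GB of image layers
        script:
          - docker build -t my-app .
```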

    Thanks for your continued interest in this issue.

  9. Nick Boultbee

@Matt Ryall thanks for the update. We suspected it might be a filesystem driver change: everything is faster, including pulling layers and "lightweight" Docker build steps.

    Personally, we could do with more than 1 GB of Docker cache, but I'm sure this will increase eventually.

    Anyway, good news; we look forward to further updates.
