
      [BCLOUD-14144] Cache Docker layers between builds

      Builds that build and push Docker images could be faster if the layers of the image were cached between builds.
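      For reference, the predefined docker cache being requested here is enabled per step in bitbucket-pipelines.yml. A minimal sketch (the image name and build command are placeholders, not taken from this issue):

            pipelines:
              default:
                - step:
                    services:
                      - docker
                    caches:
                      - docker   # predefined cache that persists Docker layers between builds
                    script:
                      - docker build -t my-app .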

            Matt Robinson added a comment -

            I'm getting the same output as @George Boot. My pipeline seems to have cached layers up to some arbitrary point in my first stage and then won't cache any more because the cache "already exists".

            brettmichaelorr added a comment -

            Attachment 2614718555-Capture.PNG added (originally embedded in Bitbucket issue #14144 in site/master).

            Shidi Xu added a comment -

            Attachment 4059371088-Screen Shot 2019-06-09 at 2.23.46 am.png added (originally embedded in Bitbucket issue #14144 in site/master).

            Aneita added a comment -

            Hey 740a1f9b5dc2,

            Your cached docker image is ‘702.2MiB over the 1GiB upload limit’.


            Shidi Xu added a comment -

            In my case the build cache is around 700 MB after compression, but it still reports being over the 1GB limit. Can someone elaborate?
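            One way to sanity-check this locally (a rough sketch, not from this thread; it assumes the same images are loaded in a local Docker daemon and only approximates what Pipelines actually archives):

            # Save every image in the daemon, compress the stream, and count bytes
            # to estimate the compressed size of the layer cache.
            docker save $(docker image ls -aq) | gzip | wc -c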

            parerikssoneidentitet added a comment -

            We use multi-stage builds, and the built-in cache only seems to cache the first target:

            + docker build --target app -t app .
            Sending build context to Docker daemon    123MB
            Step 1/17 : FROM php:7.0-fpm-alpine as base
            7.0-fpm-alpine: Pulling from library/php
            Digest: sha256:b8ddafa001be63c0665e7c8501bdade02f29e77ceff88c57d9f142692d6401bb
            Status: Downloaded newer image for php:7.0-fpm-alpine
             ---> f8f280d888a9
            Step 2/17 : RUN ...
             ---> Using cache
             ---> 4e2ab674f07d
            Step 3/17 : ENV PATH="/app/vendor/bin:${PATH}"
             ---> Using cache
             ---> adcbff086e7c
            ... # When we do FROM again it will no longer use the cache
            Step 7/17 : FROM base as composer
             ---> 15f8d021ac02
            Step 8/17 : RUN mkdir /tmp/composer && chmod 777 /tmp/composer
             ---> Running in adbe2935d2f9
            Removing intermediate container adbe2935d2f9
             ---> b1587dff3fd4
            Step 9/17 : ENV COMPOSER_HOME=/tmp/composer
             ---> Running in 18b1b49e35a7
            Removing intermediate container 18b1b49e35a7
             ---> 22eb7b4f847d
            Step 10/17 : COPY --from=composer /usr/bin/composer /usr/bin/composer
            ...
            

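            A workaround sometimes used for multi-stage builds (a sketch, not something confirmed in this thread; $IMAGE is assumed to point at your registry repository) is to build and tag each stage explicitly, then pass every tag back in via --cache-from so the later stages can hit the cache too:

            # Seed the local daemon with the previous stage images; tolerate missing tags on the first run.
            docker pull $IMAGE:base || true
            docker pull $IMAGE:app || true
            # Build the intermediate stage to its own tag so its layers exist as an image...
            docker build --target base --cache-from $IMAGE:base -t $IMAGE:base .
            # ...then the final stage, using both tags as cache sources.
            docker build --target app --cache-from $IMAGE:base --cache-from $IMAGE:app -t $IMAGE:app .
            # Push both tags so the next build can pull them as cache sources.
            docker push $IMAGE:base
            docker push $IMAGE:app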

            George Boot added a comment -

            Same here. On the first build, it builds a cache and outputs some more detail on which layers are getting cached, etc.
            Then, on the next build, it downloads the cache but still runs the docker build from the first step onwards, without using the cached layers.

            Also, after a build, it will not update the cache:

            Skipping assembly of docker cache as one is already present
            Cache "docker": Skipping upload for existing cache
            

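            Those "Skipping upload for existing cache" lines reflect documented behaviour: once a Pipelines cache has been uploaded it is not updated again; it is only rebuilt after it expires (after a week) or is deleted. Caches can be deleted from the Caches panel in the Pipelines UI, or via the REST API; a hedged sketch follows (the endpoint path is my assumption from the Bitbucket Cloud API docs, so verify it before relying on it):

            # List the repository's pipeline caches to find the cache UUID
            # (endpoint path assumed; check the current Bitbucket Cloud REST API docs).
            curl -s -u $USER:$APP_PASSWORD \
              "https://api.bitbucket.org/2.0/repositories/$WORKSPACE/$REPO/pipelines-config/caches/"
            # Delete one by UUID so the next build rebuilds and re-uploads it.
            curl -s -u $USER:$APP_PASSWORD -X DELETE \
              "https://api.bitbucket.org/2.0/repositories/$WORKSPACE/$REPO/pipelines-config/caches/$CACHE_UUID"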

            Maxim Karavaev added a comment -

            @kmacleod thanks, but there is still no reason given for the failure:

            Docker images saved to cache
            Cache "docker": Compressing
            Cache "docker": Compressed in 21 seconds
            Cache "docker": Uploading 381.5 MiB
            Cache "docker": Upload failed
            


            Kenny MacLeod added a comment -

            Folks,

            As of today, Pipelines is logging a bit more information about the assembly of the Docker layer cache in the Teardown section of the build. It now displays the reasoning behind whether or not the cache will be built and uploaded, as well as which images will be used to build the cache.

            Hopefully this will make this feature a little more transparent.


            Raul Gomis added a comment -

            Hey,

            @pavelsavshenko, sorry to hear that. We are not throttling and the speed should be much higher than just 10MiB/s. If the problem still persists, I would recommend raising a support case so that we can analyse your specific case.

            @mochnatiy at the moment, in order to cache docker layers, the following conditions must be met:

            • The layer cache has to be < 1GB compressed
            • The size of the images in the docker daemon must be < 2GB for a cache to be created. You can check this by adding the following command to your yml (see the sketch after this list):

              docker image inspect $(docker image ls -aq) --format {{.Size}} | awk '{totalSizeInBytes += $0} END {print totalSizeInBytes}'
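            Dropped into a step's script, that check might look like this (a sketch; the build command is a placeholder, and the docker service is assumed to be enabled on the step):

            script:
              - docker build -t my-app .
              # Sum the sizes (in bytes) of every image in the daemon;
              # the total must stay under 2GB for a cache to be created.
              - docker image inspect $(docker image ls -aq) --format {{.Size}} | awk '{totalSizeInBytes += $0} END {print totalSizeInBytes}'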

            Also, bear in mind that Docker 1.13 introduced a new option to the docker build command, --cache-from, which lets you specify one or more tagged images as a cache source. The image generated in the build can also be used as a cache source in a later Docker build. This can improve the performance of your build and save build minutes, without being subject to the 1GB limit.

            Example:

            docker build \
              --cache-from $IMAGE:latest \
              --tag $IMAGE:$BITBUCKET_BUILD_NUMBER \
              --tag $IMAGE:latest \
              .
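            For --cache-from to take effect, the tagged image has to be present in the local daemon, so a complete step usually pulls it first and pushes the fresh tags afterwards. A sketch building on the example above ($IMAGE is assumed to be set, e.g. as a repository variable):

            # Seed the daemon with the previous image; tolerate a missing tag on the very first run.
            docker pull $IMAGE:latest || true
            docker build \
              --cache-from $IMAGE:latest \
              --tag $IMAGE:$BITBUCKET_BUILD_NUMBER \
              --tag $IMAGE:latest \
              .
            # Push both tags so the next build can pull :latest as its cache source.
            docker push $IMAGE:$BITBUCKET_BUILD_NUMBER
            docker push $IMAGE:latest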
            

            Regards,
            Raul


              Assignee: Unassigned
              Reporter: Joshua Tjhin (xtjhin)
              Votes: 67
              Watchers: 70