Building and pushing Docker images

Issue #12790 resolved
Mike Smith
created an issue

I realise this is not currently supported. Is there a workaround for it, such as using a third-party service or something like that? Are there any plans to support this in the future, perhaps with Docker-in-Docker support?

Comments (117)

  1. Colin Hebert

    Not sure whether that's already there, but something that should be doable in the meantime is to use docker-machine (to create a Docker server somewhere else) and do docker-in-docker against it.

  2. Patrick Carriere

    Kind of needing this ASAP. I would like to be able to build a Docker image, copy my build artifacts into it and push it to a registry. Using some workarounds for now, but I wish I could do it all in Pipelines.

  3. nasskach

    In my opinion, Pipelines is of little use if it doesn't allow building an image from a Dockerfile and running tests using that image.

    Here is the workflow that I imagine for the CI:

    • A base Dockerfile that contains common needs (software, system libs, prod packages...).
    • A test Dockerfile based on the base image + dev config/dev dependencies => we run tests on that image.
    • A prod Dockerfile also based on the base image + prod config => we push that image to the registry (if the previous step was successful, of course).
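
    A minimal sketch of that layered workflow as a shell script (the Dockerfile names, image tags and test command are illustrative assumptions, not part of the original comment):

    #!/bin/sh
    set -e  # stop at the first failure, so the prod image is only built and pushed if the tests pass

    # Base image with the common software, system libs and prod packages.
    docker build -f Dockerfile.base -t myapp-base .

    # Test image: FROM myapp-base plus dev config/dependencies; run the tests inside it.
    docker build -f Dockerfile.test -t myapp-test .
    docker run --rm myapp-test ./run-tests.sh

    # Prod image: FROM myapp-base plus prod config; push it to the registry.
    docker build -f Dockerfile.prod -t registry.example.com/myapp:latest .
    docker push registry.example.com/myapp:latest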

  4. Tobias Rös

    We also think this is a required feature. Currently we use drone.io to build and push our containers, but using two build solutions can't be the answer.

  5. Nick Humrich

    This feature would be amazing. To date I can't find any CI that supports both building Docker images and pipelines. It might be the straw that finally gets my whole org to move over from GitHub.

  6. Mike Smith reporter

    For anyone needing to work around this issue, we did it by launching an ec2 instance running docker engine secured by certificates and our pipelines build image has the docker client baked into it. Works like a charm.

  7. Mike Smith reporter

    We run a Docker server in EC2 with a well-known DNS A record pointing to it, i.e. docker-build.example.com. To secure your Docker server, you need to follow the instructions here: https://docs.docker.com/engine/security/https/. Basically, you generate your certificates, use them when you launch the Docker engine with TLS verification enabled, and bake them into your build image for the client. Here's a systemd unit to launch the Docker engine:

    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network.target docker.socket
    Requires=docker.socket
    
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=/etc/docker/ca.pem  --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem
    ExecReload=/bin/kill -s HUP $MAINPID
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    # Uncomment TasksMax if your systemd version supports it.
    # Only systemd 226 and above support this version.
    TasksMax=infinity
    TimeoutStartSec=0
    # set delegate yes so that systemd does not reset the cgroups of docker containers
    Delegate=yes
    # kill only the docker process, not all processes in the cgroup
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    

    There's not much of interest in the bitbucket-pipelines.yml file. We tend to drive everything using our build image and scripts.

    Here's the boilerplate for setting up the Docker client in a build image for a Java 8 project. Add the rest of your toolchain below the boilerplate:

    FROM openjdk:8
    
    RUN apt-get update; \
        apt-get install apt-transport-https ca-certificates -y ;\
        apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D; \
        echo 'deb https://apt.dockerproject.org/repo debian-jessie main' > /etc/apt/sources.list.d/docker.list; \
        apt-get update; \
        apt-get install docker-engine -y
    
    RUN mkdir -p /root/.docker
    
    ADD docker_build/salt/ssl/key.pem /root/.docker/key.pem
    ADD docker_build/salt/ssl/ca.pem /root/.docker/ca.pem
    ADD docker_build/salt/ssl/cert.pem /root/.docker/cert.pem
    
    ENV DOCKER_HOST=tcp://docker-build.example.com:2376 DOCKER_TLS_VERIFY=1
    

    Then your build script can run docker build ... and docker push ... to wherever you host your Docker repo, such as Artifactory or ECR.
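
    As a rough sketch of such a build script (the registry host and image name are assumptions; BITBUCKET_COMMIT is a default Pipelines variable):

    #!/bin/sh
    set -e
    # DOCKER_HOST and the TLS certs baked into the build image point the docker CLI
    # at the remote engine, so build and push run against docker-build.example.com.
    docker build -t docker-registry.example.com/myapp:"$BITBUCKET_COMMIT" .
    docker push docker-registry.example.com/myapp:"$BITBUCKET_COMMIT"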

    That's about it. It all works really well so far.

  8. Christian Metzler

    I think the only thing you need is a way to start a docker-machine or some other virtual machine from the build pipeline... Then you could use build tools like Gradle or Maven for building, integration testing, pushing images and so on.

  9. Christian Günther

    We used Bamboo Cloud, which will be discontinued, and so we have to switch to another build server. Because of this missing feature we can not use Pipelines. After setting up another build server we will not come back to Pipelines - sorry Pipelines.

  10. Andy O'Brien

    +1

    Our project contains several Docker containers that a developer can modify; we would like these to be rebuilt each time to add a layer of testing on top of the code.

  11. William Ward

    It's fairly easy to spin up KVM inside a Docker container. I feel certain Atlassian could add an attribute to the pipeline YAML file that would instruct their servers to launch a dind container (or whatever image you give it) inside a containerized KVM instance with Docker installed. Everyone's Docker instances would be separated.

  12. Matt Hartstonge

    @Abdallah Hamdy @vsa-nick They aren't going to do that. From the Docker docs: "When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host"

    That's a massive security hole ;)

    The best way is to map the host's Docker socket into the container and have a Docker client inside the container, but again, this means the client could fire up a new container mapping -v /:/ and then play around as root with full access to every directory on the host. This is not an easy problem to fix...
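
    For reference, the socket-mapping approach looks roughly like this (the image used is illustrative) - and it is exactly this mount that hands the container full control of the host's daemon:

    # Give a container access to the host's Docker daemon by mounting its socket.
    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps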

    Essentially, a siloed VM will have to be auto-generated for each user on build and trashed afterwards, to avoid escaping out of the containers and into Atlassian's infrastructure. @William Ward is onto it. How that works en masse with AWS... ?

    We've moved CI/CD to an AWS node with the open-source drone.io as our Docker image builder, with hooks connected to Bitbucket. That way it's our Docker; if we fubar it, well... tough, haha. So far, really solid.

  13. SvenS

    +1 For us this feature is also very important to set up a release pipeline. Without it, Pipelines is only useful to a certain extent - only for executing the tests after a commit. Is there any time frame?

  14. Srinath Sankar

    +1! The tagline for pipelines is "Build, test and deploy from Bitbucket" and without the ability to build an image and run tests on it, I guess only the deploy step is currently supported.

  15. Derek Smith

    +1 I talked with the team at AWS re:Invent this week. They spoke of a bunch of new features coming out soon. Hopefully this is one of them. If I can't build a docker image and push it to a repository, that is a deal breaker for me.

  16. Janne Nykänen

    Interesting approach Chris, and this works. However, I recommend every AWS user use CodeBuild instead - it takes the good parts of Pipelines, but is more mature and can build and push Docker images out of the box.

  17. Janne Nykänen

    Danilo - if you need to keep Bitbucket (at least for now) and you can't or don't want to migrate to AWS CodeCommit (but you use AWS), the easiest current option I've found is to build Docker images by communicating with a CodeBuild project via S3 and API calls, instead of setting up and maintaining your own Docker build server and exposing its HTTP API to Bitbucket Pipelines. If you're willing to migrate the repository, other options include GitLab and/or Docker Hub.

  18. Ben Record

    Thanks Janne, We've been watching this issue since August. Our stack is monolithic on AWS except for source control. We'll be migrating to CodeCommit since Atlassian has been silent on this issue. Best of luck to those still experiencing this problem.

  19. Alper Kanat

    I'm also on the waiting list for Atlassian to allow Docker builds. For those who expect an integration between AWS and Bitbucket, you don't need to wait: use a Docker image with the AWS CLI installed, build your source code in Bitbucket Pipelines and zip the contents of the directory. Push it to an S3 bucket, and from there on you can do pretty much anything on AWS. I'm able to build Docker images via CodeBuild and trigger it from the Bitbucket Pipelines YML file. However, I want a full pipeline, so I'm not planning to integrate CodeDeploy or AWS CodePipeline.
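
    A rough sketch of that hand-off in bitbucket-pipelines.yml (the build image, bucket name and CodeBuild project name are assumptions; AWS credentials would come from repository variables):

    image: atlassian/pipelines-awscli

    pipelines:
      default:
        - step:
            script:
              # Package the checked-out source and hand it to AWS.
              - zip -r source.zip .
              - aws s3 cp source.zip s3://my-build-artifacts/source.zip
              # Kick off the CodeBuild project that runs the docker build and push.
              - aws codebuild start-build --project-name my-docker-image-build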

  20. William Ward

    Also, Docker Cloud has the ability to run scripts before and after the build process. I've been able to successfully use it for some of my two-step builds.

    -- Wills Ward

  21. Lou Greenwood

    @Sten Pittet

    I don't understand why Atlassian chose to EOL Bamboo Cloud before they added this to Bitbucket Pipelines - so many customers are going to get screwed over by this (and we're just the vocal minority).

    Surely the best solution is to privately reach out to anyone who voices their concern over the lack of docker build capabilities in BB Pipelines and provide continued use of Bamboo cloud until that is done.

    At the moment Atlassian are making their problem the customer's problem, and are not even providing any insight into whether this is actually ever going to be available on BB Pipelines, so it's very hard to find the appropriate solution, i.e. should we:

    • Hack a working docker build setup in anticipation of an eventual, but unannounced BB Pipelines Docker build support (cluster fuck)
    • Migrate to Bamboo Server, and then later migrate back to BB Pipelines (double headache)
    • Just move to Bamboo server indefinitely (pain in the arse)

    This is an absolute mess - fair enough, notice was given, but not the right kind of notice...

  22. Chris Knight

    Massive fail. Extend Bamboo Cloud until this is fixed!!! I'm not moving to Bamboo Server only to then have to move to Pipelines. Think I'll just move once - to something else.

  23. Joshua Tjhin

    Hi everyone,

    Appreciate all the feedback. Firstly, I'd like to say that building applications as Docker containers is an important feature for us and definitely a capability we would like to add to Pipelines. However, as some have mentioned on this issue, this is more complex to add securely with our architecture. I can't comment on specific dates but we would like to start this work in the first half of 2017.

    @Mike Smith's suggestion looks like a great workaround! I'll work with our team to add it to our documentation as a temporary workaround until we can support Docker build and push in a better way.

    For those migrating from Bamboo Cloud, unfortunately it won't be possible for us to extend the EOL date, as we wouldn't be able to provide the necessary maintenance and security updates that our customers require. Our recommended migration path is still Bamboo Server and not Pipelines, because it offers the exact same features and fewer restrictions. That is why we provided a free perpetual Bamboo Server license with 1 year of maintenance to most customers. Our migration hub also provides help on running Bamboo Server in the cloud on AWS. Meanwhile, our team will continue to work hard on adding more powerful capabilities to Pipelines.

    Thank you!

    Regards, Joshua Tjhin, Bitbucket Pipelines PM

  24. Jarrold Ong

    If you just want to build your image and push it to docker hub, another way is to:

    1. Create an automated build repo in Docker Hub
    2. Configure remote build triggers
    3. Add the curl command to your bitbucket-pipelines.yml

    So now when you commit your code, it will trigger the pipeline, which will trigger the autobuild in Docker Hub.
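
    The trigger call itself is a single curl, roughly like this (the trigger URL and token are placeholders copied from the repository's Build Triggers settings page on Docker Hub):

    curl -H "Content-Type: application/json" \
         --data '{"source_type": "Branch", "source_name": "master"}' \
         -X POST https://registry.hub.docker.com/u/<your-user>/<your-repo>/trigger/<trigger-token>/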

    If you are using Docker Cloud you can go one step further and add autoredeploy, which will automatically redeploy your service whenever a new image is pushed or built. And you can also hook all of this up with Slack and get some pretty cool notifications out of the box.

    One issue is that Docker Hub throttles the build triggers, so if it's currently building an image it will ignore any subsequent triggers until the build completes. So if you push 2 branches at the same time, only the first will build. Also, if you have a lot of people trying to build at the same time, this setup might not be so ideal for you. So it's better to only trigger the autobuild on specific branches where you know you are going to use the images, e.g. for autoredeploy.

  25. Joshua Tjhin

    Hi all,

    We've started planning for this feature and I'd like to get a few more detailed use cases to help us build it.

    If you are available in the next two weeks for a call to discuss how you build your application as Docker containers, please send me an email at joshua.tjhin@atlassian.com with a short summary of what you use Pipelines for. I'd like to use this feedback to validate our plans against.

    Thanks! Joshua

  26. Erin Drummond

    Awesome to hear Joshua!

    Our current build system is based on quay.io because our "build artifact" is the Docker image itself - which we then deploy to a Rancher cluster. We couldn't even consider moving to Pipelines unless we can access the Docker image at the end of the build process.

    In fact, the entire concept of defining build steps in bitbucket-pipelines.yml is not necessary - we have Dockerfiles in our repositories to do that. All Pipelines would need to do is inject its environment variables and run a docker build, and then either store the resulting image or push it to a 3rd party docker registry such as dockerhub.

    After the image is built, the ability to run a container would be handy for deploying the image automatically - this container would get injected with environment variables containing the image name / build details, so that we could run whatever command we need in order to deploy it.

  27. Marc Sluiter

    @xtjhin Since calls might be difficult because of timezones and limited time, here is a short description of what we need and what we used to do with Bamboo:

    • We have Java, Golang and NodeJS based projects

    • For Java and Golang the first build artifact is a runnable jar built by Gradle or an executable binary built by Go, which we want to publish. That's possible already with https://confluence.atlassian.com/bitbucket/deploy-your-pipelines-build-outputs-to-bitbucket-downloads-872124574.html

    • For many of the projects we also want to have a docker image, which we can deploy to our kubernetes cluster:

      • For the Java projects that means: use a small Java base image, add the jar and config files if any, and configure entrypoint / cmd.
      • For Golang it's similar: use a small Linux base image (e.g. Alpine), add the executable and configs, and configure the entrypoint / cmd (see the sketch after this list).
      • For NodeJs it's also a small base image with added build artifacts (e.g. compiled Angular Frontend).
      • All images should not contain build dependencies like sources, intermediate artifacts, Golang / Node itself, etc., in order to keep the image as small as possible.
      • At the end the image should be pushed to a Docker repository, so that it can be used by the Kubernetes cluster. For continuous deployment we use Docker Hub webhooks at the moment.
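
    For illustration, a minimal sketch of that "small base image + prebuilt artifact" pattern for the Go case (the binary name, paths and port are assumptions):

    FROM alpine:3.5
    RUN apk add --no-cache ca-certificates
    COPY build/myservice /usr/local/bin/myservice
    COPY config/ /etc/myservice/
    EXPOSE 8080
    ENTRYPOINT ["/usr/local/bin/myservice"]
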
  28. Joshua Tjhin

    Thanks everyone for your emails and comments. We're making good progress on adding this in a secure way.

    I have a few specific questions which I'd like to get more input on:

    • Does anyone build a Docker image and then run tests against the image? Or does everyone run their tests before building the image?
    • If anyone relies on docker run, please let me know your use case via comment or email.

    Appreciate all the feedback!

  29. Matt Hartstonge

    @xtjhin I think I mentioned this somewhere in the beta thread:

    When I heard of BitBucket Pipelines, I believed it would be an actual pipeline, that is:

    • test
    • build
    • deploy

    So, in my opinion, I believed it would be like the following (as I prototype in my head - heh):

    test:
      image-matrix:
        - node:4
        - node:6
        - node:7
    
      depends_on:
        mariadb:
           image: mariadb:latest
           cmd: ./start-db ?
           name: mysql
           ports:
             - "3306:3306"
        mongodb:
           image: mongodb:3.2
           name: mongodb
           ports:
             - "27017:27017"
        rabbitmq:
           image: rabbitmq:latest
           name: rabbit
           ports:
             - "3333:6666"
    
      branches:
        development:
          # Image implicitly decided based on matrix above
          cmd:
            - npm install
            - npm run test
          links:
            - mysql
            - rabbit
    
    build:
      branches:
        development:
          file: Dockerfile.development.yml
          registry: myowndocker.mydomain.com
          username: s*cret
          password: s*cret
          # Denotes whether or not to build on test based failure
          on_success: false
    
        production:
          file: Dockerfile.production.yml
          registry: myowndocker.mydomain.com
          username: s*cret
          password: s*cret
          # Denotes whether or not to build on test based failure
          on_success: true
    
    deploy:
      branches:
        development:
          rancher:
            environment: development
            api_url: api.myrancher.com
            access_key: adevaccesskey
            secret_token: s*crettoken
            # Denotes whether or not to deploy on build based failure
            on_success: true
    
        production:
          kubernetes:
            environment: production
            api_url: api.myrancher.com
            access_key: aproductionaccesskey
            secret_token: s*crettoken
            # Denotes whether or not to deploy on build based failure
            on_success: true
    

    Hope this brings better clarity! :)

  30. Erin Drummond

    Hi Joshua, we run the unit tests as part of the build, so they're just another step in the Dockerfile's list of commands to execute. Integration tests are a bit trickier - often they depend on things like message queues being available. It would be very handy to be able to docker run things as part of a build - that way you don't have to pull in things via a package manager to run them and then uninstall them in a later step to keep the image size small.

  31. Hannes Löhrmann

    Hi @xtjhin,

    our workflow looks like this:

    The most important thing for us is that we can make sure the new image has been pushed to the private Docker Hub before the new task definition is created at AWS.

  32. Lee Hull

    This is how we build, test and deploy; currently we use TeamCity to do the following:

    • run npm test (unit tests)
    • run docker-compose to bring up my service and my dependencies (mongo)
    • run a separate docker container that runs cucumber tests against my running service, this is for integration tests
    • attach artifacts (zipped test results) to JIRA issue (we have a branch naming standard so we know the issue key, haven't found another way to determine task from branch)
    • push image to docker hub if branch is master (latest and commit id gets tagged)
    • REST call to our custom automation web service to deploy new version if branch is master
  33. Chris Dornsife

    We have Docker images building on a remote server. Today the "DOCKER_HOST" variable we have set is being overwritten with a localhost value by the pipeline when it starts. This broke all of our pipelines. We now have to manually add DOCKER_HOST in each pipeline to override what Pipelines sets. Have you added a sidecar that has Docker on board yet, or is this an experiment that made it into your production systems?

  34. naumanh

    I would also like to echo what @Matt Hartstonge mentioned. I think the major limitation for BitBucket Pipelines at the moment is that you cannot actually create a pipeline with multiple stages and steps.

    @xtjhin But to answer your specific question: we most often run the tests and then build and push the Docker image.

    It would also be nice if you could tag stages as "manual", which would require a trigger within the pipeline for kick-off. Some of our builds have not reached full CD and would need to depend on this trigger.

  35. Joshua Tjhin

    Thanks everyone for your feedback and use cases! We're getting closer.

    Hi @Chris Dornsife,

    Very sorry about this. This was changed as part of the work to add support for this feature. Part of the implementation requires us to set the $DOCKER_HOST variable as a default variable which overrides variables defined in the docker image. You are still able to override this as a repository variable or team variable (sets the variable for all repositories in the team).

  36. Lou Greenwood

    @xtjhin What's important to us is to be able to:

    • Store private keys in BB and allow our build scripts to access the values of keys to inject into config files
    • Build multiple docker images on Pipelines in one build run
    • Push docker images to AWS Elastic Container Service repo
    • Use AWS CLI to control AWS ECS
  37. Joshua Tjhin

    Hi all,

    Good news! Firstly, we appreciate everyone's patience on this issue. It was important that we add support for Docker in a secure way.

    Last week, we launched an Alpha group to provide early access to upcoming Pipelines features. Find out more and sign up here.

    Happy to mention that the first 2 features available to the alpha group are:

    Please sign up and give it a try and give us your feedback!

    Regards,
    Joshua Tjhin
    Bitbucket Pipelines PM

    p.s. @Lou Greenwood Last week we also added a way to let Pipelines generate and store SSH keys to be used in your build!

  38. Lou Greenwood

    Good to know @xtjhin - great timing too, as I spent my entire weekend setting up Bamboo Server - maybe you shouldn't have killed Bamboo Cloud last week, and instead waited until this alpha release was ready... :)

    Also, I meant storing private keys and environment variables to inject into scripts, but ssh key generation sounds like a handy feature in any case.

  39. Joshua Tjhin

    Managing the alpha group is currently slightly manual; I'll process additions every 24-48 hours. You'll receive an email confirmation once your team has been activated.

    @Lou Greenwood that's exactly what the SSH key feature provides - Pipelines can generate an SSH key pair, keeping the private key safe and injecting it into your scripts, or you can upload your own private key.

  40. Radek Grebski

    Started using it, works just perfectly, thanks!! FYI my scenario is: after a push, the pipeline runs a code-style check and unit tests, builds a Docker image, pushes it to a private repo, and triggers an AWS update to use the new image.

  41. Sam Caldwell

    (1) I am loving Pipelines.

    (2) However, as an Atlassian, I see a definite need to build container images so we can then push them to a docker hub for various projects.

    (3) As part of my personal projects, I use Docker images religiously to keep my costs low. Being able to build images and push them to my DigitalOcean servers would go a long way toward helping me save $$ while hacking away at various hobbies.

    (4) I have talked to others in the industry who are staying with Bamboo because pipelines will not allow them to build containers.

  42. Matthias Wutte

    Great work on the alpha version! It does exactly what we needed. On some branches we build a Docker image right after the tests and push it to a private repository. The only thing I noticed is that the build is kind of slow, because there seems to be no cache for the layers of the image. But I guess this is a problem that's hard to solve...

  43. Igor Bljahhin

    My build is trying to push a Docker image to Docker Hub, but always gets an "authentication required" error message:

    docker push boracompany/login
    + docker push boracompany/login
    The push refers to a repository [docker.io/boracompany/login]
    f1e336c6587c: Preparing
    eb90119d873a: Preparing
    27f36920af49: Preparing
    23b9c7b43573: Preparing
    unauthorized: authentication required
    

    My bitbucket-pipelines.yml is the following:

    image: maven:3.3.9
    
    pipelines:
        default:
            - step:
                image:
                    name: maven:3.3.9
                    username: $DOCKERHUB_USERNAME
                    password: $DOCKERHUB_PASSWORD
                    email: $DOCKERHUB_EMAIL
                script:
                    - mvn -B clean install
                    - docker build -t boracompany/login .
                    - docker push boracompany/login
    
    options:
      docker: true
    

    Variables DOCKERHUB_USERNAME, DOCKERHUB_PASSWORD and DOCKERHUB_EMAIL are defined in account variables.

    What did I miss?
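
    A likely cause, assuming the account variables themselves are correct: the username/password under image: only authenticate pulling the build image; they don't log the Docker CLI in for the push. Adding a docker login to the step's script usually fixes it, roughly:

    script:
        - docker login -u $DOCKERHUB_USERNAME -p $DOCKERHUB_PASSWORD
        - mvn -B clean install
        - docker build -t boracompany/login .
        - docker push boracompany/login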

  44. Joshua Tjhin

    Thank you all for using this in alpha. Glad that I can now mark this ticket as resolved! Sorry that the feature took a while for us to ship. Docker commands can now be used in a repository's pipeline by simply adding the docker option to your bitbucket-pipelines.yml:

    options:
      docker: true
    

    This will give you access to the Docker daemon and mount the Docker CLI into your build container (so you don't need to bake it into your build image).
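
    Put together, a minimal build-and-push pipeline now looks roughly like this (the image name and registry credential variables are illustrative):

    options:
      docker: true

    pipelines:
      default:
        - step:
            script:
              - docker build -t myorg/myapp:$BITBUCKET_COMMIT .
              - docker login -u $DOCKERHUB_USERNAME -p $DOCKERHUB_PASSWORD
              - docker push myorg/myapp:$BITBUCKET_COMMIT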

    Tomorrow our launch blog will come out. We would appreciate your help making this announcement big with retweets and stories about how Pipelines is helping your team. This will help us continue investing in improving Pipelines.

    From some of the feedback that we received, I've created additional issues. Please vote and watch them to get updates if they interest you. If I've missed any, please raise new tickets :)

    Thanks,
    Joshua
