Support Amazon ECR for build image

Issue #13024 resolved
Michael Juliano created an issue

The Amazon implementation of a Docker registry automatically generates the docker login command via a call to the AWS API. The credentials it generates expire, making them impractical to copy out of that command into a static bitbucket-pipelines.yml file. As far as I can tell, there is no way to configure Amazon to behave differently, so it would help if we could specify AWS credentials as follows:

image:
    name: <aws-ecr-image>
    aws_login:
        access_key_id: <access_key_id>
        secret_access_key: <secret_access_key>
        region: <region>

Then Pipelines could generate a file at ~/.aws/credentials that looks like this:

[default]
aws_access_key_id = <access_key_id>
aws_secret_access_key = <secret_access_key>

Then make the following AWS call to get the credentials and login:

eval $(aws ecr get-login --region <region>)

It could then proceed to download the Docker image and continue normally.
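Taken together, the proposed sequence would look something like the sketch below. The credentials are placeholders, and a temp directory stands in for ~/.aws so the snippet is self-contained; Pipelines itself would write the real file and run the commented-out commands.

```shell
# Placeholder credentials for illustration only.
aws_dir=$(mktemp -d)   # Pipelines would use ~/.aws instead
cat > "$aws_dir/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = exampleSecretKey
EOF

# Pipelines would then authenticate Docker and pull the build image:
# eval $(aws ecr get-login --region <region>)
# docker pull <aws-ecr-image>
cat "$aws_dir/credentials"
```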

Alternatively, the values could be settings on the server side to avoid sharing all that information in a file in source control.

Official response

  • Sebastian Cole staff

    We've just enabled ECR support for all customers – currently in the process of releasing the documentation – you heard it here first!

    YAML format is:

    image:
      name: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/java:8u66
      aws:
        access-key: $AWS_ACCESS_KEY
        secret-key: $AWS_SECRET_KEY
    

    cheers.
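    For context, a complete bitbucket-pipelines.yml using this format might look like the sketch below (the account ID, region, and variable names are placeholders, not documented defaults):

```yaml
# Placeholder account ID and region; $AWS_ACCESS_KEY / $AWS_SECRET_KEY are
# secured repository variables you define yourself.
image:
  name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/java:8u66
  aws:
    access-key: $AWS_ACCESS_KEY
    secret-key: $AWS_SECRET_KEY

pipelines:
  default:
    - step:
        script:
          - java -version
```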

Comments (30)

  1. Sebastian Cole staff

    Hey @Michael Juliano, thanks for the suggestion! We're always looking for ways to improve.

    A work around for you to try now is to use Bitbucket Pipeline Variables (see the "User-Defined Variables" section of https://confluence.atlassian.com/display/BITBUCKET/Environment+variables+in+Bitbucket+Pipelines) to create variables:

    AWS_ACCESS_KEY_ID

    AWS_SECRET_ACCESS_KEY

    AWS_SESSION_TOKEN (Only needed with Role based access keys - eg. ASIA keys)

    AWS_DEFAULT_REGION

    see for reference: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment

    After that, eval $(aws ecr get-login) will work as you expect.

    I'll leave this ticket open for now to gather more feedback.
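    To make the workaround concrete, a step using those variables might look like the sketch below. It assumes the step image already has the aws CLI installed and the docker service is enabled; the image names and registry URL are placeholders.

```yaml
pipelines:
  default:
    - step:
        # Any image with the aws CLI pre-installed
        image: atlassian/pipelines-awscli
        services:
          - docker
        script:
          # Reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY /
          # AWS_DEFAULT_REGION from the repository's Pipelines variables
          - eval $(aws ecr get-login)
          - docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest
```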

  2. Michael Juliano reporter

    Hi Sebastian,

    I'm pretty sure the eval statement needs to be run by the Bitbucket Pipelines process before it can download the correct Docker image within which to run user supplied code. Is there a way to customize the code run at that stage of the process?

    Thanks, Michael

  3. Michael Holt

    I just built an image which, among other things, had the awscli installed so that I could push to ECR. Of course once I was finished and went to store it on ECR I was met by a moment of cold reality when I realized I'd need that same awscli to pull the image I had just created, creating a catch-22. Native integration to at least pull from AWS would be great.

  4. Daryl Stultz

    I created environment variables for AWS_ECR_PASSWORD and AWS_ECR_USER (which always equals "AWS"). Then my YAML file starts with:

    image:
      name: MYACCOUNT.dkr.ecr.us-east-1.amazonaws.com/MYREPO:0.0.5
      username: $AWS_ECR_USER
      password: $AWS_ECR_PASSWORD
      email: notreally@needed.com
    

    Since the AWS_ECR_PASSWORD expires after 12 hours, I have a cron job on my dev machine that does this:

    TOKEN=`aws ecr get-login | cut -d' ' -f6`
    PAYLOAD="{\"value\":\"$TOKEN\"}"
    curl --user ME:PASSWORD -X PUT -H "Content-Type: application/json" -d "$PAYLOAD" https://api.bitbucket.org/2.0/repositories/MYCOMPANY/MYREPO/pipelines_config/variables/VAR_ID
    

    Documentation for the API call is here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Busername%7D/%7Brepo_slug%7D/pipelines_config/variables/%7Bvariable_uuid%7D#put
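    The cut -d' ' -f6 extracts the password because aws ecr get-login prints a complete docker login command (docker login -u AWS -p <token> ...). A minimal sketch of that parsing, using a mocked get-login output rather than a real AWS call:

```shell
# Mocked output of `aws ecr get-login` (hypothetical account/region);
# a real cron job would capture the command's actual output instead.
login_cmd='docker login -u AWS -p EXAMPLETOKEN -e none https://123456789012.dkr.ecr.us-east-1.amazonaws.com'

# Field 6 of the space-separated command is the password/token.
TOKEN=$(echo "$login_cmd" | cut -d' ' -f6)
PAYLOAD="{\"value\":\"$TOKEN\"}"
echo "$PAYLOAD"
```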

  5. James Wilson

    We have something similar to Daryl Stultz's setup.

    We have a Lambda function that is triggered through AWS CloudWatch rules every 6 hours.

    This seems to work well especially since we don’t need a server/cron to trigger it.

    The TypeScript code for the Lambda/Node.js function is below:

    ///<reference path="../node_modules/@types/node/index.d.ts"/>
    
    import * as AWS from 'aws-sdk';
    import * as Lambda from 'aws-lambda';
    import * as https from 'https';
    import * as http from 'http';
    
    const ecr = new AWS.ECR({apiVersion: '2015-09-21'});
    
    let aws_ecr_auth_uuid: string;
    
    export function handler(event: {}, context: Lambda.Context, callback?: Lambda.Callback) {
        console.info('DEBUG: UpdateECRCredentialsOnBitbucket.handler(JSON.stringify(event) [' + JSON.stringify(event) + '], JSON.stringify(context) [' + JSON.stringify(context) + '], callback [...]');
    
        ecr.getAuthorizationToken({
        'registryIds': [
            '{INSERT YOUR REGISTRY ID HERE}'
        ]
        }, (err1: AWS.AWSError, data: AWS.ECR.Types.GetAuthorizationTokenResponse) => {
        if (err1) {
            const message = 'Error: Cannot get AuthorizationToken. Error [' + err1 + '].';
            console.error(message);
            callback(new Error(message), null);
            return;
        }
    
        // A base64-encoded string that contains authorization data for the specified Amazon ECR registry
        const authorization: string = Buffer.from(data['authorizationData'][0]['authorizationToken'], 'base64').toString();
        const token: string = (authorization).split(':')[1];
    
        console.log('INFO: Retrieved AuthorizationToken [' + token + ']');
    
        const get = https.get({
            host: 'api.bitbucket.org',
            path: '/2.0/teams/{INSERT BITBUCKET TEAM NAME HERE}/pipelines_config/variables/',
            method: 'GET',
            port: 443,
            headers: {
            'Authorization': 'Basic {INSERT YOUR BASE64 BITBUCKET USERNAME:PASSWORD HERE}'
            }
        }, (res1: http.IncomingMessage) => {
    
            res1.on('data', (data_chunk: string) => {
    
            const chunk_object = JSON.parse(data_chunk);
    
            for (const value of chunk_object.values) {
                if (value.key === 'AWS_ECR_AUTH') {
                aws_ecr_auth_uuid = value.uuid;
                break;
                }
            }
            });
    
            res1.on('end', () => {
    
            if (!aws_ecr_auth_uuid) {
                // Create new variable
                const post: http.ClientRequest = https.request({
                host: 'api.bitbucket.org',
                path: '/2.0/teams/{INSERT BITBUCKET TEAM NAME HERE}/pipelines_config/variables/',
                method: 'POST',
                port: 443,
                headers: {
                    Authorization: 'Basic {INSERT YOUR BASE64 BITBUCKET USERNAME:PASSWORD HERE}',
                    'Content-Type': 'application/json'
                }
                }, (res2: http.IncomingMessage) => {
                res2.on('data', (data_chunk: string) => {
                    const data_chunk_object: {error: {message: string, data: {arguments: any, key: string }}} = JSON.parse(data_chunk);
                    if (data_chunk_object && data_chunk_object.error) {
                    const message = 'Error: Cannot create new AWS_ECR_AUTH variable. Error [' + data_chunk_object.error.message + '].';
                    console.error(message);
                    callback(new Error(message), null);
                    } else {
                    console.info('Info: Successfully created new AWS_ECR_AUTH variable. Response [' + data_chunk + ']');
                    callback(null, true);
                    }
                });
                });
    
                post.on('error', function (err2: Error) {
                const message = 'Error: Cannot create new AWS_ECR_AUTH variable. Error [' + err2 + '].';
                console.error(message);
                callback(new Error(message), null);
                return;
                });
    
                // Write data to request body
                post.write('{ "key" : "AWS_ECR_AUTH", "value" : "' + token + '", "secured" : true }');
                post.end();
            } else {
                // Update existing variable
                const put: http.ClientRequest = https.request({
                host: 'api.bitbucket.org',
                path: '/2.0/teams/{INSERT BITBUCKET TEAM NAME HERE}/pipelines_config/variables/' + aws_ecr_auth_uuid,
                method: 'PUT',
                port: 443,
                headers: {
                    Authorization: 'Basic {INSERT YOUR BASE64 BITBUCKET USERNAME:PASSWORD HERE}',
                    'Content-Type': 'application/json'
                }
                }, (res3: http.IncomingMessage) => {
                res3.on('data', (data_chunk: string) => {
    
                    const data_chunk_object: {error: {message: string, data: {arguments: any, key: string }}} = JSON.parse(data_chunk);
                    if (data_chunk_object && data_chunk_object.error) {
                    const message = 'Error: Cannot update existing AWS_ECR_AUTH variable. Error [' + data_chunk_object.error.message + '].';
                    console.error(message);
                    callback(new Error(message), null);
                    } else {
                    console.info('Info: Successfully updated existing AWS_ECR_AUTH variable. Response [' + data_chunk + ']');
                    callback(null, true);
                    }
                });
                });
    
                put.on('error', function (err3: Error) {
                const message = 'Error: Cannot update existing AWS_ECR_AUTH variable. Error [' + err3 + '].';
                console.error(message);
                callback(new Error(message), null);
                return;
                });
    
                // Write data to request body
                put.write('{ "value" : "' + token + '", "secured" : true }');
                put.end();
            }
            });
        });
    
        get.on('error', (err4: Error) => {
            const message = 'Error: Cannot retrieve list of variables. Error [' + err4 + '].';
            console.error(message);
            callback(new Error(message), null);
            return;
        });
        });
    }
    
  6. Tim Hobbs

    We have the same requirement to use private images stored in AWS ECR. Using an external system with permission to update Pipelines environment variables is not an acceptable workaround for us.

    I agree with the original suggestion of @Michael Juliano and suggest using AWS variables directly instead of using a file.

    @Sebastian Cole - please keep in mind that anyone following the principle of least privilege is not using AWS credentials with full access. It would be typical and good practice to have AWS credentials with only pull (read-only) permission for the build image, and different AWS credentials with push (write) permission for the image being created. Also, the images may be stored in different AWS registries.

    Set the necessary AWS variables at the account or repository level:

    AWS_PULL_ACCESS_KEY_ID

    AWS_PULL_SECRET_ACCESS_KEY

    AWS_PULL_SESSION_TOKEN (If required)

    AWS_PULL_DEFAULT_REGION

    Configure the image with newly created properties:

    image:
      name: <aws-ecr-image>
      aws:
        access_key_id: AWS_PULL_ACCESS_KEY_ID
        secret_access_key: AWS_PULL_SECRET_ACCESS_KEY
        session_token: AWS_PULL_SESSION_TOKEN  
        region: AWS_PULL_DEFAULT_REGION
    

    Please enhance pipelines to:

    • handle properties for aws login
    • install awscli (and keep up-to-date)
    • perform docker login with aws ecr get-login, which uses the environment variables and no additional files
  7. Sebastian Cole staff

    We've just enabled ECR support for all customers – currently in the process of releasing the documentation – you heard it here first!

    YAML format is:

    image:
      name: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/java:8u66
      aws:
        access-key: $AWS_ACCESS_KEY
        secret-key: $AWS_SECRET_KEY
    

    cheers.

  8. Sebastian Cole staff

    @Chris Cannell this is for pulling the step image where your scripts get executed. If you're building and pushing from pipelines, you'll follow the same process as normal.

    eval $(aws ecr get-login --region <region>)
    docker push registry.com/user/format
    
  9. Dean Kayton

    Wow, as I post this (after trying various things last night), I found what the problem is.

    It is one of two things: either the blank line between the image block and the pipelines block in the YAML is not supported, or the ${} variable format is not supported.

  10. Javed Gardezi

    @Sebastian Cole, can you please elaborate on the step in more detail? For this:

    image:
      name: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/java:8u66
      aws:
        access-key: $AWS_ACCESS_KEY
        secret-key: $AWS_SECRET_KEY
    

    How do I push my image to ECR with your script? Currently, my way of doing is

    - step:
        name: Build & Register image with production registry
        # python image with aws-cli installed
        image: tstrohmeier/awscli:3.6.3
        script:
          # aws login
          - echo Logging in to Amazon ECR...
          - eval $(aws ecr get-login --region ${AWS_DEFAULT_REGION} --no-include-email)
          # docker (braces are needed so the underscores are not parsed as part of the variable names)
          - export BUILD_ID=${BITBUCKET_BRANCH}_${BITBUCKET_COMMIT}_${BITBUCKET_BUILD_NUMBER}
          - docker build -t ${AWS_REGISTRY_URL}:$BUILD_ID .
          - docker push ${AWS_REGISTRY_URL}:$BUILD_ID
          - docker tag ${AWS_REGISTRY_URL}:$BUILD_ID ${AWS_REGISTRY_URL}:development
          - docker push ${AWS_REGISTRY_URL}:development
    

    As you can see, I can build the Docker image and push it to ECR using image: tstrohmeier/awscli:3.6.3. How does your announcement ("We've just enabled ECR support for all customers") help us with the above steps?

    Kind regards, Javed Gardezi

  11. Philip Hodder staff

    Hi @Javed Gardezi,

    This feature is only to add native support for pulling images from ECR as the container your build runs inside, as this was previously not possible to do without extensive workarounds.

    Pushing images to ECR remains the same as you are doing right now (setting up auth for docker and then using docker commands to push).

    This ticket here may be closer to what you are requesting (steps that take parameters and can do more complex operations): https://bitbucket.org/site/master/issues/12751/allow-more-capable-steps-that-can-be

    Thanks,

    Phil

  12. Javed Gardezi

    Hi @Philip Hodder,

    Thank you, for your reply.

    Is anything similar coming for Bitbucket Pipelines?

    Furthermore, is there any guideline on automatically deploying the new build to an ECS cluster once the image is pushed to ECR?

    Regards,

    Javed Gardezi
