Suggestion
Resolution: Fixed
Our product teams collect and evaluate feedback from a number of different sources. To learn more about how we use customer feedback in the planning process, check out our new feature policy.
At the moment, all steps are limited to a list of commands that can be run. While useful, a more powerful solution would be to allow other types of scripts. An example of this would be an AWS Lambda step where you only have to provide your settings and everything else is already done for you. Currently you need to go through several steps, which include adding a script to your repository, and that's not an optimal solution.
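For reference, the workaround today looks roughly like this (just a sketch; the script path and arguments are placeholders for whatever you commit to the repository):

#!yaml
# bitbucket-pipelines.yml -- the deployment logic has to live in a script
# committed to the repository and invoked from the step
pipelines:
  branches:
    master:
      - step:
          script:
            - chmod +x scripts/deploy-lambda.sh
            - ./scripts/deploy-lambda.sh $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY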
An example of how this works in a different CI tool is this step that I created for Lambda deployments. The ability to share steps like this across your projects (or with other people) in particular would make Pipelines even more powerful.
Obviously, this would depend on BCLOUD-12750.
[BCLOUD-12751] Allow more capable steps that can be reused
Hi everyone,
I've created two new tickets to track the main requests that we've identified on this issue. With this current issue, it has become difficult to understand who is interested in what and which request is more important to our users, as the two main asks that we've identified are slightly different projects for us.
- Issue BCLOUD-17182 now tracks the request to have better pre-defined templates to help configure Pipelines to work with other tools (e.g. deployments to AWS).
- Issue BCLOUD-17181 tracks the ability for users to define their own configuration which can then be reused across multiple repositories.
- If you are interested in being able to reuse configuration within the same YML file, you can check out YML references and anchors (see the sketch after this list).
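A minimal sketch of the anchors approach (the step contents are just placeholders):

#!yaml
definitions:
  steps:
    - step: &build-test        # define the step once under an anchor
        name: Build and test
        script:
          - npm install
          - npm test

pipelines:
  branches:
    master:
      - step: *build-test      # reuse it with an alias
    develop:
      - step: *build-test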
Please vote for and watch the new tickets linked above based on what you're interested in seeing. This will help us gauge the importance of each request. If you have a suggestion that's not covered by the two tickets above, please raise a new ticket so that we can assess and track it separately.
Thanks for the help!
Aneita
Hi,
Thanks for the feedback! Exactly, the proposed task model will be a step forward in helping you keep your bitbucket-pipelines.yml configuration files clean and maintainable across repositories. In the future, we could also consider reusing parts of the bitbucket-pipelines.yml file, or even the whole file. We'd like to keep getting feedback about your specific reuse needs, along with concrete use cases and examples, to validate whether the proposed task model is something our users are interested in.
@mohammadnorouzi, regarding the specific details of the proposal (keep in mind that this is not yet finalised and things might change slightly):
- Task name would be optional.
- The base image of the task would be declared in the metadata file in the other repo, not when using the task. The IMAGE parameter in the example was just an example of a task that deploys a Docker image into a registry (nothing related to the task declaration).
- Parameters would be passed as environment variables to the task script. In the task creation script, users would be able to set mandatory parameters and default values, so that as little code as possible is required when using tasks in the bitbucket-pipelines.yml file. Setting mandatory parameters would make tasks more readable, as parameters are explicitly declared. However, with this approach, parameter lists might get longer when using a task, especially if you use several tasks that require the same parameters. That's not yet decided, but it's something we'll take into account.
For example, a task metadata file for Kubernetes might look like this:
#!yaml
name: Deploy to Kubernetes
description: This task deploys to Kubernetes
baseImage: atlassian/pipelines-kubectl
version: 1.0
environment: # required fields only
  - name: APP_NAME
    default: "my app name"
  - name: CLUSTER_NAME
    default: "my cluster name"
  - name: KUBERNETES_HOST
  - name: KUBERNETES_USERNAME
  - name: KUBERNETES_PASSWORD
  - name: IMAGE
script:
  - kubectl config set-cluster $CLUSTER_NAME --server=$KUBERNETES_HOST
  - kubectl config set-credentials deploy-user --username=$KUBERNETES_USERNAME --password=$KUBERNETES_PASSWORD
  - kubectl config set-context deployment --cluster=$CLUSTER_NAME --user=deploy-user
  - kubectl config use-context deployment
  - kubectl set image deployment/$APP_NAME $APP_NAME=$IMAGE:$BITBUCKET_BUILD_NUMBER
and this is how it'd be used across your repositories:
#!yaml
pipelines:
  branches:
    master:
      - step:
          name: Deploy to test
          deployment: test
          script:
            - echo "Starting deployment"
            - task: account/my-kube-task:1.0
              parameters:
                APP_NAME: my-app-name
                CLUSTER_NAME: my-cluster-name
                IMAGE: my-image:latest
                KUBERNETES_HOST: my-kube-host
                KUBERNETES_USERNAME: $KUBERNETES_USERNAME
                KUBERNETES_PASSWORD: $KUBERNETES_PASSWORD
            - echo "Finish deployment"
@rgomish Thanks, it seems good. I especially like the task versioning; it lets us upgrade the scripts smoothly without breaking all repos.
Just wondering, what is "name: Deploy docker image to test environment" for, and is it mandatory? I think it's better for it to be optional, to avoid too many lines.
Also, do we need to pass the image as a parameter to the task? In other words, wouldn't it be better for some parameters, like the image, as well as env variables defined in the current context, to be sent to the task automatically without the need to declare them? This would keep bitbucket-pipelines.yml tidy and easy to read.
My concern is that we end up with a huge list of parameters: consider, for example, 3 different tasks, each of which has 5 to 10 parameters. Perhaps we could come up with a way to define a context, so that parameters marked as 'context-aware' are transferred automatically across all the tasks.
Something like this:
#!yaml
pipelines:
  branches:
    master:
      - step:
          name: Deploy to test
          deployment: test
          script:
            - echo "Starting deployment"
            - context: set USERNAME=abcd
            - context: set PASSWORD=abcd
            - context: set IMAGE=my-image:latest
            # all env variables defined as 'context' are passed in to the
            # tasks below and are available here as well
            - task: account/my-build-task:1.0
            - task: account/my-semver-task:2.1
              parameters:
                UPGRADE_TYPE: $MAJOR_CHANGE
            - task: account/my-deploy-task:1.0
            - echo "Finish deployment"
Also, I am in favor of @choeflake's idea.
@rgomish This is great news! Personally I like this solution! It is clean and effective.
Just to put some things on the wishlist: allowing a shared bitbucket-pipelines.yml would be even better. That would let us use convention over configuration, with minimal configuration per repository. For example:
#!yaml
pipelines:
  shared:
    config: account/my-build-config:1.0
    parameters:
      TARGET: xyz
      DEPLOY: value
But the given solution is a great step forward!
Hi everyone,
Thanks for all of the feedback on this issue. Some good news - we've started speccing out a feature that will let you share scripts between multiple repositories. This will help reduce the amount of repeated configuration across your repositories and make your bitbucket-pipelines.yml file more concise. We'd love to understand whether this is something that will suit the needs of your team:
- You can share scripts across repositories by creating a task.
- Tasks are built on Docker, which gives you the flexibility to use any language or CLI to define your task, and also has the benefits of isolation (one task won't affect the execution of another) and reproducibility.
- Tasks are defined in separate repositories which helps provide versioning capabilities.
- To define a task, you will provide a YAML file with information about the task including:
  - name
  - description
  - base Docker image
  - parameters that are required by the script
  - script commands.
- We will use Pipelines in the task repository to automatically build a Docker image for your task and push it to your specified Docker registry.
- You can use tasks in your bitbucket-pipelines.yml file by providing account and repo name (account/repository) as the task ID, version number and the parameters required by the task (passed as environment variables).
An example of how you might use a task looks like this (keep in mind that this is just an example and that the syntax has not yet been finalised):
#!yaml
pipelines:
  branches:
    master:
      - step:
          name: Deploy to test
          deployment: test
          script:
            - echo "Starting deployment"
            - task: account/my-deploy-task:1.0
              name: Deploy docker image to test environment
              parameters:
                IMAGE: my-image:latest
                USERNAME: $SERVER_USER
                PASSWORD: $SERVER_PASS
            - echo "Finish deployment"
- This model also allows us to provide a set of supported tasks which will simplify the configuration for using Pipelines with other tools (e.g. AWS Lambda, AWS S3, Kubernetes, etc.).
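To illustrate that last point, using a supported task might look something like this (purely illustrative: the task name and parameters here are assumptions, not a finalised API):

#!yaml
pipelines:
  branches:
    master:
      - step:
          script:
            # hypothetical Atlassian-provided task; name and parameters are assumed
            - task: atlassian/aws-lambda-deploy:1.0
              parameters:
                FUNCTION_NAME: my-function
                ZIP_FILE: function.zip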
As mentioned, this solution allows you to define and share scripts between repositories. If you're interested in reusing configuration within the same bitbucket-pipelines.yml file, you can use YAML references to do this today. We'd love your feedback on our proposed solution and to understand whether it suits the use cases described on this issue - you can comment on this issue or send us an email at pipelines-feedback@atlassian.com with your thoughts.
Thanks for helping us improve Pipelines!
@raddna Well, my assumption was that we could set an env variable and it would be available in all steps.
That's a good point, @raddna. If there was a way to have team/account-level scripts like I described above, you could pass params. As it is right now, you can use that custom Docker image method and accept params to your custom script when you call it in bitbucket-pipelines.yml, but again, it's kind of clunky and it would be much nicer if this were an integrated Bitbucket feature.
BTW you may already know this, but there are default environment variables which are automatically set that you can use to determine a decent amount of context when running scripts: https://confluence.atlassian.com/bitbucket/environment-variables-in-bitbucket-pipelines-794502608.html#Environmentvariables-Defaultvariables
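To sketch what I mean by the custom image method (the image and script names here are made up):

#!yaml
# assumes you've published an image that contains your deploy script on its PATH
image: myaccount/build-tools:1.0

pipelines:
  branches:
    master:
      - step:
          script:
            # pass your own params explicitly; default variables like
            # $BITBUCKET_BUILD_NUMBER are already available to the script
            - deploy.sh --env test --tag $BITBUCKET_BUILD_NUMBER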
I think we're missing one important thing in the discussion: passing parameters to steps! (it doesn't help much if [namedStep] is not aware of the context in which it runs).
https://confluence.atlassian.com/bitbucket/yaml-anchors-960154027.html