Limit concurrent pipelines so deployment scripts don't clash

Issue #12821 resolved
Daniel Faria Gomes
created an issue

Hi.

I think the option to enable a pipeline to run sequentially (one at a time per pipeline / branch, ordered by the commit date) would be very useful.

That would be useful (essential, even) in cases where one commit results in one deploy.

The way it is today, you can't commit multiple times in a short period of time, or it will result in multiple deploys running at the same time.

The configuration could be like this:

image: MY_IMAGE
pipelines:
  branches:
    master:
      sequential: true
      - step:
          script:
            - MY_SCRIPT

or

image: MY_IMAGE
pipelines:
  branches:
    master:
      parallel: false
      - step:
          script:
            - MY_SCRIPT

Thanks.

Official response

  • Aneita Yang staff

    Hi everyone,

    We're super excited to share that deployment concurrency control is now available to all users. If you're tracking your deployments using Bitbucket Deployments, we will automatically ensure that only one deployment is in progress for each tracked environment. If there is already an in-progress deployment to an environment, later deployments to the same environment will be paused. You can then manually resume the paused deployments.

    [Image: paused_pipeline_resume_highlight.png]

    Due to the low number of users that experience deployment concurrency issues, we have decided not to support queuing and automatic resuming of paused deployment steps at this point in time. If you are interested in seeing this functionality or have other requests that were not covered by this change, please raise a new feature request so that we can track the request separately.
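
    For reference, a deployment is tracked when a step declares an environment via the deployment keyword in bitbucket-pipelines.yml. A minimal sketch (the image, step name, and script are placeholders):

    image: MY_IMAGE
    pipelines:
      branches:
        master:
          - step:
              name: Deploy to production
              deployment: production
              script:
                - MY_DEPLOY_SCRIPT

    With a configuration like this, a later pipeline that reaches the production deployment step while another deployment to production is in progress will pause at that step until it is manually resumed.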

    We hope you enjoy this feature addition.

    Thanks,
    Aneita

Comments (49)

  1. Daniel Faria Gomes reporter

    Explaining a little more:

    I have a pipeline that does:

    1. Build
    2. Test
    3. Deploy

    Let's say there were two commits in the project:

    1. First commit - 2016-09-01 12:00:04 - By user_1
    2. Second commit - 2016-09-01 12:00:08 - By user_2

    The way Pipelines works today, both commits' pipelines would run at the same time if the first one hadn't finished before the second came in.

    Now, let's say that the Second commit tests finished before the First commit tests.

    What would happen is: the deploy for the second commit would trigger first, and then the deploy for the first commit would trigger.

    That means, in practice, that the last version deployed is actually an older version of the software (and there's nothing the user can do to prevent it, except aborting the build manually).

  2. Sergey Parhomenko

    We also ran into a similar issue, both with deployments and with jobs that update SonarQube (a static code analysis server). Any pipeline that updates a shared remote resource will run into problems if the resource is updated concurrently, or is updated with a less recent code version. This issue limits the usage of Bitbucket Pipelines to CI, which is just the first step in most CD pipelines.

  3. Daniel Faria Gomes reporter

    In my use case, it would also be nice if the pipeline triggered only on the last commit of a push.

    Today it works like this: if I make 3 local commits and then push, it will start 3 pipelines, one for each commit.

    It would be nice to enable something like: if I make 3 local commits and then push, it starts 1 pipeline, considering only the last commit.

  4. Andrew James

    There seem to be two fairly standard CI features missing from Pipelines:

    • Quiet period - A period of time to wait after a commit before triggering the pipeline; any commit in that period resets the wait timer.
    • Simultaneous Builds - Number of builds that can be run in parallel on a given branch.

    These should be configured independently:

    image: MY_IMAGE
    pipelines:
      branches:
        master:
          quiet-period: 120
          simultaneous-builds: 1
          - step:
              script:
                - MY_SCRIPT
    
  5. Gordon Johnston

    Any news on this? I would like to be able to restrict across all branches to ensure only one pipeline is running at once.

    Due to some restrictions on a service I integrate with I have to ensure I only have one concurrently running deployment, otherwise the service will get in a mess if it receives multiple concurrent updates.

  6. Nate Silva

    Essential for our deploy process. We want to get away from having to ask "hey is anybody running a deploy right now?" (tap tap the shoulder of the guy wearing headphones, "okay if I run a deploy?"). If we can limit concurrent pipelines we can just push to master knowing it won't clobber another in-progress deploy.

  7. Matt Ryall staff

    Thanks for your interest in this issue. We'll be looking to address this later this year, as part of work on #12844. You might want to also watch that issue for updates related to deployments in Pipelines.

  8. Wolfgang Meyers

    This also causes errors for test suites that need to open a test server on a certain port. If a test suite is already running with that port open, the second parallel run will fail.

  9. Aneita Yang staff

    Hi everyone,

    We're in the process of investigating this piece of work and I'm interested in chatting with some customers about their needs for limiting concurrent pipelines. If you have time and would like to discuss your use case / requirements for this feature and how you envision it working, please reach out to me at ayang@atlassian.com.

    Thanks!
    Aneita

  10. Marcel Gwerder

    Hi @Aneita Yang

    How can we automatically rerun the skipped pipelines? We would like to keep things automated yet make sure everything runs sequentially. The documentation doesn't indicate an option for this. If a manual step is required, we might as well just keep running the pipelines manually in the first place for most cases.

    Thanks, Marcel

  11. Matthias Gaiser (K15t Software)

    Thank you for the update @Aneita Yang

    Skipped pipelines are better for us than the previous behaviour. What I miss from the docs: is every run in the same repo skipped, or is this done per branch? I'd prefer the per-branch skip. In our setup, pipelines on different branches perform updates on different environments, so a per-branch lock/skip would be ideal.

  12. Sander de Boer

    I would also prefer to make this configurable. For example: a deploy pipeline may not run in parallel and has to be queued, while a PSR-2 check pipeline can run in parallel with another PSR-2 check pipeline, or with a deploy pipeline.

  13. Dale Anderson

    This is a step in the right direction, but still falls short of supporting many scenarios. Two issues from my perspective:

    • Skipping builds / deployments could leave the deployed state outdated. Queueing the most recent request would solve it for me, but many shops will want to execute tests against every commit.
    • We need to limit the concurrency of entire pipelines, not just the deployment step. A deployment needs to occur before functional tests can be executed against the deployed app, and those tests need to be queued along with the actual deployment.

    Thanks for your work on this. It's getting there!

  14. Samuel Tannous staff

    Thanks for the feedback all!

    Dale with respect to your second point: "A deployment needs to occur before functional tests can be executed against the deployed app", would running those tests in the same deployment step be an acceptable solution for that requirement?
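
    Running the tests in the same deployment step would look something like this (the step name and scripts are placeholders):

    - step:
        name: Deploy and test
        deployment: staging
        script:
          - MY_DEPLOY_SCRIPT
          - MY_FUNCTIONAL_TEST_SCRIPT

    Since both commands run inside the single tracked step, the concurrency control holds back later pipelines until the tests have finished as well.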

  15. Dale Anderson

    I'm sure I could have figured that out for myself, please forgive my laziness haha. Out of curiosity, what are the underlying differences between a "step" and a "deployment"?

  16. Aneita Yang staff

    Hi everyone,

    We're super excited to share that deployment concurrency control is now available to all users. If you're tracking your deployments using Bitbucket Deployments, we will automatically ensure that only one deployment is in progress for each tracked environment. If there is already an in-progress deployment to an environment, later deployments to the same environment will be paused. You can then manually resume the paused deployments.

    [Image: paused_pipeline_resume_highlight.png]

    Due to the low number of users that experience deployment concurrency issues, we have decided not to support queuing and automatic resuming of paused deployment steps at this point in time. If you are interested in seeing this functionality or have other requests that were not covered by this change, please raise a new feature request so that we can track the request separately.

    We hope you enjoy this feature addition.

    Thanks,
    Aneita

  17. Marcel Gwerder

    The need to manually rerun the jobs unfortunately renders the whole change useless for us. We now get paused deployments instead of failed ones; both have to be rerun manually, so there's no real benefit. :(

  18. Aneita Yang staff

    Hi @Marcel Gwerder,

    Thanks for the feedback. I definitely understand why queuing is useful; but as explained above, only a small number of users run into deployment concurrency problems, so we have decided not to do this for now. If you would like to see this functionality, please raise a new feature request for queuing and automatic resuming of deployments and we can track the interest for that separately.

    Aneita

  19. Loi Nguyen Thanh

    Hi @Aneita Yang, I have two staging environments on different git branches. But when I push code to both branches, only one pipeline runs while the other is paused, and I have to resume it manually. :(

    This is even more difficult when I have 14 staging environments (I do outsourced work for 14 companies with the same product, and they want costs calculated separately).

    Could I propose pausing only pipelines for the same environment on the same git branch? Thank you.

  20. Aneita Yang staff

    Hi @Loi Nguyen Thanh,

    The deployments dashboard is intended to track non-shared environments. It sounds like with 14 staging environments, you might be more interested in issue #15362 which is a request to support more flexible deployment environments. That will give you the ability to configure additional environments and track your staging environments separately. In the meantime, you can remove the deployment tracking on your staging environment so you don't have to resume paused pipelines.
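
    To stop tracking an environment, remove the deployment keyword from the step, for example (the step name and script are placeholders):

    - step:
        name: Deploy to staging
        script:
          - MY_DEPLOY_SCRIPT

    Without the deployment keyword, the step no longer appears on the deployments dashboard and will not be paused by an in-progress deployment.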

    Aneita

  21. Ernesto Serrano

    An easier approach could be an option to cancel the currently running pipelines for the current branch, to guarantee that the last push really is the last one to finish. My team uses Pipelines to build and push Docker images, and if a previous push takes more time to finish its pipeline (Docker cache, internet connection issues...), we can end up with a Docker image that doesn't contain the code of the latest push.
