Thanks for reaching out and for the suggestion. I'll open this issue to gauge the interest of other users on this request.
As mentioned in #12821, we have decided not to support this for now, as the number of users who run into deployment concurrency issues is low, but I will revisit this request in a couple of months to reassess whether this is something that we will support.
We would like to be able to configure pipelines to kill/stop PREVIOUS deployments when there are overlapping deployments.
Here is our use-case:
We have branches for development, staging and production (master)
Deployments to the respective servers are triggered automatically via Pipelines when someone pushes to any of these three branches
When working on an active sprint with several developers, it is possible that multiple deployments are triggered in an overlapping way. This is especially true for the development server.
This is a problem because part of the deployment script runs on our servers; overlapping deployments slow them down to the point where CPU load is maxed out, memory is exhausted, and the server grinds to a halt.
Since the most recent code pushed to the branch is always the most up to date, it would be safe to stop PREVIOUS deployments and let only the latest one complete. That would ensure the server has the latest code and that unnecessary deployments are killed.
Queuing paused deployments is less helpful: although it eventually produces the right server state and reduces the load on our servers, it delays the final, up-to-date state of the server longer than necessary. Near a deadline this slows our QA process, which creates confusion and frustration.
Some of our pipelines involve an external build server that we cannot easily dockerize. The build process contains steps that require access to a shared resource that should not be run in parallel (typical example: setting up a database or doing migrations). Therefore we need to serialize the builds to avoid subsequent commits clashing with each other and corrupting the build process or the environment.
The second use case is heavier integration and UI tests, which also involve steps that cannot run in parallel: restoring a database to a predefined state, or test runners that want to bind to a local port. Integration tests also tend to run longer, so the probability of an overlap is higher. Running a second instance of the test suite while the first one is still running will therefore typically fail both.
Our current workaround is to add the 'deployment' parameter to the first step of any pipeline. However, this causes lots of paused jobs even in a small team. Manually reviewing and resuming them is a pain and defeats the purpose of automation. I'm not sure I can keep a straight face if I go to my team and request to "please try to limit the number of commits per hour" or "ask before you push".
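For context, the workaround looks roughly like this in bitbucket-pipelines.yml. This is only a sketch: the branch name, environment name, and deploy script are placeholders, not our actual config.

```yaml
pipelines:
  branches:
    develop:
      - step:
          name: Build and deploy
          # The 'deployment' keyword ties this step to an environment;
          # Pipelines then pauses a second run targeting the same
          # environment while an earlier one is still in progress.
          deployment: test
          script:
            - ./deploy.sh   # hypothetical deploy script
```

The serialization is a side effect of the environment lock, which is why every overlapping run ends up as a paused job that someone has to resume by hand.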
This is a problem for my team, and I expect for any team that uses limited external resources in a build. A database plus integration tests is a common example, and it deserves a better response than "not many people have requested the feature". The current deployment function can be used as a hack to limit concurrent builds, but that creates a problem when you want to use the function for its intended purpose. We are using the deployment hack to limit concurrent builds for a complete database teardown and rebuild combined with integration tests. It would be good to have the feature completed so that a paused build resumes when the blocking build in progress completes, paused builds can be cancelled, and concurrency limits can be applied to any step (rather than deployment steps only).
Build status is also problematic for manually triggered builds, and I suspect for paused builds: they register as successful when viewing a pull request despite being incomplete.
In minimum-viable-product terms, the current implementation only offers concurrency limitation: a nice start, but a few critical features short of what is needed in a tool intended to assist development and automate work. It can be used as a hack in the short term (which allows our team, who are new to Bitbucket, to use the tool), but it would be nice to know it will be finished in the medium term.
Almost every day I merge two small PRs; the second deployment gets paused and needs to be manually prodded. On some days I have to do this up to 10 times.
Queuing is good, but the fact that when one build finishes the next is not un-paused automatically is very annoying. Can I make the last step of a build un-pause the next one through an API or something?
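I'm not aware of an endpoint for un-pausing a paused step, but the Bitbucket Cloud REST API does let you trigger a fresh pipeline run for a branch, which a final build step could call as a crude substitute. Below is a minimal sketch; the workspace, repo slug, and branch are placeholder values, and authentication (an app password or token) is assumed and not shown.

```python
import json

# Hypothetical placeholders -- substitute your own workspace and repo slug.
WORKSPACE = "my-workspace"
REPO_SLUG = "my-repo"


def build_trigger_request(branch: str) -> tuple[str, str]:
    """Build the (url, json_body) for POSTing a new pipeline run for a branch
    to the Bitbucket Cloud API (POST /2.0/repositories/{ws}/{repo}/pipelines/)."""
    url = (
        "https://api.bitbucket.org/2.0/repositories/"
        f"{WORKSPACE}/{REPO_SLUG}/pipelines/"
    )
    body = json.dumps({
        "target": {
            "type": "pipeline_ref_target",
            "ref_type": "branch",
            "ref_name": branch,
        }
    })
    return url, body


if __name__ == "__main__":
    url, body = build_trigger_request("master")
    print(url)
    # Actually sending it requires credentials, e.g. from a pipeline step:
    #   curl -X POST -u "$USER:$APP_PASSWORD" \
    #        -H "Content-Type: application/json" \
    #        -d "$BODY" "$URL"
```

Note this starts a whole new pipeline rather than resuming the paused one, so the paused run would still need to be cancelled by hand.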
In our case, every branch that is merged into master fires a deploy. The Bitbucket deployments feature is handy because it avoids multiple simultaneous deploys to production, preventing a broken deploy state, but it falls short since we cannot automatically start the deploys that were queued.
We currently do between 30 and 40 deploys per week in a team of 12 people, so it's pretty unreasonable to resume each deploy manually.