As discussed on #13863, we don't have plans to offer this in the short term in Pipelines, given other competing needs. But we would welcome votes on this ticket, along with comments describing any specific use cases that drive high memory requirements for builds on Pipelines.
I'm sorry that Pipelines isn't meeting your needs right now. Unfortunately we don't have a short term fix available to increase memory for your pipeline to relieve your build time issues.
There are a number of reasons why increasing the memory limit is not simple. We rely on the fact that each pod runs with a 4 GB memory limit to fairly schedule builds on our shared infrastructure, plan host growth for the service, and choose allocations for internal services that run inside each pod. Changing the limit, or making it configurable, is not an insurmountable problem, but it is also not a quick fix.
My mention of other priorities is not meant to be an opaque response. We're currently working on two other features that are highly voted: #12757 and #12790, and some small improvements for getting new users started faster (SSH auth and similar).
If you want to discuss this issue further, I think it would be best to set up a time to chat. My email is firstname.lastname@example.org, so if you can shoot me a quick email I can give you access to my calendar to book a time that suits.
This seems doubly relevant given that Pipelines now allows spinning up multiple containers. Our use case (running relatively large PHPUnit and Behat suites, requiring a database server!) is currently extraordinarily difficult. It's impossible to run our full suite within the two hour limit with the given resource allocation.
Hi folks, we've been thinking about this problem and trying to find a way to offer increased memory while still covering our costs, and also without introducing new and more complex pricing for Bitbucket or Pipelines.
We have a proposed solution which will double the memory (and in future, CPU) allocation for your builds for double the cost in build minutes. Configuration would work as follows:
```yaml
options:
  size: 2x  # double memory (8 GB) for all builds in this repo - builds cost 2x minutes
pipelines:
  default:
    - step:
        name: A small step
        size: 1x  # override global setting to use 4 GB for this step
        script:
          - ...
    - step:
        name: A giant leap
        size: 2x  # can be set at the step level
        script:
          - ...
```
As mentioned above, each build step that executes with size: 2x will be billed at 2x the build minutes. You will have the flexibility to configure this setting at the step or repository level. For example, a 2x build step that takes 5 minutes elapsed time will be charged 10 minutes from your monthly build minute allocation.
If you have any feedback on this proposal (positive or negative), please reply here. We're starting work on this feature this week, with early access starting before the end of the year. I'll keep you updated on the progress.
2x, 3x, 4x, 8x... 4 GB in modern development of complex apps basically closes the door on a large group of customers. I'm having fun rearranging our tests to cram them into 4 GB, as opposed to actually testing our app properly. It's a horribly cramped space.
I know you guys are trying to get revenue on top of users' testing, but a simple connection to ECS to let us run it ourselves would be great. Heck, just charge 10% over spot EC2 pricing or something, let us do our thing.
@Matt Ryall, thank you for giving us a solution to this problem. The 4GB limit has been a major source of pain in our build process with our java tests intermittently exceeding the limit causing deployments to fail. Look forward to testing this feature once early access is available.
@Jeffrey Labonski - thanks for the feedback. We're just planning to offer 4/8 GB to start with. Larger allocations introduce complications with scheduling work on our Kube cluster, so we'll review the need for larger ones once this first piece is out.
In terms of the ECS idea, I'd like to hear more about what you'd like to see from Pipelines. If you have time for a chat, please send me an email and we can line something up.
Same goes for others on this thread, particularly if you’re trying to get Pipelines adopted for a larger team and hitting some roadblocks.
We're not a large team, and our needs aren't complicated. The reasons we'd like ECS support are:
- Flexible performance, based on need and how much we want to pay. Plus we don't have to wait for Pipelines-specific support when our needs change: we'll just spin up a different instance type.
- Other OS support (Windows, specifically).
The worst part is: we were using those things (and others) in Bamboo Cloud, but it got turned off before Pipelines was ready to replace it. Now we're stuck with slow builds and having to support both Pipelines and AppVeyor for CI.
I think tying together CPU and memory as a bundle will work in the short term, but it doesn't seem elegant: many test workloads aren't particularly well parallelised, yet rely on amounts of memory that would require disproportionately high CPU allocations.
@Matt Ryall I'd be really interested in seeing two separate rates: one per CPU core and another per GB of memory. You could set these rates against build minutes and allow users to specify in their pipelines configuration what their desired instance size is:
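A hypothetical configuration along those lines might look like the sketch below. The `cpu` and `memory` keys are purely illustrative - they are not part of the actual Pipelines schema, which only supports the `size: 1x/2x` option described above:

```yaml
pipelines:
  default:
    - step:
        name: Memory-heavy integration tests
        # hypothetical keys: each billed at its own rate against build minutes
        memory: 8GB  # charged per GB
        cpu: 1       # charged per core
        script:
          - ...
```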
We are hitting memory issues even with what I would consider to be a relatively small project. I suspect using Gradle's parallel builds is partly to blame. Turning off parallel builds will make builds slower. For CI I have always wanted fast turnaround and paid for large servers to facilitate this.
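For anyone hitting the same Gradle issue: parallel execution and the worker JVM heap can be capped from `gradle.properties`, trading build speed for a lower peak memory footprint that fits under the 4 GB pod limit. The heap values below are just examples, not recommendations:

```properties
# gradle.properties - reduce peak memory at the cost of build speed
org.gradle.parallel=false
# cap the Gradle daemon JVM heap (values are illustrative)
org.gradle.jvmargs=-Xmx2g -XX:MaxMetaspaceSize=512m
```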
So the current limitation is pretty much a complete adoption blocker for us. I'm actually looking at what it would take to migrate off of Pipelines until some of these issues are addressed. I would happily pay more money for a larger CI environment.
@Ade Miller - we're working on this currently, and aim to have the feature delivered before the end of the year. It will cost 2x build minutes for 2x memory+CPU, as mentioned above. We'll keep this issue updated with progress.
@Luke Carrier - thanks for the suggestion, but we're going to keep this simple to start with by offering just one size configuration for bigger builds. This means we can deliver it much sooner for you. (Scheduling builds that can select arbitrary sizes for both memory and CPU on a shared Kube cluster would be quite a challenge.) Hopefully it will still prove useful to you.
@crmoore-work - sorry to hear about the frustration, hopefully this feature will relieve some of the problems with slow builds. Running Bamboo on AWS yourself could be an option if you need the flexibility and control of your own ECS setup. Windows support (issue #13452) is on our longer term roadmap for Pipelines, but will be a big project.
@Matt Ryall with all due respect if it comes down to overpaying for resources we won't use, Atlassian will lose our business. I hope you'll reconsider this decision in due course, because it seems incredibly short sighted.
Yep, we definitely will monitor and review as we go. We monitor CPU usage on our hosts, so if it remains low while demand for more memory keeps growing (or vice versa), we'll definitely adjust our plans and options available to customers. At the moment, we're seeing demand for more of both, so that's our first cut for this feature.