Support builds needing more than 4GB of memory, even with higher pricing

Issue #13874 resolved
Nick McCouat
created an issue

It would be good to have the option to pay more for 8 GB of RAM (perhaps more cores too?) rather than the standard 4 GB.

Actually, if you have a 16-core / 8 GB option, that would be excellent.

(Originally raised as part of #13863.)

Comments (26)

  1. Matt Ryall staff

    Thanks for raising this, Nick.

    As discussed on #13863, we don't have plans to offer this in the short term in Pipelines, given other competing needs. However, we'd welcome votes on this ticket and comments describing any specific use cases that drive high memory requirements for builds on Pipelines.

  2. Nick McCouat reporter

    Hey Matt,

    To an outside observer it looks like a small change to add a second configuration to Pipelines.

    As per the other ticket, if you do want to put this off, it might be worth explaining the complexity a bit more, i.e. why it is particularly difficult or time-consuming.

  3. Matt Ryall staff

    Hi Nick,

    I'm sorry that Pipelines isn't meeting your needs right now. Unfortunately, we don't have a short-term fix available to increase memory for your pipeline to relieve your build-time issues.

    There are a number of reasons why increasing the memory limit is not simple. We use the fact that each pod runs with a 4 GB memory limit to fairly schedule builds on our shared infrastructure, plan host growth for the service, and choose allocations for the internal services that run inside each pod. Changing the limit, or making it configurable, is not an insurmountable problem, but it is also not a quick fix.

    My mention of other priorities is not meant to be an opaque response. We're currently working on two other features that are highly voted: #12757 and #12790, and some small improvements for getting new users started faster (SSH auth and similar).

    If you want to discuss this issue further, I think it would be best to set up a time to chat. My email is mryall@atlassian.com, so if you can shoot me a quick email I can give you access to my calendar to book a time that suits.

    Thanks,
    Matt

  4. Nick McCouat reporter

    Hey Matt,

    Thanks for the detailed response - it's much appreciated that you've taken the time to explain what and why.

    Unfortunately, what you've said sounds very reasonable. You may be right that we're perhaps trying to shoehorn a problem that just doesn't fit well into Pipelines.

    That being said, I'll definitely look forward to any updates you have, either on this or on related issues such as incremental builds / caching of internal build artefacts; all welcome at this end.

    thanks - Nick

  5. Luke Carrier

    This seems doubly relevant given that Pipelines now allows spinning up multiple containers. Our use case (running relatively large PHPUnit and Behat suites, requiring a database server!) is currently extraordinarily difficult. It's impossible to run our full suite within the two hour limit with the given resource allocation.
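    For context, our database dependency is wired up roughly along these lines (the image name, credentials, and test command here are purely illustrative):

    ```yaml
    # bitbucket-pipelines.yml (sketch): attach a MySQL service container to a step
    definitions:
      services:
        mysql:
          image: mysql:5.7
          variables:
            MYSQL_DATABASE: test
            MYSQL_ROOT_PASSWORD: let_me_in

    pipelines:
      default:
        - step:
            services:
              - mysql  # the service container shares the step's overall memory allocation
            script:
              - vendor/bin/phpunit
    ```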

  6. Paul O'Riordan

    We're hitting the memory limits on a small multi-module Java / Maven project. A number of tools refuse to run in Bitbucket Pipelines, including SonarQube and, most recently, JBehave.
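    One stopgap is capping the Maven JVM heap explicitly, so the build fails with a clear out-of-memory error rather than the container being killed. A sketch; the 2 GB figure is an illustrative starting point, not a recommendation:

    ```yaml
    # bitbucket-pipelines.yml (sketch): cap Maven's heap inside the 4 GB container
    pipelines:
      default:
        - step:
            script:
              - export MAVEN_OPTS="-Xmx2048m"  # illustrative cap; leaves headroom for forked test JVMs
              - mvn -B verify
    ```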

  7. Matt Ryall staff

    Hi folks, we've been thinking about this problem and trying to find a way to offer increased memory while still covering our costs, and also without introducing new and more complex pricing for Bitbucket or Pipelines.

    We have a proposed solution which will double the memory (and in future, CPU) allocation for your builds for double the cost in build minutes. Configuration would work as follows:

    options:
      size: 2x  # double memory (8 GB) for all builds in this repo - builds cost 2x minutes
    
    pipelines:
      default:
        - step:
            name: A small step
            size: 1x  # override global setting to use 4 GB for this step
            script:
              - ...
        - step:
            name: A giant leap
            size: 2x  # can be set at the step level
            script:
              - ...
    

    As mentioned above, each build step that executes with size: 2x will be billed at 2x the build minutes. You will have the flexibility to configure this setting at the step or repository level. For example, a 2x build step that takes 5 minutes elapsed time will be charged 10 minutes from your monthly build minute allocation.

    If you have any feedback on this proposal (positive or negative), please reply here. We're starting work on this feature this week, with early access starting before the end of the year. I'll keep you updated on the progress.

  8. Jeffrey Labonski

    2x, 3x, 4x, 8x... 4 GB in modern development of complex apps basically closes the door on a large group of customers. I'm having fun rearranging our tests to cram them into 4 GB, as opposed to actually testing our app properly. It's a horribly cramped space.

    I know you guys are trying to get revenue on top of users' testing, but a simple connection to ECS to let us run it ourselves would be great. Heck, just charge 10% over spot EC2 pricing or something, let us do our thing.

  9. Christian Iacullo

    @Matt Ryall, thank you for giving us a solution to this problem. The 4 GB limit has been a major source of pain in our build process, with our Java tests intermittently exceeding the limit and causing deployments to fail. Looking forward to testing this feature once early access is available.

  10. Matt Ryall staff

    @Jeffrey Labonski - thanks for the feedback. We’re just planning to offer 4 GB / 8 GB to start with. Larger allocations introduce complications with scheduling work on our Kube cluster, so we’ll review the need for larger ones once this first piece is out.

    In terms of the ECS idea, I’d like to hear more about what you’d like to see from Pipelines. If you have time for a chat, please send me an email and we can line something up.

    Same goes for others on this thread, particularly if you’re trying to get Pipelines adopted for a larger team and hitting some roadblocks.

  11. Christopher Moore

    We're not a large team, and our needs aren't complicated. The reasons we'd like ECS support are:

    1. Flexible performance, based on need and how much we want to pay. Plus we don't have to wait for Pipelines-specific support when our needs change: we'll just spin up a different instance type.
    2. Other OS support (Windows, specifically).

    The worst part is: we were utilizing those things (and others) in Bamboo Cloud, but it got turned off before Pipelines was ready to replace it. Now we're stuck with slow builds and having to support both Pipelines and AppVeyor for CI.

  12. Luke Carrier

    I think tying CPU and memory together as a bundle will work in the short term, but it doesn't seem elegant: many test workloads aren't particularly well parallelised, yet rely on amounts of memory that would require disproportionately high CPU allocations.

    @Matt Ryall I'd be really interested in seeing two separate set rates: one per CPU core and another per GB of memory. You could set these rates against build minutes and allow users to specify in their pipelines configurations what their desired instance size is:

    size:
      cpu: 2
      memory: 8
    
  13. Ade Miller

    We are hitting memory issues even with what I would consider to be a relatively small project. I suspect using Gradle's parallel builds is partly to blame. Turning off parallel builds will make builds slower. For CI I have always wanted fast turnaround and paid for large servers to facilitate this.

    So the current limitation is pretty much a complete adoption blocker for us. I'm actually looking at what it would take to migrate off of Pipelines until some of these issues are addressed. I would happily pay more money for a larger CI environment.
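    In the meantime, Gradle's parallelism and daemon heap can be dialled down per build, trading speed for staying inside the memory limit. A sketch; the flag values here are illustrative starting points:

    ```yaml
    # bitbucket-pipelines.yml (sketch): rein in Gradle's memory use for CI
    pipelines:
      default:
        - step:
            script:
              # disable parallel project execution and cap the daemon heap (illustrative values)
              - ./gradlew build -Dorg.gradle.parallel=false -Dorg.gradle.jvmargs=-Xmx2g
    ```

    The same settings can live in a CI-specific gradle.properties instead, which keeps local builds fast.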

  14. Matt Ryall staff

    @Ade Miller - we're working on this currently, and aim to have the feature delivered before the end of the year. It will cost 2x build minutes for 2x memory+CPU, as mentioned above. We'll keep this issue updated with progress.

    @Luke Carrier - thanks for the suggestion, but we're going to keep this simple to start with by offering just one size configuration for bigger builds. This means we can deliver it much sooner for you. (Scheduling builds that can select arbitrary sizes for both memory and CPU on a shared Kube cluster would be quite a challenge.) Hopefully it will still prove useful to you.

    @crmoore-work - sorry to hear about the frustration, hopefully this feature will relieve some of the problems with slow builds. Running Bamboo on AWS yourself could be an option if you need the flexibility and control of your own ECS setup. Windows support (issue #13452) is on our longer term roadmap for Pipelines, but will be a big project.

  15. Luke Carrier

    @Matt Ryall, with all due respect, if it comes down to overpaying for resources we won't use, Atlassian will lose our business. I hope you'll reconsider this decision in due course, because it seems incredibly short-sighted.

  16. Matt Ryall staff

    Yep, we definitely will monitor and review as we go. We monitor CPU usage on our hosts, so if it remains low while demand for more memory keeps growing (or vice versa), we'll adjust our plans and the options available to customers. At the moment, we're seeing demand for more of both, so that's our first cut for this feature.

  17. Sebastian Cole

    I'm pleased to announce that Pipelines can now be configured with a size option to double the resources available (4 GB -> 8 GB). This can be applied to all steps within a pipeline, e.g.

    options:
      size: 2x
    pipelines:
      ... snip ...
    

    or for particular steps

    pipelines:
      default:
        - step:
            script:
              - ...
        - step:
            size: 2x
            script:
              - ...
    

    See https://confluence.atlassian.com/display/BITBUCKET/Configure+bitbucket-pipelines.yml#Configurebitbucket-pipelines.yml-ci_size for details.
