Ability to specify memory usage for Pipelines Services

Issue #14752 · resolved
Ronald Chia staff created an issue

Provide users the ability to specify each service's memory usage instead of using the default 1 GB.

For example, allow users to reserve 600 MB of memory for a service that requires only 550 MB to run.
ℹ This will allow users to use the remaining 400 MB for the main build process.
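
A hypothetical sketch of what the requested configuration might look like (the `memory` key, its placement, and the `redis` service are illustrative assumptions, not existing syntax at the time of this request):

```yaml
# Sketch only: cap a service at 600 MB instead of the default 1 GB.
definitions:
  services:
    redis:                 # hypothetical service for illustration
      image: redis:3.2
      memory: 600          # MB reserved; frees ~400 MB for the main build
```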

Comments (15)

  1. Aneita Yang staff
    • changed status to open

    Thanks for the feedback!

    I'll open this ticket to gauge the interest of other users on being able to configure memory allocation for services. However, the team is currently working on other higher priority features so it is unlikely that we'll introduce this in the near future.

  2. Basile Beldame

    Now that the 1 GB memory limit per service is definitively enforced, it would be important to have a way to increase this limit, since it can block automatic deployment for some people.

  3. Swapnil Deshpande

    As far as I know, size: 2x is applied to what is defined in pipeline steps. However, if you are building your application inside a Docker container, that doesn't apply. I may be wrong; the Bitbucket team can correct me.
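
    For reference, `size: 2x` is set at the step level in bitbucket-pipelines.yml, roughly as in this sketch (the build script name is an assumption for illustration):

    ```yaml
    pipelines:
      default:
        - step:
            size: 2x          # doubles the memory available to this step
            script:
              - ./build.sh    # hypothetical build script
            services:
              - docker        # containers run inside the docker service have
                              # their own limit, which size: 2x alone does not raise
    ```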

  4. David Gregory

    It would be really appreciated if this could be resolved, as it's essentially impossible to build medium-to-large Scala projects in Pipelines now - SBT needs at least 2 GB as a rule.

  5. Matt Ryall staff

    Thanks for your feedback on this issue.

    We're working on this issue currently and hope to have it available to Pipelines Alpha customers next week, and generally available a few days later.

    We also reverted the enforcement of the 1 GB Docker memory limit last week, and will leave it disabled until this configuration option is available. So builds that need more memory for Docker pulls or builds should now be passing again.

    If you're still having trouble with SBT builds or other things, please raise a support ticket and one of our engineers can investigate.

  6. David Gregory

    @Matt Ryall should we be able to observe that change in builds at the moment? When I run builds that respect the Docker cgroup limits I still see that the JVM sets: uintx MaxHeapSize := 1073741824 and my builds still fail due to OutOfMemoryErrors.

  7. Matt Ryall staff

    @David Gregory - the 1 GB limit was removed for Docker-in-Docker, but is still in place for services. Perhaps that's the reason for what you're seeing, or perhaps your JVM args are inadvertently setting Xmx somehow?

    If that doesn't help, probably best to raise a support ticket so we can look at your build configuration and advise on how to fix.
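
    One way to rule out JVM defaults is to pin the heap size explicitly, as in this sketch (the SBT_OPTS value and step contents are assumptions for illustration, not a confirmed fix):

    ```yaml
    pipelines:
      default:
        - step:
            script:
              # Set the max heap explicitly so the JVM's default sizing
              # (typically a fraction of detected memory) cannot cap it.
              - export SBT_OPTS="-Xmx2g"
              - sbt test
    ```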

  8. Sebastian Cole

    Hey All,

    We've just released the changes to support custom memory allocation for services. It's so fresh that the documentation and the built-in bitbucket-pipelines.yml editor haven't been updated yet!

    To use custom memory limits, you will need to commit the changes via git/hg push.

    YML format:

    pipelines:
      default:
        - step:
            # Step resource limits are calculated from the services
            # the step consumes, with a lower limit of 1024 MB enforced.
            # Implied memory limit for this step: 1536 MB.
            services:
              - docker
              - mysql
            script:
              - docker build ...
    definitions:
      services:
        mysql:
          image: mysql:latest
          memory: 512 # megabytes, minimum of 128 MB
        docker: # we specifically disallow modifying the image
          memory: 2048 # megabytes

    Quick list of limitations:

    • Memory is specified in megabytes
    • Don't specify the unit in the YML
    • The build container needs at least 1024 MB
    • Services need at least 128 MB
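
    Reading these limits together with the example above, the implied build-container allocation seems to work out as follows (the 4096 MB step total is my reading of how regular steps are sized, not something stated in this comment):

    ```yaml
    # Assumed total for a regular step: 4096 MB
    #   docker service:  2048 MB
    #   mysql service:    512 MB
    #   build container: 4096 - 2048 - 512 = 1536 MB  (>= required 1024 MB)
    ```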

    I'd love to hear your thoughts and feedback about the feature, and look forward to your comments below.

    Cheers, Seb
