Recently, we’ve been working with Bitbucket Pipelines (Pipelines hereafter), brought to us by the developer tools powerhouse Atlassian, well known not only for its Git repository offerings but also for other commonly used tools like Jira, Trello & Confluence. We’ve been setting up a repeatable Pipeline of steps as a testing framework for Continuous Integration (CI) and Continuous Deployment (CD).
There are a plethora of CI/CD options available these days, from the not-too-complex to the significantly more so. If you already have some familiarity with modern DevOps tooling, e.g., Docker or Kubernetes, then moving to Pipelines as your CI environment will be relatively easy. Pipelines itself is essentially based on Docker, but with its own (useful) nuances overlaid. In some respects, it is very similar to GitHub Actions but is simpler to get started with, as everything resides in one file, bitbucket-pipelines.yml, which sits at the root level of your codebase repository.
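To give a flavour, a minimal bitbucket-pipelines.yml can be as small as a single step; the step name and echo command below are just illustrative placeholders:

pipelines:
  default:
    - step:
        name: Hello
        script:
          - echo "Hello from Pipelines"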
One of the main advantages of using Pipelines is that you don’t have to self-manage your CI/CD server infrastructure. Having managed several CI/CD servers previously, we know the time this saves over the medium term is not to be underestimated!
The advantages continue with the configuration for Pipelines being part of your repository: CI ‘Configuration as Code’, managed in the very same way as the rest of your codebase. Pipelines is managed through the bitbucket-pipelines.yml file and can, with some planning, use the same Docker build files as your development or production environments. Once crafted, your bitbucket-pipelines.yml can be relatively portable between projects, with tailoring to specific project needs easily available.
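As a sketch of reusing your developers’ Docker build files, a step can build the repository’s own Dockerfile by attaching the built-in docker service; the image tag my-app is a hypothetical placeholder:

pipelines:
  default:
    - step:
        name: Build image
        services:
          - docker # Enables Docker commands within the step
        script:
          # Build the repository's own Dockerfile; the tag is hypothetical
          - docker build -t my-app:latest .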
Where your needs are fairly simple, Pipelines comes with ‘services’ that are made available to the pipeline with minimal configuration required. An example of this is the Redis datastore, often employed for caching temporary data in memory.
Example Bitbucket Pipeline configuration
In Pipelines, we can configure a Redis service within the bitbucket-pipelines.yml file to use the current official Docker image:
definitions:
  services:
    redis:
      image: redis:6
We then reference this service within the build steps in our bitbucket-pipelines.yml file:
pipelines:
  default:
    - step:
        name: Build
        image: redis # The build step image
        script:
          - redis-cli -h localhost ping
        services:
          - redis # Referencing our service from the definitions section
When put together, these sections highlight how you can quickly build reusable low-configuration elements:
definitions:
  services:
    redis:
      image: redis:6

pipelines:
  default:
    - step:
        name: Build
        image: redis # The build step image
        script:
          - redis-cli -h localhost ping
        services:
          - redis # Referencing our service from the definitions section
At this point, it is worth noting that, since the Pipelines file is YAML, you can check the YAML syntax online and then cross-check your specific Pipelines configuration with the supplied validator. Failing to do so could prove somewhat costly: if there are errors in your bitbucket-pipelines.yml file, you may burn through build minutes (the unit of charge in Pipelines).
Experience with Memory Configuration
Overall, Bitbucket Pipelines is a solid CI/CD offering; our biggest challenge when getting to grips with it was fine-tuning the memory properties of the build environments. By default, Pipelines has 4 GB of RAM available to each step, of which 3 GB is available to use (1024 MB is taken by the ‘Build’ environment). It’s worth noting that ‘Services’ by default get allocated 1 GB each and are limited to 5 attached services. We think this is fine for most scenarios. However, we found we had to limit some of the services we use to a smaller memory allocation than the default. This is, fortunately, easily done by adding a memory key to the service definition:
definitions:
  services:
    redis:
      image: redis:6
      memory: 256 # Adjust the memory from the default 1024 MB
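As a sketch of how this helps in practice, several trimmed-down services can then fit together within the 3 GB budget of a standard step; the second service and its memory figure here are hypothetical:

definitions:
  services:
    redis:
      image: redis:6
      memory: 256 # Trimmed down from the default 1024 MB
    postgres:
      image: postgres:13
      memory: 1024 # Hypothetical second service
      variables:
        POSTGRES_PASSWORD: example # Hypothetical; use a secured variable in practice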
If you find that you just cannot juggle the memory requirements for your Pipeline into the 4 GB environment, you can specify a double-memory environment with 8 GB (7 GB available to use). This is simply achieved by adding a size key to the specific build step; note, however, that this build step will consume double the build minutes as it runs:
- step:
    name: Build
    size: 2x # Double the build step size allocation
    image: redis
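For completeness, a sketch of how the 2x step might look in a full pipeline; with the doubled allocation you also have headroom to give a service more than its default 1 GB (the 2048 figure here is purely illustrative):

definitions:
  services:
    redis:
      image: redis:6
      memory: 2048 # Illustrative: a 2x step leaves headroom for larger services

pipelines:
  default:
    - step:
        name: Build
        size: 2x # Double the build step size allocation
        image: redis
        script:
          - redis-cli -h localhost ping
        services:
          - redis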
Whilst the Pipeline Validator Tool will tell you if your configuration is trying to over-allocate memory, it doesn’t quite catch everything. A Pipeline run can still end in failure if a build step tries to consume too much memory at runtime. This can be frustrating, as it could be, as it has been in various of our projects, the final step of a multi-step pipeline, resulting in the not overly helpful:
Container ‘<container name>’ exceeded memory limit.
At this point, you may have to adjust your memory allocation in your bitbucket-pipelines.yml file and re-run your pipeline. Pay close attention to any log output produced to give you clues as to which way you need to adjust.
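If the logs don’t give enough away, one approach we can suggest as a sketch is to log memory usage in the background during the step; this assumes the build image includes the free utility, and run-tests.sh is a hypothetical stand-in for your real build command:

- step:
    name: Build
    script:
      # Print memory usage every 30 seconds in the background so the
      # step log shows consumption leading up to any failure
      - while true; do date && free -m; sleep 30; done &
      - ./run-tests.sh # Hypothetical build/test command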
Conclusion
Once you get the hang of it, using Pipelines becomes intuitive, building on your existing DevOps knowledge. It feels ‘right’ to have CI/CD alongside the rest of your codebase, knowing that your tests, code coverage and deployments run regularly. This, in turn, gives rise to greater team confidence in the code and its underlying quality every time changes occur.