How to minimize duplication in the Bitbucket pipeline?

See the sample script below. It somewhat resembles what our pipeline looks like, though obviously simplified.

  steps:
    - step: &test-sonar
        name: test and analyze on SonarCloud
        script:
          - {some command}
          - {some command}
          - {some command}
          - {some command}
          - if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          - pip install pytest
          - pytest --cov=tests/ --cov-report xml:coverage-reports/coverage-report.xml --junitxml=test-reports/report.xml
          - {some command}
          - {some command}
          - {some command}
          - {some command}
          - {some command}
          - {some command}
          - {some command}
          - pipe: sonarsource/sonarcloud-scan:1.4.0
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.6

This script is what we run whenever we merge into the master branch.

A second pipeline would mostly run the same set of scripts, but the flags in the pytest command are slightly different.

And again, for a scheduled pipeline the scripts would be mostly the same, with some slight changes to the flags of the pytest command.

I wouldn't want to repeat the same script three times, and I'm not sure how to make this a bit more reusable.

The only thing I can think of is using Bitbucket variables to change how pytest is executed depending on the type of pipeline, but I'm still wrapping my head around that as well.
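For context, a minimal sketch of that variable-based idea. It assumes a shared step anchor and a hypothetical `PYTEST_FLAGS` variable (not from the original pipeline); the script lines run in a shell, so `${VAR:-default}` expansion can supply the master-branch flags when the variable is unset:

```yaml
definitions:
  steps:
    - step: &test-sonar
        name: Test and analyze on SonarCloud
        script:
          - if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
          - pip install pytest
          # PYTEST_FLAGS is a hypothetical variable; falls back to the master-branch flags
          - pytest ${PYTEST_FLAGS:---cov=tests/ --cov-report xml:coverage-reports/coverage-report.xml --junitxml=test-reports/report.xml}
          - pipe: sonarsource/sonarcloud-scan:1.4.0

pipelines:
  branches:
    master:
      - step: *test-sonar
  custom:
    scheduled-run:
      - variables:
          - name: PYTEST_FLAGS
      - step: *test-sonar
```

This is only a sketch: the step name, variable name, and custom-pipeline name are illustrative, and the scheduled trigger would need to be pointed at the custom pipeline in the repository settings.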



Solution 1:[1]

Some of my pipelines share a common initialization that I chain into a single instruction with the `&&` operator and store in a YAML anchor, like so:

definitions:

  yaml-anchors:

    - &common-init-script >-
        command1
        && command2
        && command3

    - &aaa-step
        script:
          - *common-init-script
          - some-command

    - &bbb-step
        script:
          - *common-init-script
          - some-different-command

pipelines:
  ...
    my-pipeline:
      - step: *aaa-step
      - step: *bbb-step

Hopefully, the `&&` operator will short-circuit if anything fails (granted that all your commands exit with a non-zero status code on failure). The drawback is that the command chain is reckoned as a single instruction by Bitbucket, so you lose per-instruction time measures, and the console output is concatenated, making it sometimes hard to tell where each command's output starts or ends.
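As a variant of the same idea, a literal block scalar with `set -e` gives the same fail-fast behavior without the `&&` chain, at the cost of the same single-instruction drawback (the commands here are placeholders, as above):

```yaml
definitions:
  yaml-anchors:
    - &common-init-script |-
        set -e      # abort the block as soon as any command exits non-zero
        command1
        command2
        command3
```

The `|-` literal scalar preserves the newlines, so the shell receives the commands as separate lines instead of one folded `&&` expression.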


Ideally, you would store YAML lists of commands in anchors and later merge them with other inline sequences of commands, but this idea had a bad reception last time I checked (https://github.com/yaml/yaml/issues/48), the main points being:

  1. It should be up to the application (the Bitbucket pipelines parser) to decide whether lists of lists of commands should be flattened.
  2. It is unclear how to introduce this into the yaml spec without causing havoc.

I can't find the Jira issue now, but Atlassian was also reluctant to implement this on their end. So, here we are.

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 N1ngu