Bazel and py_test in sandbox - any way to define outputs?
I'm running multiple py_test() configurations on a number of projects. Since there are plenty of them, the default sandboxing mechanism seems convenient - tests don't interfere with one another, and they run in parallel for free.
This comes with a cost, though: to my understanding, sandboxing causes Bazel to run the tests in temporary directories. Combined with the py_test rule not defining any outs parameter (https://docs.bazel.build/versions/master/be/python.html), this likely means no generated file will be preserved after the test.
What I want to achieve are two things:
- Preserve the output of each test, split by test case. I think I can get this to work using `capsys` and explicitly writing to a file named after the test (see the sketch after this list). The problem is that the file will end up in a sandboxed directory, e.g. /home/user/.cache/bazel/_bazel_user/12342134213421342134/sandbox/linux-sandbox/1/execroot/project_name/bazel-out/k8-fastbuild/bin/test_suite.runfiles/, and will be removed afterwards.
- Get a test run summary in XML format. Bazel itself generates an XML file in JUnit format, which would be fine, but unfortunately it does not work properly (https://github.com/bazelbuild/bazel/issues/3413). The easiest solution would be to pass pytest the `--junitxml=path` parameter (https://docs.pytest.org/en/latest/usage.html#creating-junitxml-format-files), which works, but again generates the file in the temporary, sandboxed directory.
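A minimal sketch of the `capsys` idea from the first point; the test name and the target directory here are hypothetical stand-ins, not part of the original question:

```python
import pathlib

# Hypothetical example: mirror whatever the test printed into a log file
# named after the test, using pytest's built-in capsys fixture.
def test_example(capsys, tmp_path: pathlib.Path):
    print("diagnostic output worth preserving")
    captured = capsys.readouterr()
    (tmp_path / "test_example.log").write_text(captured.out)
```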
Other rules in Bazel define outs as the files they will generate; e.g., genrule (https://docs.bazel.build/versions/master/be/make-variables.html#predefined_genrule_variables) has an outs parameter.
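For comparison, a minimal (hypothetical) genrule in a BUILD file, declaring its output up front:

```python
# BUILD file: genrule declares its outputs via `outs`, so Bazel knows to
# preserve them after the action runs; py_test has no equivalent attribute.
genrule(
    name = "gen_version",
    outs = ["version.txt"],
    cmd = "echo '1.0.0' > $@",
)
```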
So the question boils down to this: is there any way in Bazel to reuse (or wrap) the py_test rule and define some output files it will generate?
Solution 1:[1]
After a bit of experimenting with Bazel, I came to the conclusion that there's no easy way to extend py_test to add outputs to it. It would also be rather difficult to create my own rule from scratch.
However, it turns out all tests in Bazel define some environment variables that can be used. In fact, another, similar question solved this exact problem using them: bazel - writable archivable path for test runtime
In my tests I run pytest from within Python, so it is easy to programmatically extend the startup arguments:
```python
import logging
import os

logger = logging.getLogger(__name__)


def _get_log_file_args():
    # Prepare the path to the log file, based on environment
    # variables defined by Bazel.
    #
    # As per the docs, tests should not rely on these variables
    # being defined, so the framework will omit logging to a file
    # if the necessary variable is undefined.
    # See: https://docs.bazel.build/versions/master/test-encyclopedia.html#initial-conditions
    LOG_DIR_ENV_VARIABLE = "TEST_UNDECLARED_OUTPUTS_DIR"
    log_dir = os.environ.get(LOG_DIR_ENV_VARIABLE)
    if log_dir:
        file_log_path = os.path.join(log_dir, "test_output.log")
        return [f"--log-file={file_log_path}"]
    logger.warning(f"Environment variable '{LOG_DIR_ENV_VARIABLE}' used as the logging directory is not set. "
                   "Logging to file will be disabled.")
    return []  # no file logging
```
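A sketch of how these arguments might be wired into a pytest entry point; the wrapper itself is an assumption, only `_get_log_file_args` comes from the snippet above:

```python
import sys

import pytest

if __name__ == "__main__":
    # Forward Bazel's command-line arguments to pytest, plus the
    # --log-file argument when Bazel provides an output directory.
    sys.exit(pytest.main(sys.argv[1:] + _get_log_file_args()))
```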
Then it's a matter of processing the final .zip archive at ./bazel-out/darwin-fastbuild/testlogs/<package-name>/<target-name>/test.outputs/outputs.zip
(according to the linked question).
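For instance, a minimal sketch of that post-processing step; the package and target names are placeholders:

```python
import zipfile

# Hypothetical: unpack the undeclared outputs Bazel archived after the
# run, so the per-test log files land in a stable, inspectable directory.
archive = "bazel-out/darwin-fastbuild/testlogs/my_package/my_test/test.outputs/outputs.zip"
with zipfile.ZipFile(archive) as zf:
    zf.extractall("collected_test_outputs")
```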
Solution 2:[2]
Just write the XML output to the location specified in the XML_OUTPUT_FILE
environment variable. The file won't be zipped in this case.
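A minimal sketch of that, assuming a pytest-based wrapper like the one in Solution 1:

```python
import os
import sys

import pytest

if __name__ == "__main__":
    args = sys.argv[1:]
    # If Bazel provides XML_OUTPUT_FILE, point pytest's JUnit report there;
    # Bazel then picks it up directly as the test's XML result, unzipped.
    xml_path = os.environ.get("XML_OUTPUT_FILE")
    if xml_path:
        args.append(f"--junitxml={xml_path}")
    sys.exit(pytest.main(args))
```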
From the source code here:
https://github.com/bazelbuild/bazel/blob/master/tools/test/test-setup.sh#L188
```sh
if [ -n "${XML_OUTPUT_FILE-}" -a ! -f "${XML_OUTPUT_FILE-}" ]; then
  # Create a default XML output file if the test runner hasn't generated it
```
I cannot hate the Bazel documentation enough; it's always driving me crazy.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | hauron |
| Solution 2 | Jeremy Caney |