Inject Google Artifact Registry credentials to Docker build

We have a Google Artifact Registry repository for our Python packages. Authentication goes through pip's keyring support with the keyrings.google-artifactregistry-auth plugin, and locally this works well.
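For reference, the local setup is roughly the following (the region, project, and repository names are placeholders):

pip install keyring keyrings.google-artifactregistry-auth
gcloud auth application-default login

# ~/.config/pip/pip.conf
[global]
extra-index-url = https://europe-west1-python.pkg.dev/my-project/my-repo/simple/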

However, how do I pass credentials to Docker build when I want to build a Docker image that needs to install a package from our private registry?

I'd like to keep the Dockerfile the same when building with a user account or with a service account.

This works, but I'm not sure it's best practice:

# syntax=docker/dockerfile:1
FROM python:3.9

# keyring plus the Artifact Registry plugin let pip authenticate with Google credentials
RUN pip install keyring keyrings.google-artifactregistry-auth

COPY requirements.txt .

# Mount the credentials only for this RUN step, so they never end up in an image layer
RUN --mount=type=secret,id=creds,target=/root/.config/gcloud/application_default_credentials.json \
    pip install -r requirements.txt
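For pip to resolve packages from the private registry, requirements.txt (or pip's configuration) presumably points at it, along these lines (the registry URL and package name here are placeholders):

--extra-index-url https://europe-west1-python.pkg.dev/my-project/my-repo/simple/
my-private-package==1.2.3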

Then build with BuildKit enabled (the default since Docker Engine 23.0), since --secret requires it:

docker build --secret="id=creds,src=$HOME/.config/gcloud/application_default_credentials.json" .
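Because the secret is just a file passed in from the host, the same Dockerfile should also work unchanged with a service account: point src at the service account's credentials file instead (the path below is only an example):

docker build --secret="id=creds,src=/path/to/service-account-key.json" .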


Solution 1:[1]

Using keyring is great when working locally, but in my opinion it's not the best solution for a Dockerfile. This is because your only options are to mount volumes at build time (which I feel is messy) or to copy your credentials into the image (which I feel is insecure).

Instead, I got this working by doing the following in Dockerfile:

FROM python:3.10

# The registry URL, with an OAuth token embedded, is supplied at build time
ARG AUTHED_ARTIFACT_REG_URL
COPY ./requirements.txt /requirements.txt

RUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt

Then, to build your image you can run:

docker build --build-arg AUTHED_ARTIFACT_REG_URL=https://oauth2accesstoken:$(gcloud auth print-access-token)@url-for-artifact-registry .

Although it doesn't seem to be in the official docs for Artifact Registry, this works as an alternative to using keyring. Note that the token generated by gcloud auth print-access-token is valid for 1 hour.
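This approach also covers the service-account case without touching the Dockerfile, since gcloud mints the token for whichever account is currently active; a sketch (the key file path and registry URL are placeholders):

# e.g. in CI, activate the service account first
gcloud auth activate-service-account --key-file=/path/to/key.json

docker build --build-arg AUTHED_ARTIFACT_REG_URL=https://oauth2accesstoken:$(gcloud auth print-access-token)@europe-west1-python.pkg.dev/my-project/my-repo/simple/ .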

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution by LondonAppDev