Possible to add kaniko to an alpine image, or add jq to the kaniko image?

This is how I'm using kaniko to build Docker images in my GitLab CI, and it's working great.

But I need to read a JSON file to get some values, so I need access to jq.
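
For example, the kind of lookup I mean looks roughly like this (the file name and key are just placeholders):

# Hypothetical example: read a single value out of a JSON file in the repo
APP_VERSION=$(jq -r '.version' app/config.json)
echo "Deploying version ${APP_VERSION}"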

.gitlab-ci.yml

deploy:
  stage: deployment
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
    - |
      /kaniko/executor \
        --context $CI_PROJECT_DIR \
        --dockerfile $CI_PROJECT_DIR/app/Dockerfile \
        --destination $CI_REGISTRY_IMAGE/app:latest
    - jq # <- Is not working, as jq is not installed

Is it possible to add jq to the image, so I don't have to install it every time this stage runs?

In all other stages I'm using my own alpine image, to which I've added everything I need in my CI pipeline. So another option would be to add kaniko to that image, if possible. That would result in a single image with all the utilities I need.

Dockerfile

FROM alpine:3.14.2

RUN apk --update add \
  bash \
  curl \
  git \
  jq \
  npm
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.4/bin/linux/amd64/kubectl
RUN chmod u+x kubectl && mv kubectl /bin/kubectl
# Add kaniko to this image??


Solution 1:[1]

The official Kaniko Docker image is built from scratch using standalone Go binaries (see the Dockerfile in Kaniko's GitHub repository). You can re-use those binaries by copying them from the official image into your own image, for example:

# Use this FROM instruction as a shortcut for the COPY --from=kaniko instructions below
# It's also possible to use COPY --from=gcr.io/kaniko-project/executor directly
FROM gcr.io/kaniko-project/executor AS kaniko

FROM alpine:3.14.2

RUN apk --update add \
  bash \
  curl \
  git \
  jq \
  npm
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.4/bin/linux/amd64/kubectl
RUN chmod u+x kubectl && mv kubectl /bin/kubectl

#
# Add kaniko to this image by re-using binaries and steps from official image
#
COPY --from=kaniko /kaniko/executor /kaniko/executor
COPY --from=kaniko /kaniko/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/docker-credential-ecr-login
COPY --from=kaniko /kaniko/docker-credential-acr /kaniko/docker-credential-acr
COPY --from=kaniko /etc/nsswitch.conf /etc/nsswitch.conf
COPY --from=kaniko /kaniko/.docker /kaniko/.docker

ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json
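
Once an image like this is built and pushed somewhere your runners can pull it, the deploy job from the question can use it directly, with jq available alongside the executor. A sketch, using a placeholder image name:

deploy:
  stage: deployment
  # placeholder name/tag for the combined image built from the Dockerfile above
  image: registry.example.com/ci/kaniko-alpine:latest
  script:
    - jq -r '.version' app/config.json   # jq is now available (the JSON path is just an example)
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/app/Dockerfile --destination $CI_REGISTRY_IMAGE/app:latest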

EDIT: for the debug image, the Dockerfile would be:

FROM gcr.io/kaniko-project/executor:debug AS kaniko

FROM alpine:3.14.2

RUN apk --update add \
  bash \
  curl \
  git \
  jq \
  npm
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.4/bin/linux/amd64/kubectl
RUN chmod u+x kubectl && mv kubectl /bin/kubectl

#
# Add kaniko to this image by re-using binaries and steps from official image
#
COPY --from=kaniko /kaniko/ /kaniko/
COPY --from=kaniko /kaniko/warmer /kaniko/warmer
COPY --from=kaniko /kaniko/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/docker-credential-ecr-login
COPY --from=kaniko /kaniko/docker-credential-acr /kaniko/docker-credential-acr
COPY --from=kaniko /kaniko/.docker /kaniko/.docker
COPY --from=busybox:1.32.0 /bin /busybox

ENV PATH $PATH:/usr/local/bin:/kaniko:/busybox
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json

Note that you need to use gcr.io/kaniko-project/executor:debug (for the latest version) or gcr.io/kaniko-project/executor:v1.6.0-debug (or another tag) as the source image.


I tested this by building a small image, and it seems to work fine:

# Built above example with docker build . -t kaniko-alpine
# And ran container with docker run -it kaniko-alpine sh
echo "FROM alpine" > Dockerfile
echo "RUN echo hello" >> Dockerfile
echo "COPY Dockerfile Dockerfile" >> Dockerfile

executor version
executor -c . --no-push

# Output like:
#
# Kaniko version :  v1.6.0
#
# INFO[0000] Retrieving image manifest alpine             
# INFO[0000] Retrieving image alpine from registry index.docker.io 
# INFO[0000] GET KEYCHAIN                                 
# [...] 
# INFO[0001] RUN echo hello                               
# INFO[0001] Taking snapshot of full filesystem...        
# INFO[0001] cmd: /bin/sh                                 
# INFO[0001] args: [-c echo hello]                        
# INFO[0001] Running: [/bin/sh -c echo hello]             
# [...]

Note that using Kaniko binaries outside of their official image is not recommended, even though it may still work fine:

kaniko is meant to be run as an image: gcr.io/kaniko-project/executor. We do not recommend running the kaniko executor binary in another image, as it might not work.

Solution 2:[2]

I had the same need, as the image was to be used as the basis for a job in a GitLab CI pipeline. I had to make some small modifications to make it work. If it helps, here is my version (no need for kubectl in my case; I just needed to run kaniko and vault in the same container):

FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM alpine:3.14.2

RUN apk --update add jq vault libcap
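# The Alpine vault package sets the cap_ipc_lock file capability on the binary; clearing it lets vault start in containers that don't grant CAP_IPC_LOCK (memory locking is then simply disabled)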
RUN setcap cap_ipc_lock= /usr/sbin/vault

COPY --from=kaniko /kaniko/ /kaniko/

ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json
ENV SSL_CERT_DIR /kaniko/ssl/certs

#ENTRYPOINT ["/kaniko/executor"]
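
A quick local check that the tools are all present in the resulting image (kaniko-vault is just a placeholder tag):

docker build -t kaniko-vault .
docker run --rm kaniko-vault sh -c 'jq --version && vault version && executor version'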

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1
[2] Solution 2