Cache "go get" in docker build
I want to encapsulate my golang unit testing in a docker-compose script because it depends on several external services. My application has quite a lot of dependencies, so it takes a while to go get them all.
How can I cache packages in a way that allows the docker container to build without having to download all dependencies every time I want to test?
My Dockerfile:
FROM golang:1.7
CMD ["go", "test", "-v"]
RUN mkdir -p /go/src/app
WORKDIR /go/src/app
COPY . /go/src/app
RUN go-wrapper download
RUN go-wrapper install
Every time I want to run my unit tests I run docker-compose up --build backend-test with the following compose file:
version: '2'
services:
  ...
  backend-test:
    build:
      context: .
      dockerfile: Dockerfile
    image: backend-test
    depends_on:
      ...
But now go-wrapper download is called each time I want to run the tests, and it takes a long time to complete, since COPY . /go/src/app comes first and invalidates the cached download layer on any source change.
Solutions? Thanks in advance!
Solution 1:[1]
Personally I use govendor. It keeps your dependencies in a vendor dir inside your project, following the Go vendor conventions. This dir will still need to be copied into your docker image on build.
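For reference, a minimal govendor workflow looks something like this (a sketch; it assumes your project lives under $GOPATH/src, here /go/src/app):
go get -u github.com/kardianos/govendor
cd /go/src/app
govendor init            # creates vendor/vendor.json
govendor add +external   # copies external dependencies into vendor/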
But there are very good reasons not to vendor. For example, when you are building a package you should not vendor: when different packages use different versions of dependencies, things get messy. This can be remedied by only vendoring executables.
So if you have a good reason not to vendor, you can separate out a few steps. Putting them in the right order will speed things up.
You can create a shell script (get.sh) with some go get commands for your dependencies. (You could also put these directly in your Dockerfile as RUN lines, but a script keeps them in one cacheable step.)
go get github.com/golang/protobuf/proto
go get github.com/pborman/uuid
go get golang.org/x/net/context
go get golang.org/x/net/http2
go get golang.org/x/net/http2/hpack
Then in your Dockerfile you first copy and execute the shell script. Each time you update get.sh, everything after that step is rebuilt. The Dockerfile still runs go get ./... to make sure all dependencies are there, but if everything is already fetched by the get.sh script, you will get a decent speed boost.
FROM golang:1.6
RUN mkdir -p /go/src/app
# Copy and run the dependency script first so its layer stays cached
COPY get.sh /go/src/app
WORKDIR /go/src/app
RUN bash get.sh
# Source changes only invalidate the layers from here down
COPY . /go/src/app
RUN go get ./...
CMD go test -v
The general idea is to keep frequently changing content lower in your Dockerfile and fairly constant stuff at the top, even if that means adding another command or two. Docker goes line by line until it finds something that needs a rebuild, and then rebuilds every line after that too.
Solution 2:[2]
I was looking for an answer to your question, but ironically found a question I have an answer to (how to run docker tests quickly). If you really want fast tests, you should ideally avoid rebuilding the container at all when you run them. But wait, how to get the new source code onto the container? Volumes my friend, volumes. Here's how I've set this up:
docker-compose.dev.yml:
backend-test:
  volumes:
    - .:/path/to/myapp
Where /path/to/myapp is the path in the image, of course. You'll have to explicitly pass in this compose file for dev:
docker-compose -f docker-compose.dev.yml up
But now, when you run your tests, you're not going to use docker-compose anymore, you're going to use docker exec:
docker exec -it backend-test go test
If you do this right, your src dir in the backend-test container will always be up to date because it's in fact a mounted volume. Attaching to a running container and running tests should prove far faster than spinning up a new one each time.
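One caveat worth noting: docker exec takes a container name, not a compose service name, so the command above assumes the container is actually named backend-test. If it isn't, you can pin the name in the compose file (a sketch):
backend-test:
  container_name: backend-test
Otherwise, docker ps will show the generated name, typically something like projectname_backend-test_1.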
EDIT: A commenter correctly pointed out that this only avoids rebuilding the image when your dependencies haven't changed (no need to go get). The nice thing is that it not only avoids rebuilding, but also avoids restarting. When I'm testing like this and I add a dependency, I typically just go get it directly from my test console. It can be a bit tricky to get go get to work within your container, but one way is to forward your ssh agent through to your container by mounting SSH_AUTH_SOCK. Sadly, you can't mount volumes during build, so you may need to include some kind of deploy key in your image if you want your build target to be able to pull fresh dependencies before running tests. However, the main point of my answer was to separate out the build and the test, to avoid the full build until you're ready to generate the final artifact.
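For the agent forwarding, one common way to wire it up in compose looks like this (a sketch assuming a Linux host where $SSH_AUTH_SOCK points at the agent socket; the /ssh-agent path inside the container is arbitrary):
backend-test:
  volumes:
    - .:/path/to/myapp
    - ${SSH_AUTH_SOCK}:/ssh-agent
  environment:
    - SSH_AUTH_SOCK=/ssh-agent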
That said, I realize I might not be answering the question in the way that you asked it. In Ruby, the answer would be as simple as copying the Gemfile and Gemfile.lock and running bundle install --deployment before copying over the code you've changed. Personally I don't mind the cost of rebuilding when I add dependencies, since 99% of my changes still won't involve a rebuild. That said, you might look into Golang's new Bundler-inspired dependency manager, dep. With dep installed, I'm pretty sure you can just copy your Gopkg.toml and Gopkg.lock into your workdir, run dep ensure, and then copy your code. This will only pull dependencies when the Gopkg files have been updated - otherwise docker will be able to reuse the cached layer with your previous dependencies installed. Sorry for the long edit!
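That dep-based layering might look something like this (a sketch; the image tag and paths are illustrative, and dep ensure -vendor-only populates vendor/ from the lock file without needing the project source):
FROM golang:1.9
RUN go get -u github.com/golang/dep/cmd/dep
WORKDIR /go/src/app
# Copy only the manifest and lock file so this layer stays cached
COPY Gopkg.toml Gopkg.lock ./
RUN dep ensure -vendor-only
# Source changes do not invalidate the dependency layers above
COPY . .
CMD go test -v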
Solution 3:[3]
As govendor is outdated, the new approach, Go modules, is the recommended migration path.
With Go modules, adding a cache layer is as simple as the following steps:
FROM golang:1.18-buster
WORKDIR /go/app/myapp
# Copy only the module files so this layer stays cached until they change
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build .
Copying only the go.mod and go.sum files takes care of invalidating the cache of the download command only when dependencies change. By the way, CGO_ENABLED=0 produces a statically linked binary that can run on Alpine Linux; set it to 1 for dynamic glibc linking.
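If your Docker supports BuildKit, you can go a step further and keep the module cache itself between builds with a cache mount (a sketch; the --mount syntax requires BuildKit to be enabled):
# syntax=docker/dockerfile:1
FROM golang:1.18-buster
WORKDIR /go/app/myapp
COPY go.mod go.sum ./
# The cache mount persists downloaded modules across builds
RUN --mount=type=cache,target=/go/pkg/mod go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod CGO_ENABLED=0 go build .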
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | R Menke |
| Solution 2 | |
| Solution 3 | Arunas Bartisius |