Docker: Copying files from Docker container to host
I'm thinking of using Docker to build my dependencies on a Continuous Integration (CI) server, so that I don't have to install all the runtimes and libraries on the agents themselves.
To achieve this I would need to copy the build artifacts that are built inside the container back into the host. Is that possible?
Solution 1:[1]
In order to copy a file from a container to the host, you can use the command
docker cp <containerId>:/file/path/within/container /host/path/target
Here's an example:
$ sudo docker cp goofy_roentgen:/out_read.jpg .
Here goofy_roentgen is the container name I got from the following command:
$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                            NAMES
1b4ad9311e93        bamos/openface      "/bin/bash"         33 minutes ago      Up 33 minutes       0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   goofy_roentgen
You can also use (part of) the Container ID. The following command is equivalent to the first:
$ sudo docker cp 1b4a:/out_read.jpg .
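docker cp also copies directories recursively, so the same approach works for an entire output folder. A small sketch, assuming /out is a directory inside the container:
$ sudo docker cp goofy_roentgen:/out ./out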
Solution 2:[2]
You do not need to use docker run. You can do it with docker create.
From the docs:
The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT. This is similar to docker run -d except the container is never started.
So, you can do:
docker create -ti --name dummy IMAGE_NAME bash
docker cp dummy:/path/to/file /dest/to/file
docker rm -f dummy
Here, you never start the container. That looked beneficial to me.
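If you'd rather not hard-code a container name, the same idea works by capturing the ID that docker create prints (a sketch; IMAGE_NAME and the paths are placeholders as above):
id=$(docker create IMAGE_NAME)
docker cp "$id":/path/to/file /dest/to/file
docker rm "$id"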
Solution 3:[3]
Mount a "volume" and copy the artifacts into there:
mkdir artifacts
docker run -i -v ${PWD}/artifacts:/artifacts ubuntu:14.04 sh << COMMANDS
# ... build software here ...
cp <artifact> /artifacts
# ... copy more artifacts into `/artifacts` ...
COMMANDS
Then, when the build finishes and the container is no longer running, the build artifacts have already been copied into the artifacts directory on the host.
Edit
Caveat: When you do this, you may run into problems with the user ID inside the container not matching the user ID of the current user on the host. That is, the files in /artifacts will show up as owned by the UID of the user used inside the Docker container. A way around this may be to use the calling user's UID:
docker run -i -v ${PWD}:/working_dir -w /working_dir -u $(id -u) \
ubuntu:14.04 sh << COMMANDS
# Since $(id -u) owns /working_dir, you should be okay running commands here
# and having them work. Then copy stuff into /working_dir/artifacts .
COMMANDS
Solution 4:[4]
TL;DR
$ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
chown $(id -u):$(id -g) my-artifact.tar.xz
cp -a my-artifact.tar.xz /host-volume
EOF
Description
docker run with a host volume, chown the artifact, cp the artifact to the host volume:
$ docker build -t my-image - <<EOF
> FROM busybox
> WORKDIR /workdir
> RUN touch foo.txt bar.txt qux.txt
> EOF
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : WORKDIR /workdir
---> Using cache
---> 36151d97f2c9
Step 3/3 : RUN touch foo.txt bar.txt qux.txt
---> Running in a657ed4f5cab
---> 4dd197569e44
Removing intermediate container a657ed4f5cab
Successfully built 4dd197569e44
$ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
chown -v $(id -u):$(id -g) *.txt
cp -va *.txt /host-volume
EOF
changed ownership of '/host-volume/bar.txt' to 10335:11111
changed ownership of '/host-volume/qux.txt' to 10335:11111
changed ownership of '/host-volume/foo.txt' to 10335:11111
'bar.txt' -> '/host-volume/bar.txt'
'foo.txt' -> '/host-volume/foo.txt'
'qux.txt' -> '/host-volume/qux.txt'
$ ls -n
total 0
-rw-r--r-- 1 10335 11111 0 May 7 18:22 bar.txt
-rw-r--r-- 1 10335 11111 0 May 7 18:22 foo.txt
-rw-r--r-- 1 10335 11111 0 May 7 18:22 qux.txt
This trick works because the chown invocation within the heredoc takes the $(id -u):$(id -g) values from outside the running container; i.e., from the Docker host.
The benefits are:
- you don't have to docker container run --name or docker container create --name before
- you don't have to docker container rm after
Solution 5:[5]
docker cp containerId:source_path destination_path
The containerId can be obtained from the command docker ps -a.
The source path should be absolute. For example, if the application/service directory starts at /app in your Docker container, the path would be /app/some_directory/file.
Example: docker cp d86844abc129:/app/server/output/server-test.png C:/Users/someone/Desktop/output
Solution 6:[6]
Mount a volume, copy the artifacts, adjust owner id and group id:
mkdir artifacts
docker run -i --rm -v ${PWD}/artifacts:/mnt/artifacts centos:6 /bin/bash << COMMANDS
ls -la > /mnt/artifacts/ls.txt
echo Changing owner from \$(id -u):\$(id -g) to $(id -u):$(id -g)
chown -R $(id -u):$(id -g) /mnt/artifacts
COMMANDS
EDIT: Note that some of the commands, like \$(id -u), are backslash-escaped and will therefore be processed within the container, while the ones that are not escaped are processed by the shell running on the host machine before the commands are sent to the container.
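As a minimal illustration of that escaping difference (a sketch, reusing the centos:6 image and an unquoted heredoc as above):
docker run -i --rm centos:6 /bin/bash << COMMANDS
echo "host UID: $(id -u)"        # expanded by the host shell before the heredoc is sent
echo "container UID: \$(id -u)"  # backslashed, so expanded inside the container (typically 0 for root)
COMMANDS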
Solution 7:[7]
Most of the answers do not indicate that the container must exist before docker cp will work:
docker build -t IMAGE_TAG .
docker run -d IMAGE_TAG
CONTAINER_ID=$(docker ps -alq)
# If you do not know the exact file name, you'll need to run "ls"
# FILE=$(docker exec $CONTAINER_ID sh -c "ls /path/*.zip")
docker cp $CONTAINER_ID:/path/to/file .
docker stop $CONTAINER_ID
Solution 8:[8]
For Windows:
From Docker container to local machine:
$ docker cp containerId:/sourceFilePath/someFile.txt C:/localMachineDestinationFolder
From local machine to Docker container:
$ docker cp C:/localMachineSourceFolder/someFile.txt containerId:/containerDestinationFolder
Solution 9:[9]
If you don't have a running container, just an image, and assuming you want to copy just a text file, you could do something like this:
docker run the-image cat path/to/container/file.txt > path/to/host/file.txt
Solution 10:[10]
With the release of Docker 19.03, you can skip creating the container and even building an image. There's an option with BuildKit based builds to change the output destination. You can use this to write the results of the build to your local directory rather than into an image. E.g. here's a build of a go binary:
$ ls
Dockerfile go.mod main.go
$ cat Dockerfile
FROM golang:1.12-alpine as dev
RUN apk add --no-cache git ca-certificates
RUN adduser -D appuser
WORKDIR /src
COPY . /src/
CMD CGO_ENABLED=0 go build -o app . && ./app
FROM dev as build
RUN CGO_ENABLED=0 go build -o app .
USER appuser
CMD [ "./app" ]
FROM scratch as release
COPY --from=build /etc/passwd /etc/group /etc/
COPY --from=build /src/app /app
USER appuser
CMD [ "/app" ]
FROM scratch as artifact
COPY --from=build /src/app /app
FROM release
From the above Dockerfile, I'm building the artifact stage that only includes the files I want to export. And the newly introduced --output flag lets me write those to a local directory instead of an image. This needs to be performed with the BuildKit engine that ships with 19.03:
$ DOCKER_BUILDKIT=1 docker build --target artifact --output type=local,dest=. .
[+] Building 43.5s (12/12) FINISHED
=> [internal] load build definition from Dockerfile 0.7s
=> => transferring dockerfile: 572B 0.0s
=> [internal] load .dockerignore 0.5s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/golang:1.12-alpine 0.9s
=> [dev 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 22.5s
=> => resolve docker.io/library/golang:1.12-alpine@sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 0.0s
=> => sha256:1ec62c064901392a6722bb47a377c01a381f4482b1ce094b6d28682b6b6279fd 155B / 155B 0.3s
=> => sha256:50deab916cce57a792cd88af3479d127a9ec571692a1a9c22109532c0d0499a0 1.65kB / 1.65kB 0.0s
=> => sha256:2ecd820bec717ec5a8cdc2a1ae04887ed9b46c996f515abc481cac43a12628da 1.36kB / 1.36kB 0.0s
=> => sha256:6a17089e5a3afc489e5b6c118cd46eda66b2d5361f309d8d4b0dcac268a47b13 3.81kB / 3.81kB 0.0s
=> => sha256:89d9c30c1d48bac627e5c6cb0d1ed1eec28e7dbdfbcc04712e4c79c0f83faf17 2.79MB / 2.79MB 0.6s
=> => sha256:8ef94372a977c02d425f12c8cbda5416e372b7a869a6c2b20342c589dba3eae5 301.72kB / 301.72kB 0.4s
=> => sha256:025f14a3d97f92c07a07446e7ea8933b86068d00da9e252cf3277e9347b6fe69 125.33MB / 125.33MB 13.7s
=> => sha256:7047deb9704134ff71c99791be3f6474bb45bc3971dde9257ef9186d7cb156db 125B / 125B 0.8s
=> => extracting sha256:89d9c30c1d48bac627e5c6cb0d1ed1eec28e7dbdfbcc04712e4c79c0f83faf17 0.2s
=> => extracting sha256:8ef94372a977c02d425f12c8cbda5416e372b7a869a6c2b20342c589dba3eae5 0.1s
=> => extracting sha256:1ec62c064901392a6722bb47a377c01a381f4482b1ce094b6d28682b6b6279fd 0.0s
=> => extracting sha256:025f14a3d97f92c07a07446e7ea8933b86068d00da9e252cf3277e9347b6fe69 5.2s
=> => extracting sha256:7047deb9704134ff71c99791be3f6474bb45bc3971dde9257ef9186d7cb156db 0.0s
=> [internal] load build context 0.3s
=> => transferring context: 2.11kB 0.0s
=> [dev 2/5] RUN apk add --no-cache git ca-certificates 3.8s
=> [dev 3/5] RUN adduser -D appuser 1.7s
=> [dev 4/5] WORKDIR /src 0.5s
=> [dev 5/5] COPY . /src/ 0.4s
=> [build 1/1] RUN CGO_ENABLED=0 go build -o app . 11.6s
=> [artifact 1/1] COPY --from=build /src/app /app 0.5s
=> exporting to client 0.1s
=> => copying files 10.00MB 0.1s
After the build was complete, the app binary was exported:
$ ls
Dockerfile app go.mod main.go
$ ./app
Ready to receive requests on port 8080
Docker has other options to the --output flag documented in their upstream BuildKit repo: https://github.com/moby/buildkit#output
Solution 11:[11]
I am posting this for anyone that is using Docker for Mac. This is what worked for me:
$ mkdir mybackup # local directory on Mac
$ docker run --rm --volumes-from <containerid> \
-v `pwd`/mybackup:/backup \
busybox \
cp /data/mydata.txt /backup
Note that when I mount using -v, that backup directory is automatically created.
I hope this is useful to someone someday. :)
Solution 12:[12]
docker run -dit --rm IMAGE
docker cp CONTAINER:SRC_PATH DEST_PATH
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/commandline/cp/
Solution 13:[13]
I used PowerShell (Admin) with this command.
docker cp {container id}:{container path}/error.html C:\\error.html
Example
docker cp ff3a6608467d:/var/www/app/error.html C:\\error.html
Solution 14:[14]
Another good option is to first build the container and then run it using the -c flag with the shell interpreter to execute some commands:
docker run --rm -i -v <host_path>:<container_path> <mydockerimage> /bin/sh -c "cp -r /tmp/homework/* <container_path>"
The above command does this:
-i = runs the container in interactive mode
--rm = removes the container after execution
-v = shares a folder as a volume from your host path to the container path
Finally, /bin/sh -c lets you pass a command as a parameter, and that command copies your homework files into the mounted container path (and therefore onto the host).
I hope this additional answer helps.
Solution 15:[15]
sudo docker cp <running_container_id>:<full_file_path_in_container> <path_on_local_machine>
Example :
sudo docker cp d8a17dfc455f:/tests/reports /home/acbcb/Documents/abc
Solution 16:[16]
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
to copy from the container to the host machine.
e.g. docker cp test:/opt/file1 /etc/
And vice versa:
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
to copy from host machine to container.
Solution 17:[17]
If you just want to pull a file from an image (instead of a running container) you can do this:
docker run --rm <image> cat <source> > <local_dest>
This will bring up the container, write the new file, then remove the container. One drawback, however, is that the file permissions and modified date will not be preserved.
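If preserving permissions and timestamps matters, one workaround (a sketch not taken from the original answer, assuming tar is available in the image and that /local/dest already exists) is to stream a tar archive out of the container instead of using cat:
docker run --rm <image> tar -cf - -C /path/to/dir file.txt | tar -xf - -C /local/dest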
Solution 18:[18]
As a more general solution, there's a CloudBees plugin for Jenkins to build inside a Docker container. You can select an image to use from a Docker registry or define a Dockerfile to build and use.
It'll mount the workspace into the container as a volume (with appropriate user), set it as your working directory, do whatever commands you request (inside the container). You can also use the docker-workflow plugin (if you prefer code over UI) to do this, with the image.inside() {} command.
Basically all of this, baked into your CI/CD server and then some.
Solution 19:[19]
For anyone trying to do this with a MySQL container while storing the volumes locally on your machine: I used the syntax provided in the top-rated reply to this question, but had to use a path that's specific to MySQL:
docker cp containerIdHere:/var/lib/mysql pathToYourLocalMachineHere
Solution 20:[20]
Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files:
docker run -d -v /path/to/Local_host_dir:/path/to/docker_dir docker_image:tag
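Once the container is running with that mount, anything it writes under the mounted directory shows up on the host. A hedged example (the container name and artifact path are hypothetical):
docker exec <container_name> cp /app/build/output.jar /path/to/docker_dir/
ls /path/to/Local_host_dir   # output.jar now appears here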
Solution 21:[21]
This can also be done with the SDK, for example in Python. If you already have a container built, you can look up its name via the console (docker ps -a); the name seems to be some concatenation of a scientist and an adjective (e.g. "relaxed_pasteur").
Check out help(container.get_archive):
Help on method get_archive in module docker.models.containers:
get_archive(path, chunk_size=2097152) method of docker.models.containers.Container instance
Retrieve a file or folder from the container in the form of a tar
archive.
Args:
path (str): Path to the file or folder to retrieve
chunk_size (int): The number of bytes returned by each iteration
of the generator. If ``None``, data will be streamed as it is
received. Default: 2 MB
Returns:
(tuple): First element is a raw tar data stream. Second element is
a dict containing ``stat`` information on the specified ``path``.
Raises:
:py:class:`docker.errors.APIError`
If the server returns an error.
Example:
>>> f = open('./sh_bin.tar', 'wb')
>>> bits, stat = container.get_archive('/bin/sh')
>>> print(stat)
{'name': 'sh', 'size': 1075464, 'mode': 493,
'mtime': '2018-10-01T15:37:48-07:00', 'linkTarget': ''}
>>> for chunk in bits:
... f.write(chunk)
>>> f.close()
So then, something like this will pull the specified path (/output) out of the container to your host machine and unpack the tar.
import docker
import os
import tarfile
# Docker client
client = docker.from_env()
#container object
container = client.containers.get("relaxed_pasteur")
#setup tar to write bits to
f = open(os.path.join(os.getcwd(),"output.tar"),"wb")
#get the bits
bits, stat = container.get_archive('/output')
#write the bits
for chunk in bits:
f.write(chunk)
f.close()
#unpack
tar = tarfile.open("output.tar")
tar.extractall()
tar.close()
Solution 22:[22]
If you use podman/buildah [1], it offers greater flexibility for copying files from a container to the host because it allows you to mount the container.
After you create the container as in this answer:
podman create --name dummy IMAGE_NAME
Now we can mount the entire container, and then we use the cp utility found on almost every Linux box to copy the contents of /etc/foobar from the container (dummy) into /tmp on our host machine. All this can be done rootless. Observe:
$ podman unshare -- bash -c '
mnt=$(podman mount dummy)
cp -R ${mnt}/etc/foobar /tmp
podman umount dummy
'
[1] podman uses buildah internally, and they also share almost the same API.
Solution 23:[23]
If you only need a small file, you can use this approach.
Start the container with the port published:
docker run -it -p 4122:4122 <image>
Inside the Docker container:
nc -l -p 4122 < Output.txt
On the host machine:
nc 127.0.0.1 4122 > Output.txt
Solution 24:[24]
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
The DEST_PATH must already exist.
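For example, you can create the destination directory inside the container first and then copy into it (a sketch; mycontainer and the paths are hypothetical):
docker exec mycontainer mkdir -p /data/incoming
docker cp ./report.csv mycontainer:/data/incoming/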
Solution 25:[25]
The easiest way is to just create a container, get the ID, and then copy from there:
IMAGE_TAG=my-image-tag
container=$(docker create ${IMAGE_TAG})
docker cp ${container}:/src-path ./dst-path/
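To avoid leaving the temporary container behind, you can remove it once the copy is done:
docker rm ${container}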
Solution 26:[26]
Create a path where you want the file to be copied to, and then use:
docker run -d -v <host_path>:<container_path> <image>
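For instance, a minimal sketch (the directory name, mount point, and image are placeholders):
mkdir -p ./copied
docker run -d -v "$(pwd)/copied:/data" <image>
# anything the container writes to /data now appears in ./copied on the host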
Solution 27:[27]
You can use bind
instead of volume
if you want to mount only one folder, not create special storage for a container:
Build your image with tag :
docker build . -t <image>
Run your image and bind current $(pwd) directory where app.py stores and map it to /root/example/ inside your container.
docker run --mount type=bind,source="$(pwd)",target=/root/example/ <image> python app.py
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow