docker-compose volume on node_modules but it's empty

I'm pretty new to Docker and I want to map the node_modules folder onto my computer (for debugging purposes).

This is my docker-compose.yml:

web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
  environment:
    PORT: 3000
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
db:
  image: mongo:3.3
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"

I'm using Docker for Mac. When I run docker-compose up -d everything goes fine, but it creates a node_modules folder on my computer that is empty. When I open a bash shell in the container and ls node_modules, all the packages are there.

How can I get the contents from the container onto my computer too?

Thank you



Solution 1:[1]

TL;DR Working example, clone and try: https://github.com/xbx/base-server


You need a node_modules directory on your computer (outside the image), populated before you run the container, for debugging purposes.

If you want to debug only node_modules:

volumes:
    - /path/to/node_modules:/usr/src/app/node_modules

If you want to debug both your code and node_modules:

volumes:
    - .:/usr/src/app/

Remember that you will need to run npm install at least once outside the container (or copy the node_modules directory that docker build generates). Let me know if you need more details.


Edit. To avoid needing npm on OSX, you can:

  1. docker build, and then docker cp <container-id>:/path/to/node-modules ./local-node-modules/. Then, in your docker-compose.yml, mount those files and troubleshoot whatever you want.
  2. Or, in the docker build (Dockerfile), run the npm install into another directory. Then, in your command (CMD or docker-compose command), copy (cp) it to the right directory; that directory is mounted empty from your computer (a volume in the docker-compose.yml), and then you can troubleshoot whatever you want.
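Option 1 can be sketched as a small helper script. Since running it needs a Docker daemon, the block below just writes the commands into a file; the image name myapp and the in-image path /usr/src/app are assumptions to adjust for your project.

```shell
# Write a helper script for option 1: build the image, then copy the
# image's node_modules out to the host with `docker cp`.
# The image name "myapp" and the in-image path are assumptions.
cat > copy-node-modules.sh <<'EOF'
#!/bin/sh
set -e
docker build -t myapp .
# Create (but don't start) a throwaway container to copy files out of.
cid=$(docker create myapp)
docker cp "$cid":/usr/src/app/node_modules ./local-node-modules
docker rm "$cid"
EOF
chmod +x copy-node-modules.sh
```

Running ./copy-node-modules.sh once before docker-compose up leaves a local-node-modules directory on the host you can mount or inspect.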

Edit 2. (Option 2) Working example, clone and try: https://github.com/xbx/base-server. I did it all automatically in this repo, forked from yours.

Dockerfile

FROM node:6.3

# Install app dependencies
RUN mkdir /build-dir
WORKDIR /build-dir
COPY package.json /build-dir
RUN npm install -g babel babel-runtime babel-register mocha nodemon
RUN npm install

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN ln -s /build-dir/node_modules node_modules

# Bundle app source
COPY . /usr/src/app

EXPOSE 1234
CMD [ "npm", "start" ]

docker-compose.yml

web:
  build: .
  ports:
    - "1234:1234"
  links:
    - db # link with the DB
  environment:
    PORT: 1234
  command: /command.sh
  volumes:
    - ./src/:/usr/src/app/src/
    - ./node_modules:/usr/src/app/node_modules
    - ./command.sh:/command.sh
db:
  image: mongo:3.3
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"

command.sh

#!/bin/bash

cp -r /build-dir/node_modules/ /usr/src/app/

exec npm start

Please clone my repo and run docker-compose up; it does what you want. PS: it could be improved to do the same in a better way (i.e. best practices, etc.).

I'm on OSX and it works for me.

Solution 2:[2]

First, there's an order of operations. When you build your image, volumes are not mounted; they only get mounted when you run the container. So when the build is finished, all the changes exist only inside the image, not in any volume. If you mount a volume on a directory, it overlays whatever came from the image at that location, hiding those contents from view (with one initialization exception, see below).


Next is the volume syntax:

  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules

tells docker-compose to create a host volume from the current directory to /usr/src/app inside the container, and then to map /usr/src/app/node_modules to an anonymous volume maintained by docker. The latter will appear as a volume in docker volume ls with a long uuid string that is relatively useless.

To map /usr/src/app/node_modules to a folder on your host, you'll need to include a folder name and colon in front of that like you have on the line above. E.g. /host/dir/node_modules:/usr/src/app/node_modules.

Named volumes are a bit different than host volumes in that docker maintains them with a name you can see in docker volume ls. You reference these volumes with just a name instead of a path. So node_modules:/usr/src/app/node_modules would create a volume called node_modules that you can mount in a container with just that name.

I diverged to describe named volumes because they come with a feature that turns into a gotcha with host volumes. Docker helps you out with named volumes by initializing them with the contents of the image at that location. So in the above example, if the named volume node_modules is empty (or new), it will first copy the contents of the image at /usr/src/app/node_modules to this volume and then mount it inside your container.
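A named-volume variant of the compose snippet above might look like this (a sketch in the version 2 Compose file format, where named volumes are declared at the top level; service name and paths as assumed earlier):

```yaml
version: '2'
services:
  web:
    build: .
    volumes:
      - .:/usr/src/app
      - node_modules:/usr/src/app/node_modules
volumes:
  node_modules:   # docker initializes this from the image's node_modules
```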

With host volumes, you will never see any initialization: whatever is at that location, even an empty directory, is all you see in the container. There's no way to have the contents of the image at that location copied out first to the host volume. This also means that the directory permissions needed inside the container are not inherited automatically; you need to manually set permissions on the host directory that will work inside the container.
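For example, you might pre-create the directory on the host with workable permissions before starting the container (a sketch; the path is a placeholder, and a stricter setup would chown it to the container's uid, e.g. 1000 for the node user in the official images):

```shell
# Pre-create the host-side node_modules with permissions the container
# user can use; chmod 775 is a sketch, tighten or chown as appropriate.
mkdir -p ./node_modules
chmod 775 ./node_modules
```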


Finally, there's a small gotcha with Docker for Windows and Mac: they run inside a VM, and your host volumes are mounted into the VM. To get the volume mounted on the host, you have to configure the application to share the folder on your host with the VM, and then mount the volume from the VM into the container. By default, on Mac, the /Users folder is included, but if you use other directories, e.g. a /Projects directory, or even a lower-case /users (unix and bsd are case sensitive), you won't see the contents from your Mac inside the container.


With that base knowledge covered, one possible solution is to redesign your workflow to get the directory contents from the image copied out to the host. First you need to copy the files to a different location inside your image. Then you need to copy the files from that saved image location to the volume mount location on container startup. When you do the latter, you should note that you are defeating the purpose of having a volume (persistence) and may want to consider adding some logic to be more selective about when you run the copy. To start, add an entrypoint.sh to your build that looks like:

#!/bin/sh
# copy from the image backup location to the volume mount
cp -a /usr/src/app_backup/node_modules/* /usr/src/app/node_modules/
# this next line runs the docker command
exec "$@"

Then update your Dockerfile to include the entrypoint and a backup command:

FROM node:6.3

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install -g babel babel-runtime babel-register mocha nodemon
RUN npm install

# Bundle app source
COPY . /usr/src/app
RUN cp -a /usr/src/app/. /usr/src/app_backup

EXPOSE 1234
ENTRYPOINT [ "/usr/src/app/entrypoint.sh" ]
CMD [ "npm", "start" ]

And then drop the extra volume from your docker-compose.yml:

  volumes:
    - .:/usr/src/app

Solution 3:[3]

The simplest solution

Configure the node_modules volume to use your local node_modules directory as its storage location, using Docker Compose and the local volume driver with a bind mount.

First, make sure you have a local node_modules directory, or create it, and then create a Docker volume for it in the named volumes section of your docker-compose file:

volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./local/relative/path/to/node_modules

Then, add your node_modules volume to your service:

ui:
  volumes:
    - node_modules:/container/path/to/node_modules

Just make sure you always make node_modules changes inside the Docker container (using docker-compose exec), and it will stay synchronized and available on the host for IDEs, code completion, debugging, etc.

Version control tip: when your Node package.json/package-lock.json files change, either when pulling or when switching branches, in addition to rebuilding the image you have to remove the volume and delete its contents:

docker volume rm example_node_modules
rm -rf local/relative/path/to/node_modules
mkdir local/relative/path/to/node_modules
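The three reset steps can be wrapped into a small script so none of them gets forgotten (a sketch using the placeholder volume name and path from above; running it needs a Docker daemon, so the block just writes the script to a file):

```shell
# Write a reset script for when package.json/package-lock.json change:
# remove the named volume, then recreate the empty host directory.
# "example_node_modules" and the path are the placeholders used above.
cat > reset-node-modules.sh <<'EOF'
#!/bin/sh
set -e
docker volume rm example_node_modules
rm -rf local/relative/path/to/node_modules
mkdir -p local/relative/path/to/node_modules
EOF
chmod +x reset-node-modules.sh
```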

Solution 4:[4]

I built upon @Robert's answer, as there were a couple of things it didn't take into consideration; namely:

  • cp takes too long and the user can't view the progress.
  • I want node_modules to be overwritten if it were installed through the host machine.
  • I want to be able to git pull whether the container is running or not, and have node_modules update accordingly, should there be any changes.
  • I only want this behavior during the development environment.

To tackle the first issue, I installed rsync on my image, as well as pv (because I want to view progress while deleting too). Since I'm using Alpine, I used apk add in the Dockerfile:

# Install rsync and pv to view progress of moving and deletion of node_modules onto host volume.
RUN apk add rsync && apk add pv

I then changed the entrypoint.sh to look like so (you may substitute yarn.lock with package-lock.json):

#!/bin/ash

# Declaring variables.
buildDir=/home/node/build-dir
workDir=/home/node/work-dir
package=package.json
lock=yarn.lock
nm=node_modules

#########################
# Begin Functions
#########################

copy_modules () { # Copy all files of build directory to that of the working directory.
  echo "Calculating build folder size..."
  buildFolderSize=$( du -a $buildDir/$nm | wc -l )
  echo "Copying files from build directory to working directory..."
  rsync -avI $buildDir/$nm/. $workDir/$nm/ | pv -lfpes "$buildFolderSize" > /dev/null
  echo "Creating flag to indicate $nm is in sync..."
  touch $workDir/$nm/.docked # Docked file is a flag that tells the files were copied already from the build directory.
}

delete_modules () { # Delete old module files.
    echo "Calculating incompatible $1 directory $nm folder size..."
    folderSize=$( du -a $2/$nm | wc -l )
    echo "Deleting incompatible $1 directory $nm folder..."
    rm -rfv $2/$nm/* | pv -lfpes "$folderSize" > /dev/null # Delete all files in node_modules.
    rm -rf $2/$nm/.* 2> /dev/null # Delete all hidden files in node_modules.
}

#########################
# End Functions
# Begin Script
#########################

if cmp -s $buildDir/$lock $workDir/$lock >/dev/null 2>&1 # Compare lock files.
  then
    # Delete old modules.
    delete_modules "build" "$buildDir"
    # Remove old build package.
    rm -rf $buildDir/$package 2> /dev/null
    rm -rf $buildDir/$lock 2> /dev/null
    # Copy package.json from working directory to build directory.
    rsync --info=progress2 $workDir/$package $buildDir/$package
    rsync --info=progress2 $workDir/$lock $buildDir/$lock
    cd $buildDir/ || exit
    yarn
    delete_modules "working" "$workDir"
    copy_modules

# Check if the directory is empty, as it is when it is mounted for the first time.
elif [ -z "$(ls -A $workDir/$nm)" ]
  then
    copy_modules
elif [ ! -f "$workDir/$nm/.docked" ] # Check if modules were copied from build directory.
  then
    # Delete old modules.
    delete_modules "working" "$workDir"
    # Copy modules from build directory to working directory.
    copy_modules
else
    echo "The node_modules folder is good to go; skipping copying."
fi

#########################
# End Script
#########################

if [ "$1" != "git" ] # Check if script was not run by git-merge hook.
  then
    # Change to working directory.
    cd $workDir/ || exit
    # Run yarn start command to start development.
    exec yarn start:debug
fi

I added pv to at least show the user the progress of what is happening. I also added a flag file to indicate that node_modules was installed through the container.

Whenever a package is installed, I utilized the postinstall and postuninstall hooks of the package.json file to copy the package.json and yarn.lock files from the working directory to the build directory to keep them up to date. I also installed the postinstall-postinstall package to make sure the postuninstall hook works.

"postinstall"  : "if test $DOCKER_FLAG = 1; then rsync -I --info=progress2 /home/node/work-dir/package.json /home/node/build-dir/package.json && rsync -I --info=progress2 /home/node/work-dir/yarn.lock /home/node/build-dir/yarn.lock && echo 'Build directory files updated.' && touch /home/node/work-dir/node_modules/.docked; else rm -rf ./node_modules/.docked && echo 'Warning: files installed outside container; deleting docker flag file.'; fi",
"postuninstall": "if test $DOCKER_FLAG = 1; then rsync -I --info=progress2 /home/node/work-dir/package.json /home/node/build-dir/package.json && rsync -I --info=progress2 /home/node/work-dir/yarn.lock /home/node/build-dir/yarn.lock && echo 'Build directory files updated.' && touch /home/node/work-dir/node_modules/.docked; else rm -rf ./node_modules/.docked && echo 'Warning: files installed outside container; deleting docker flag file.'; fi",

I used an environment variable called DOCKER_FLAG and set it to 1 in the docker-compose.yml file. That way, it won't run when someone installs outside a container. Also, I made sure to remove the .docked flag file so the script knows it has been installed using host commands.
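Setting that variable in the compose file might look like this (a sketch in the same v1 format as the earlier snippets; the service name web is an assumption):

```yaml
web:
  environment:
    DOCKER_FLAG: 1
```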

As for the issue of synchronizing node_modules every time a pull occurs, I used a git hook; namely, the post-merge hook. Every time I pull, it attempts to run the entrypoint.sh script if the container is running. It also passes the argument git to the script, which the script checks so that it doesn't run exec yarn start:debug, as the container is already running. Here is my script at .git/hooks/post-merge:

#!/bin/bash

if [ -x "$(command -v docker)" ] && [ "$(docker ps -a | grep <container_name>)" ]
then
  exec docker exec <container_name> sh -c "/home/node/build-dir/entrypoint.sh git"
  exit 1
fi

If the container is not running and I fetched the changes, the entrypoint.sh script will first check whether there are any differences between the lock files; if there are, it will reinstall in the build directory and do what it did when the image was built and the container first run. This tutorial may be used to share hooks with teammates.


Note: Be sure to use docker-compose run..., as docker-compose up... won't allow for the progress indicators to appear.

Solution 5:[5]

change:

  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules

TO:

  volumes:
    - .:/usr/src/app

And it will put node_modules in your locally mapped volume. The way you have it, /usr/src/app/node_modules will be stored in a different, anonymous volume whose location you would need docker inspect {container-name} to find. If you do want to specify the location, specify it like:

- /path/to/my_node_modules:/usr/src/app/node_modules

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 a.barbieri
Solution 2 BMitch
Solution 3
Solution 4
Solution 5 ldg