I’d like to create a database image and seed it with some initial data. This seems to work fine, since I’m able to create a container with a volume managed by Docker. However, when I try to mount the volume to a directory on my Linux host machine, it is created empty instead of seeded.
After several hours of trying different configurations, I narrowed the problem down to its core: the content of the folder in the container associated with the volume on the host machine is overwritten upon creation.
Below is a simple Dockerfile that creates a folder containing a single file. When the container is started, it prints out the content of the mounted folder.
FROM ubuntu:18.04
RUN mkdir /opt/test
RUN touch /opt/test/myFile.txt
VOLUME /opt/test
CMD ["ls", "/opt/test"]
I’m building the image with: docker build -t test .
Volume managed by docker
$ docker run -v volume-test:/opt/test --name test test
myFile.txt
Here I get the expected output, with the volume mounted in the space managed by Docker. The output of docker volume inspect volume-test is:
{ "CreatedAt": "2020-05-13T10:09:29+02:00", "Driver": "local", "Labels": null, "Mountpoint": "/var/snap/docker/common/var-lib-docker/volumes/volume-test/_data", "Name": "volume-test", "Options": null, "Scope": "local" }
Volume mounted on host machine
$ docker run -v $(pwd)/volume:/opt/test --name test test
Nothing is returned, since the folder is empty… However, I can see that the volume directory is created and owned by the user root, even though I’m executing the docker run command as another user.
drwxr-xr-x 2 root root 4096 May 13 10:11 volume
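Sidenote: since the container name test is reused for every run, the previous container has to be removed in between, e.g.:

$ docker rm test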
As a last test, I tried to see what happens when I create the folder for the volume beforehand and add some content to it (in my case a file called anotherFile.txt).
When I now run the container, I get the following result:
$ docker run -v $(pwd)/volume:/opt/test --name test test
anotherFile.txt
This leads me to the conclusion that the content of the folder in the container is overwritten by the content of the folder on the host machine.
I can also verify with docker inspect -f '{{ .Mounts }}' test that the volume is mounted in the right place:
[{bind /pathWhere/pwd/pointedTo/volume /opt/test true rprivate}]
Now my question: is there a way to get the same behavior for volumes mounted from the host machine as for volumes managed by Docker, i.e. to have the content of the /opt/test folder in the container copied into the host folder defined as the volume?
Sidenote: this seems to be the case when using Docker on Windows with the Shared Folders option enabled…
Furthermore, it seems a similar question was already asked here, but no answer was found. I decided to make a separate post, since I think this is the most generic example to describe this issue.
Answer
Desired situation
Data from within the docker image is placed in a specified path on the host.
Your current situation
- When creating the image, data is put into /opt/test
- When starting the container, you mount the volume on /opt/test
Problem
Because you mount the volume on the same path where you put your data, your data gets hidden by the mount and effectively overwritten.
Solution
- Create a file within the image during the docker build, for example touch /opt/test/data/myFile.txt
- Use a different path to mount your volume, so that the data is not hidden, for example /opt/test/mount
- Use CMD to copy the files into the volume at container start, like so (a shell is needed so the * wildcard gets expanded): CMD ["sh", "-c", "cp -n /opt/test/data/* /opt/test/mount/"]
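Put together, a minimal sketch of a Dockerfile and run command following these steps (reusing the image name and host folder from the question) could look like this:

FROM ubuntu:18.04
# seed data lives outside the mount point, so it is not hidden by the mount
RUN mkdir -p /opt/test/data /opt/test/mount
RUN touch /opt/test/data/myFile.txt
# copy the seed data into the mounted folder at container start;
# -n leaves files that already exist on the host untouched
CMD ["sh", "-c", "cp -n /opt/test/data/* /opt/test/mount/"]

$ docker build -t test .
$ docker run -v $(pwd)/volume:/opt/test/mount --name test test

After the container has run, $(pwd)/volume should contain myFile.txt, and thanks to cp -n any file that was already in the host folder (such as anotherFile.txt above) is left as it is.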