How to prevent a database Docker container from being rebuilt and losing production data
I love using Docker and docker-compose for both development and production environments.
In my workflow, I treat containers as disposable: if I need to add a feature, I edit my Dockerfile, run docker-compose build and docker-compose up -d, and I'm done.
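In command form, that cycle is just the following (a shell sketch; the service names come from whatever my compose file defines):

```sh
# Disposable-container workflow: edit, rebuild, recreate.
vim Dockerfile            # add the feature / change the image definition
docker-compose build      # rebuild the image(s) from the Dockerfile
docker-compose up -d      # recreate the containers from the new image, detached
```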
But this time, the production DB is also running in a container. I still need to make some changes to my environment (e.g. configuring backups), but now I can't rerun docker-compose build, because that would mean losing all the data. Instead I have to enter the container (docker-compose run web /bin/bash) and run the commands inside it, while mirroring them in my local Dockerfile to keep track of my changes.
Are there any best practices regarding this situation?
I thought of setting up a process that would dump the DB to an S3 bucket before the container is destroyed, but that doesn't really scale to large databases...
I thought of making the container indestructible (how?), but that means giving up the disposability that makes containers useful.
I thought of having a dedicated partition that would only store the data and would not be destroyed when rebuilding the container, but that feels hard to set up and insecure.
So, what is the recommended approach here?
Thanks
Solution 1:[1]
This is what data volumes are for. There is a whole page on the Docker documentation site covering them.
The idea is that when you destroy the container, the data volume persists with the data on it, and when you start a new container the data hasn't gone anywhere. I will say, though, that putting databases in Docker containers is hard. People have done it and suffered severe data loss, and severe job loss.
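As a rough illustration, a compose file along these lines keeps the database files in a named volume that outlives any rebuild (the service name, image, credentials and volume name below are placeholders, not taken from the question):

```yaml
# docker-compose.yml -- minimal sketch; names and credentials are placeholders.
version: "3"

services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example   # use a proper secret in production
    volumes:
      # Named volume: survives `docker-compose build` and container removal.
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

With a setup like this, docker-compose build && docker-compose up -d recreates the db container from the new image but reattaches the same db_data volume, so the data directory is untouched. The main thing to avoid is docker-compose down -v, which deletes named volumes.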
I would recommend reading extensively on this topic before trusting your production data to Docker containers. This is a great article explaining the perils of doing this.
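On the backup concern from the question, one common pattern (sketched here with an assumed volume name; check docker volume ls for the real one) is to archive the named volume from a throwaway container:

```sh
# Tar up the named volume to the current directory on the host.
# NOTE: for a consistent database backup, prefer a proper dump (e.g. pg_dump),
# or stop the database container first; copying live data files can be inconsistent.
docker run --rm \
  -v myproject_db_data:/volume:ro \
  -v "$(pwd)":/backup \
  busybox tar czf /backup/db_data-$(date +%F).tar.gz -C /volume .
```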
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow