AWS ECS leader commands (django migrate)
We are currently deploying our Django app on AWS Elastic Beanstalk. There we execute the Django DB migrations using container commands, where we ensure migrations run on only one instance by using the "leader_only" restriction.
We are considering moving our deployment to AWS EC2 Container Service (ECS). However, we cannot figure out a way to ensure the migration is run on only one container when a new image is deployed.
Is it possible to configure leader_only commands in AWS EC2 Container Service?
Solution 1:[1]
It is possible to use ECS built-in functionality to handle deployments that involve migrations. Basically, the idea is the following:
- Make containers fail their health checks if they are running against an unmigrated database, e.g. by adding a custom view that checks whether Django still has an unapplied migration plan (see the sketch right after this list)
plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
status = 503 if plan else 200
- Make a task definition that does nothing more than migrate the database, and make sure it is scheduled for execution with the rest of the deployment process (a sketch of kicking off such a task from the deployment pipeline follows below).
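A minimal sketch of such a health-check view, assuming a plain Django function view wired to whatever URL the load balancer's health check hits (the view name and URL wiring are illustrative, not part of the original answer):

# health.py -- returns 503 while there are unapplied migrations, 200 otherwise
from django.db import connections, DEFAULT_DB_ALIAS
from django.db.migrations.executor import MigrationExecutor
from django.http import HttpResponse

def health(request):
    executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    return HttpResponse(status=503 if plan else 200)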
The result is that the deployment process will try to bring up one new container. This new container will fail its health checks as long as the database is not migrated, and will thus block the rest of the deployment (so you will still have old instances running to serve requests). Once the migration is done, the health check will succeed, so the deployment will unblock and proceed.
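For the second step, the one-off migration task can be started from the deployment pipeline. Here is a hedged sketch using boto3, where the cluster and task definition names are placeholders and the task definition is assumed to run "manage.py migrate" as its command:

import boto3

ecs = boto3.client("ecs")

# Start one instance of the migration-only task definition.
resp = ecs.run_task(
    cluster="my-cluster",             # placeholder: your ECS cluster name
    taskDefinition="django-migrate",  # placeholder: migration-only task definition
    count=1,
)
task_arn = resp["tasks"][0]["taskArn"]

# Block the rest of the deployment until the migration task has stopped;
# in practice you would also check the container's exit code afterwards.
ecs.get_waiter("tasks_stopped").wait(cluster="my-cluster", tasks=[task_arn])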
This is by far the most elegant solution I was able to find in terms of running Django migrations on Amazon ECS.
Source: https://engineering.instawork.com/elegant-database-migrations-on-ecs-74f3487da99f
Solution 2:[2]
Honestly, I have not figured this out. I have encountered exactly the same limitation on ECS (as well as others that made me abandon it, but that is off topic).
Potential workarounds:
1) Run the migrations inside your init script. This has the flaw that it runs on every node at deployment time (I assume you have multiple replicas).
2) Add the migration as a step of your CI flow.
Hope I helped a bit; if I come up with another idea, I'll report back here.
Solution 3:[3]
It's not optimal, but you can simply run it as a command in the task definition:
"command": ["/bin/sh", "-c", "python manage.py migrate && gunicorn -w 3 -b :80 app.wsgi:application"],
Solution 4:[4]
For those using a task definition JSON, all we need to do is add a non-essential container to our containerDefinitions:
{
  "name": "migrations",
  "image": "your-image-name",
  "essential": false,
  "cpu": 24,
  "memory": 200,
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "your-logs-group",
      "awslogs-region": "your-region",
      "awslogs-stream-prefix": "your-log-prefix"
    }
  },
  "command": [
    "python3", "manage.py", "migrate"
  ],
  "environment": [
    {
      "name": "ENVIRON_NAME",
      "value": "${ENVIRON_NAME}"
    }
  ]
}
I flagged this container as "essential": false.
"If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, then its failure does not affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential."
source: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 | Alex Pavlenko
Solution 2 | Kostas Livieratos
Solution 3 |
Solution 4 |