How can I use Azure DevOps Release Pipeline Variables in a Kubernetes CI/CD with Helm?
I'm currently setting up a CI/CD pipeline for my client's new Kubernetes clusters, using Helm and Azure DevOps Server with images hosted on an on-prem Harbor registry, and I'm a bit stuck on how to deal with application variables (appsettings in .NET Core) in a streamlined way. Currently those variables are stored in Azure DevOps as Release Variables, and some of the Prod variables are managed by another organization.
My thinking has been that I want separate Build and Deploy pipelines, with the Build being responsible for restoring, building, running unit tests, and finally pushing a Docker image to the on-prem registry. Then there would be multiple Deploy pipelines representing the various environments (Dev, Test, Stage, Prod) and applications that may want to use that Docker image, each with its own configuration.
However, I haven't found a way to inject the variables from the release pipeline into the dockerized application during the release steps, since by then what I have is not my raw application but the Docker image of it. I've previously used ConfigMaps to solve this, but since I can't have local files representing the Prod environment, I would need some way to override variables, or generate a ConfigMap, from Azure DevOps' release variables.
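To make the idea concrete, "generating a ConfigMap from release variables" could look roughly like the following release script step. Everything here (variable names, ConfigMap name, namespace) is hypothetical, and the step assumes kubectl is already authenticated against the target cluster:

```yaml
# Hypothetical release step: build a ConfigMap from Azure DevOps release
# variables (ConnectionString, LogLevel) instead of from a file in the repo.
# Assumes kubectl is already pointed at the right cluster.
steps:
- task: Bash@3
  displayName: Generate ConfigMap from release variables
  inputs:
    targetType: inline
    script: |
      kubectl create configmap myapp-appsettings \
        --namespace "$(K8S_NAMESPACE)" \
        --from-literal=ConnectionStrings__Default="$(ConnectionString)" \
        --from-literal=Logging__LogLevel__Default="$(LogLevel)" \
        --dry-run=client -o yaml | kubectl apply -f -
```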
I feel like this must be a common scenario, and yet most solutions I find either revolve around maintaining environment-specific ConfigMaps or values files in the application's repo, or rely on features seemingly unique to Azure Cloud.
One solution would be to move the docker build/push step into the release pipelines and inject variables into the application before it is dockerized. However, this feels like a hack that would just result in a myriad of Docker images of the same application, each with a tweaked appsettings file, that would somehow have to be versioned across multiple environments.
Solution 1:[1]
You could try handling this with the Kubernetes manifest task.
Use a Kubernetes manifest task in a build or release pipeline to bake and deploy manifests to Kubernetes clusters.
If your builds and deployments all run in Azure Pipelines, you have a prior stage where these replacements can be made before the manifests are applied to the cluster.
Variables can be defined at several scope levels, with the more specific levels overriding the broader ones.
It is also possible to apply the same manifest to two environments (e.g. staging and production) with different settings, as in the sketch below.
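A minimal multi-stage sketch of that idea, assuming a Helm chart at charts/myapp and service connections named my-cluster-dev / my-cluster-prod (all hypothetical names); the same chart is baked in each stage, with stage-scoped variables supplying the environment-specific values:

```yaml
# Hypothetical multi-stage pipeline: the same chart is baked and deployed to
# Dev and Prod, with stage-level variables overriding pipeline-level ones.
stages:
- stage: DeployDev
  variables:
    replicas: 1               # stage-scoped value for Dev
  jobs:
  - job: deploy
    steps:
    - task: KubernetesManifest@0
      name: bake
      displayName: Bake manifests from Helm chart
      inputs:
        action: bake
        helmChart: charts/myapp
        overrides: 'replicaCount:$(replicas)'
    - task: KubernetesManifest@0
      displayName: Deploy baked manifests
      inputs:
        action: deploy
        kubernetesServiceConnection: my-cluster-dev
        namespace: dev
        manifests: $(bake.manifestsBundle)
- stage: DeployProd
  variables:
    replicas: 3               # same manifest, different setting for Prod
  jobs:
  - job: deploy
    steps:
    - task: KubernetesManifest@0
      name: bake
      inputs:
        action: bake
        helmChart: charts/myapp
        overrides: 'replicaCount:$(replicas)'
    - task: KubernetesManifest@0
      inputs:
        action: deploy
        kubernetesServiceConnection: my-cluster-prod
        namespace: prod
        manifests: $(bake.manifestsBundle)
```

The same pattern works with stage-level variable groups, so Prod values managed by another organization can live in their own (permission-restricted) group.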
Solution 2:[2]
According to the twelve-factor methodology, the best way to get environment-dependent config into a cluster is to use environment variables. Security is better and there are all sorts of tools for getting config into environment variables; if your app can read them from there rather than from a file, so much the better.
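For instance, if the app reads configuration from environment variables (as ASP.NET Core does, mapping Foo__Bar to Foo:Bar), a deployment can source them from a per-environment ConfigMap. A minimal sketch with hypothetical names:

```yaml
# Hypothetical deployment excerpt: environment variables sourced from a
# ConfigMap created per environment (e.g. by the release pipeline).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:latest
        envFrom:
        - configMapRef:
            name: myapp-appsettings   # keys like Logging__LogLevel__Default
```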
However, if your app cannot, then this is what I do to inject customizable config and manifests into a release using Azure DevOps (a sketch of steps 2 and 3 follows below):

1. In the build pipeline, publish tokenized config files and manifests into one or more folders.
2. In the release, run Replace Tokens task(s) on those folders to swap the tokens for release variables.
3. Use a kubectl task to create a ConfigMap from the replaced config files, and mount that ConfigMap as a volume in your deployment manifest.
4. Your application should either expect the files at that location or copy them to the expected location. I do this in the Dockerfile, with a script as my entrypoint.
5. I use a kube deploy step to append a tag to the image in the deployment manifest, and deploy to the cluster.

When the pod starts up, the ConfigMap is mounted as a volume and the application starts with the latest config for that environment.
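A rough sketch of steps 2 and 3, assuming the qetza Replace Tokens marketplace extension and hypothetical paths, names, and variables; the kubectl step again assumes an already-authenticated context:

```yaml
# Hypothetical release steps: swap #{Token}# placeholders in the published
# config for release variables, then recreate the ConfigMap from that file.
steps:
- task: replacetokens@3
  displayName: Replace tokens with release variables
  inputs:
    rootDirectory: $(System.DefaultWorkingDirectory)/drop/config
    targetFiles: 'appsettings.json'
- task: Bash@3
  displayName: Create ConfigMap from replaced config file
  inputs:
    targetType: inline
    script: |
      kubectl create configmap myapp-config \
        --namespace "$(K8S_NAMESPACE)" \
        --from-file=appsettings.json="$(System.DefaultWorkingDirectory)/drop/config/appsettings.json" \
        --dry-run=client -o yaml | kubectl apply -f -
```

The deployment manifest then mounts myapp-config as a volume at the path the entrypoint script expects (step 4), and the kube deploy step patches in the image tag before applying it (step 5).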
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | PatrickLu-MSFT |
| Solution 2 | Kieran Smart |