How to fix a kubernetes_config_map resource error on a newly provisioned EKS cluster via Terraform?
I'm using Terraform to provision an EKS cluster (mostly following the example here). At the end of that tutorial, the aws-auth ConfigMap is rendered through the terraform output command and then applied to the cluster via kubectl apply -f <file>. I'm attempting to wrap this kubectl step into the Terraform configuration using the kubernetes_config_map resource; however, when running Terraform for the first time, I receive the following error:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps)
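(For context, the manual workflow being replaced, rendering the ConfigMap as a Terraform output and applying it with kubectl, looks roughly like the sketch below. The output name, the local value, and the file name are assumptions, since the tutorial is only linked above, not quoted.)

output "config_map_aws_auth" {
  # Rendered aws-auth ConfigMap YAML built elsewhere in the configuration;
  # the local name here is an assumption.
  value = "${local.config_map_aws_auth}"
}

# terraform output config_map_aws_auth > config_map_aws_auth.yaml
# kubectl apply -f config_map_aws_auth.yaml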
The strange thing is that every subsequent terraform apply works and applies the ConfigMap to the EKS cluster. This leads me to believe it is perhaps a timing issue. I tried performing a number of actions in between provisioning the cluster and applying the ConfigMap, but that didn't work. I also added an explicit depends_on argument to ensure that the cluster has been fully provisioned before the ConfigMap is applied.
provider "kubernetes" {
config_path = "kube_config.yaml"
}
locals {
map_roles = <<ROLES
- rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
ROLES
}
resource "kubernetes_config_map" "config_map_aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${local.map_roles}"
}
depends_on = ["aws_eks_cluster.eks_cluster"]
}
I expect this to run correctly the first time, but it only succeeds when the same file is applied, with no changes, a second time. I attempted to get more information by enabling the TRACE debug flag for Terraform, but the only output I got was the exact same error as above.
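One generic way to test the timing hypothesis above, not something attempted in the original question, is to force an explicit pause between cluster creation and the ConfigMap, for example with a null_resource and a local-exec sleep, written in the same 0.11-style syntax as the configuration above. A minimal sketch with an arbitrary 60-second delay:

resource "null_resource" "wait_for_eks" {
  depends_on = ["aws_eks_cluster.eks_cluster"]

  provisioner "local-exec" {
    # Assumes a Unix-like shell on the machine running Terraform.
    command = "sleep 60"
  }
}

The kubernetes_config_map resource would then use depends_on = ["null_resource.wait_for_eks"] instead of depending on the cluster directly. If a generous delay still fails on the first apply, the provider configuration itself is the more likely culprit, which is what Solution 1 below points at.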
Solution 1:[1]
Well, I don't know if this is still relevant, but I was dealing with the same trouble and found this:
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543
In other words, I changed the cluster's name in the aws_eks_cluster_auth block to a static name, and it worked. Perhaps this is a bug in Terraform.
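A minimal sketch of what that can look like, assuming the kubernetes provider authenticates with a token from the aws_eks_cluster_auth data source instead of the kube_config.yaml file used in the question; the cluster name "my-eks-cluster" and the resource references are placeholders:

data "aws_eks_cluster_auth" "cluster_auth" {
  # A static string here, rather than a reference such as
  # "${aws_eks_cluster.eks_cluster.name}", is what the linked comment suggests.
  name = "my-eks-cluster"
}

provider "kubernetes" {
  host                   = "${aws_eks_cluster.eks_cluster.endpoint}"
  cluster_ca_certificate = "${base64decode(aws_eks_cluster.eks_cluster.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.cluster_auth.token}"

  # Only relevant for kubernetes provider 1.x; this argument was removed in 2.x.
  load_config_file = false
}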
Solution 2:[2]
This seems like a timing issue while bootstrapping your cluster: your kube-apiserver initially doesn't think there is a configmaps resource. It's likely that the Role and RoleBinding used to create the ConfigMap have not yet been fully configured in the cluster (possibly within the EKS infrastructure), which relies on the iam-authenticator and the following policies:
resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.demo-cluster.name}"
}
resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.demo-cluster.name}"
}
The depends_on clause in Terraform will not do much here, since the timing issue seems to happen within the EKS service itself. I suggest you try the terraform-aws-eks module, which uses the same resources described in the docs. You can also browse through its code if you'd like to figure out how it solves the problem you are seeing.
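A minimal sketch of switching to that module; the input names differ between module versions (for example subnets vs. subnet_ids), and var.vpc_id and var.subnet_ids are hypothetical variables, so treat this as a placeholder rather than a copy-paste configuration:

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "demo-cluster"
  vpc_id       = "${var.vpc_id}"
  subnets      = "${var.subnet_ids}"

  # Many releases of the module manage the aws-auth ConfigMap themselves
  # (see its map_roles / manage_aws_auth inputs), which replaces the
  # hand-rolled kubernetes_config_map resource from the question.
}

Even if you keep your own resources, browsing the module's source shows how it deals with the same first-apply problem.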
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution   | Source         |
|------------|----------------|
| Solution 1 | Daniel Andrade |
| Solution 2 | Rico           |