How to avoid ClusterIssuer dependency on Helm-installed cert-manager CRDs in Terraform plan and apply? (no matches for kind "ClusterIssuer" in group "cert-manager.io")

I am trying to create a Terraform module that provisions the basic resources in a Kubernetes cluster: cert-manager, ingress-nginx (as the ingress controller), and a ClusterIssuer for the certificates, in this exact order.

The first two are installed with helm_release resources and the ClusterIssuer via kubernetes_manifest.

I am getting the error below. After some searching, I found out the cause: cert-manager installs the CRDs that the ClusterIssuer kind belongs to, but at the terraform plan phase those CRDs are not installed yet, so the kubernetes_manifest resource cannot resolve the ClusterIssuer type against the cluster's API.

Is there a way to circumvent this issue and still create everything in the same configuration with only one terraform apply?

Note: I tried depends_on arguments and a time_sleep resource, but they don't help, because nothing is installed yet during the plan and that is where it fails.

│ Error: Failed to determine GroupVersionResource for manifest
│ 
│   with module.k8s_base.kubernetes_manifest.cluster_issuer,
│   on ../../modules/k8s_base/main.tf line 37, in resource "kubernetes_manifest" "cluster_issuer":
│   37: resource "kubernetes_manifest" "cluster_issuer" {
│ 
│ no matches for kind "ClusterIssuer" in group "cert-manager.io"

Here is the relevant part of ../../modules/k8s_base/main.tf:
resource "helm_release" "cert_manager" {
  chart      = "cert-manager"
  repository = "https://charts.jetstack.io"
  name       = "cert-manager"

  create_namespace = var.cert_manager_create_namespace
  namespace        = var.cert_manager_namespace

  set {
    name  = "installCRDs"
    value = "true"
  }
}

resource "helm_release" "ingress_nginx" {
  name = "ingress-nginx"

  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"

  create_namespace = var.ingress_nginx_create_namespace
  namespace        = var.ingress_nginx_namespace

  wait = true

  depends_on = [
    helm_release.cert_manager
  ]
}

resource "time_sleep" "wait" {
  create_duration = "60s"

  depends_on = [helm_release.ingress_nginx]
}

resource "kubernetes_manifest" "cluster_issuer" {
  manifest = {
    "apiVersion" = "cert-manager.io/v1"
    "kind"       = "ClusterIssuer"
    "metadata" = {
      "name" = var.cluster_issuer_name
    }
    "spec" = {
      "acme" = {
        "email" = var.cluster_issuer_email
        "privateKeySecretRef" = {
          "name" = var.cluster_issuer_private_key_secret_name
        }
        "server" = var.cluster_issuer_server
        "solvers" = [
          {
            "http01" = {
              "ingress" = {
                "class" = "nginx"
              }
            }
          }
        ]
      }
    }
  }
  depends_on = [helm_release.cert_manager, helm_release.ingress_nginx, time_sleep.wait]
}


Solution 1:

The official documentation says to kubectl apply the CRDs before installing cert-manager with the Helm chart, which makes it a two-step process. Using Terraform, it becomes a three-step process: run a targeted terraform apply to create the cluster so you have access to kubeconfig credentials, run kubectl apply to install the CRDs, and finally run terraform apply again to install the Helm chart and the rest of the IaC. This is even less ideal.

I would apply the manifests from https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml through kubectl_manifest resources, as has been suggested, but that is impractical: the URL points to a single file containing many separate CRD documents, and a kubectl_manifest resource applies one object at a time, so keeping them tracked individually across cert-manager versions would be a maintenance burden. Unfortunately, there is also no "kubectl_apply" Terraform resource** that the Helm chart could depend on to guarantee those CRDs are installed first.

Despite all this wonkiness, there is a solution: use the helm_release resource twice. It requires creating a module and a small custom Helm chart for the cert issuer. It is not ideal given the effort needed to tailor it to your needs, but once created it is a reusable, modular solution.
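Both releases below assume the Helm provider is already configured against the target cluster. A minimal sketch of that wiring (the config_path, provider version, and auth method are assumptions, not part of the original answer):

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.5" # assumption: a 2.x provider, matching the nested kubernetes block syntax below
    }
  }
}

provider "helm" {
  kubernetes {
    # assumption: a kubeconfig for the target cluster; any other supported
    # auth method (host/token/exec plugin) works just as well
    config_path = "~/.kube/config"
  }
}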

#
# Cert-manager
# main.tf
#
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  version          = var.cert_manager_chart_version
  namespace        = var.cert_manager_namespace
  create_namespace = true

  set {
    name  = "installCRDs"
    value = true
  }

}

Reference to custom chart:

#
# cert-issuer.tf
#
# Cert Issuer using Helm
resource "helm_release" "cert_issuer" {
  name       = "cert-issuer"
  repository = path.module
  chart      = "cert-issuer"
  namespace  = var.namespace

  set {
    name  = "fullnameOverride"
    value = local.issuer_name
  }

  set {
    name  = "privateKeySecretRef"
    value = local.issuer_name
  }

  set {
    name  = "ingressClass"
    value = var.ingress_class
  }

  set {
    name  = "acmeEmail"
    value = var.cert_manager_email
  }

  set {
    name  = "acmeServer"
    value = var.acme_server
  }

  depends_on = [helm_release.cert_manager]
}
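The cert_issuer release above references local.issuer_name and a few input variables that the answer does not show. A minimal sketch of what they might look like (every name and default below is an assumption):

# variables.tf / locals.tf (sketch)
variable "namespace" {
  description = "Namespace to install the issuer release into"
  type        = string
  default     = "cert-manager"
}

variable "ingress_class" {
  type    = string
  default = "nginx"
}

variable "cert_manager_email" {
  description = "Email address registered with the ACME account"
  type        = string
}

variable "acme_server" {
  type    = string
  default = "https://acme-v02.api.letsencrypt.org/directory"
}

locals {
  issuer_name = "letsencrypt-prod"
}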

You can see that the helm_release above uses path.module as the repository, i.e. it points at a chart stored locally inside the module. That requires a small custom Helm chart of your own (at minimum a Chart.yaml, the standard _helpers.tpl that defines cert-issuer.fullname, and a template like this):

# ./cert-issuer/templates/cluster-issuer.yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: {{ include "cert-issuer.fullname" . }}
spec:
  acme:
    # The ACME server URL
    server: {{ .Values.acmeServer }}
    email: {{ .Values.acmeEmail }}
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: {{ .Values.privateKeySecretRef }}
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: {{ .Values.ingressClass }}

This works because it sidesteps the plan-time check that produces the error: helm_release only renders and installs the chart at apply time, whereas kubernetes_manifest has to query the cluster's API during planning to resolve the ClusterIssuer kind, which fails while the CRDs are not yet installed. With both pieces as Helm releases, everything gets installed in a single apply.

This could be simplified further by hard-coding the values in the chart itself rather than passing them in through values.yaml and set blocks.

** Note: I think another workaround would be to use a provisioner such as local-exec or remote-exec after the cluster is created to run the kubectl apply command for the CRDs directly, but I haven't tested this yet. It would also require the provisioning environment to have kubectl installed and a properly configured kubeconfig, which adds dependencies of its own.
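A minimal sketch of that provisioner idea, assuming kubectl and a working kubeconfig are available on the machine running Terraform (and note that a kubernetes_manifest ClusterIssuer would still fail at plan time on a fresh cluster, so this only replaces the chart's installCRDs step):

resource "null_resource" "cert_manager_crds" {
  # Runs once at create time; kubectl and a kubeconfig must already be
  # available in the environment that runs terraform apply.
  provisioner "local-exec" {
    command = "kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.crds.yaml"
  }
}

resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  # The CRDs were applied by the provisioner above, so the chart skips them.
  set {
    name  = "installCRDs"
    value = "false"
  }

  depends_on = [null_resource.cert_manager_crds]
}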

Also, the snippets above are of course not fully working code on their own. For a full example of the module to use or fork, see this GitHub repo.
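Wired into a root configuration, such a module can then be consumed with a single terraform apply, roughly like this (the source path and input names here are assumptions, not taken from the linked repo):

module "cert_manager" {
  source = "./modules/cert-manager" # assumption: wherever the module lives

  cert_manager_chart_version = "v1.8.2"
  cert_manager_namespace     = "cert-manager"
  namespace                  = "cert-manager"
  cert_manager_email         = "ops@example.com"
  acme_server                = "https://acme-v02.api.letsencrypt.org/directory"
  ingress_class              = "nginx"
}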

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: user658182