AWS Terraform tries to destroy and rebuild RDS cluster

I have an RDS cluster built with Terraform, and it currently has deletion protection enabled.

Whenever I update my Terraform script (for example, a security group change) and apply it to the environment, Terraform always tries to tear down and rebuild the RDS cluster.

With deletion protection enabled, the teardown is blocked, but `terraform apply` then fails because it cannot destroy the cluster.

How can I keep the existing RDS cluster in place rather than rebuilding it every time I run my script?

`resource "aws_rds_cluster" "env-cluster" {
  cluster_identifier      = "mysql-env-cluster"
  engine                  = "aurora-mysql"
  engine_version          = "5.7.mysql_aurora.2.03.2"
  availability_zones      = ["${var.aws_az1}", "${var.aws_az2}"]
  db_subnet_group_name   = "${aws_db_subnet_group.env-rds-subg.name}"
  database_name           = "dbname"
  master_username         = "${var.db-user}"
  master_password         = "${var.db-pass}"
  backup_retention_period = 5
  preferred_backup_window = "22:00-23:00"
  deletion_protection     = true
  skip_final_snapshot     = true
 }

resource "aws_rds_cluster_instance" "env-01" {
  identifier              = "${var.env-db-01}"
  cluster_identifier      = "${aws_rds_cluster.env-cluster.id}"
  engine                  = "aurora-mysql"
  engine_version          = "5.7.mysql_aurora.2.03.2"
  instance_class          = "db.t2.small"
  apply_immediately       = true
}

resource "aws_rds_cluster_instance" "env-02" {
  identifier              = "${var.env-db-02}"
  cluster_identifier      = "${aws_rds_cluster.env-cluster.id}"
  engine                  = "aurora-mysql"
  engine_version          = "5.7.mysql_aurora.2.03.2"
  instance_class          = "db.t2.small"
  apply_immediately       = true
}

resource "aws_rds_cluster_endpoint" "env-02-ep" {
  cluster_identifier          = "${aws_rds_cluster.env-cluster.id}"
  cluster_endpoint_identifier = "reader"
  custom_endpoint_type        = "READER"

  excluded_members = ["${aws_rds_cluster_instance.env-01.id}"]
}
```
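When this happens, `terraform plan` flags whichever attribute is forcing the replacement. An illustrative excerpt (the values here are hypothetical, not taken from the question) looks like this on recent Terraform versions:

```
  # aws_rds_cluster.env-cluster must be replaced
-/+ resource "aws_rds_cluster" "env-cluster" {
      ~ availability_zones = [ # forces replacement
          + "eu-west-1c",
            # (2 unchanged elements hidden)
        ]
      # (other attributes unchanged)
    }
```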


Solution 1:[1]

I had a similar experience when trying to set up an AWS Aurora cluster and instance.

Each time I ran `terraform apply`, it tried to recreate the Aurora cluster and instance.

Here's my Terraform script:

```
locals {
  aws_region      = "eu-west-1"
  tag_environment = "Dev"
  tag_terraform = {
    "true"  = "Managed by Terraform"
    "false" = "Not Managed by Terraform"
  }
  tag_family = {
    "aurora" = "Aurora"
  }
  tag_number = {
    "1" = "1"
    "2" = "2"
    "3" = "3"
    "4" = "4"
  }
}

# RDS Cluster
module "rds_cluster_1" {
  source = "../../../../modules/aws/rds-cluster-single"

  rds_cluster_identifier              = var.rds_cluster_identifier
  rds_cluster_engine                  = var.rds_cluster_engine
  rds_cluster_engine_mode             = var.rds_cluster_engine_mode
  rds_cluster_engine_version          = var.rds_cluster_engine_version
  rds_cluster_availability_zones      = ["${local.aws_region}a"]
  rds_cluster_database_name           = var.rds_cluster_database_name
  rds_cluster_port                    = var.rds_cluster_port
  rds_cluster_master_username         = var.rds_cluster_master_username
  rds_cluster_master_password         = module.password.password_result
  rds_cluster_backup_retention_period = var.rds_cluster_backup_retention_period
  rds_cluster_apply_immediately       = var.rds_cluster_apply_immediately
  allow_major_version_upgrade         = var.allow_major_version_upgrade
  db_cluster_parameter_group_name     = var.rds_cluster_parameter_group_name
  rds_cluster_deletion_protection     = var.rds_cluster_deletion_protection
  enabled_cloudwatch_logs_exports     = var.enabled_cloudwatch_logs_exports
  skip_final_snapshot                 = var.skip_final_snapshot
  # vpc_security_group_ids              = var.vpc_security_group_ids
  tag_environment = local.tag_environment
  tag_terraform   = local.tag_terraform.true
  tag_number      = local.tag_number.1
  tag_family      = local.tag_family.aurora
}
```

Here's how I solved it:

The issue was that on each `terraform apply`, Terraform detected that the cluster had drifted into two additional availability zones outside of Terraform, and since `availability_zones` forces replacement, it planned to recreate the resource:

```
Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # module.rds_cluster_1.aws_rds_cluster.main has changed
  ~ resource "aws_rds_cluster" "main" {
      ~ availability_zones                  = [
          + "eu-west-1b",
          + "eu-west-1c",
            # (1 unchanged element hidden)
        ]
      ~ cluster_members                     = [
          + "aurora-postgres-instance-0",
```
However, my Terraform script specified only one availability zone (`rds_cluster_availability_zones = ["${local.aws_region}a"]`). All I had to do was specify all three availability zones for my region (`rds_cluster_availability_zones = ["${local.aws_region}a", "${local.aws_region}b", "${local.aws_region}c"]`):

```
locals {
  aws_region      = "eu-west-1"
  tag_environment = "Dev"
  tag_terraform = {
    "true"  = "Managed by Terraform"
    "false" = "Not Managed by Terraform"
  }
  tag_family = {
    "aurora" = "Aurora"
  }
  tag_number = {
    "1" = "1"
    "2" = "2"
    "3" = "3"
    "4" = "4"
  }
}

# RDS Cluster
module "rds_cluster_1" {
  source = "../../../../modules/aws/rds-cluster-single"

  rds_cluster_identifier              = var.rds_cluster_identifier
  rds_cluster_engine                  = var.rds_cluster_engine
  rds_cluster_engine_mode             = var.rds_cluster_engine_mode
  rds_cluster_engine_version          = var.rds_cluster_engine_version
  rds_cluster_availability_zones      = ["${local.aws_region}a", "${local.aws_region}b", "${local.aws_region}c"]
  rds_cluster_database_name           = var.rds_cluster_database_name
  rds_cluster_port                    = var.rds_cluster_port
  rds_cluster_master_username         = var.rds_cluster_master_username
  rds_cluster_master_password         = module.password.password_result
  rds_cluster_backup_retention_period = var.rds_cluster_backup_retention_period
  rds_cluster_apply_immediately       = var.rds_cluster_apply_immediately
  allow_major_version_upgrade         = var.allow_major_version_upgrade
  db_cluster_parameter_group_name     = var.rds_cluster_parameter_group_name
  rds_cluster_deletion_protection     = var.rds_cluster_deletion_protection
  enabled_cloudwatch_logs_exports     = var.enabled_cloudwatch_logs_exports
  skip_final_snapshot                 = var.skip_final_snapshot
  # vpc_security_group_ids              = var.vpc_security_group_ids
  tag_environment = local.tag_environment
  tag_terraform   = local.tag_terraform.true
  tag_number      = local.tag_number.1
  tag_family      = local.tag_family.aurora
}
```
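If listing every availability zone is not an option (as in the original question, where only two AZs are passed in), the AWS provider documentation for `aws_rds_cluster` suggests ignoring drift on that attribute instead. A minimal sketch applied to the question's resource, with all other arguments left as they were:

```
resource "aws_rds_cluster" "env-cluster" {
  cluster_identifier = "mysql-env-cluster"
  engine             = "aurora-mysql"
  # ... remaining arguments exactly as in the question ...

  # Aurora can register the cluster in more AZs than were requested;
  # ignoring the attribute stops Terraform from planning a replacement.
  lifecycle {
    ignore_changes = [availability_zones]
  }
}
```

The trade-off is that Terraform will no longer surface genuine availability-zone changes for this cluster, so listing all the AZs, as above, is preferable when you can.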

Resources: Terraform wants to recreate cluster on every apply #8

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source

Solution 1: halfer