Terraform - refactoring modules: Error: Provider configuration not present

I'm refactoring some Terraform modules and am getting:

Error: Provider configuration not present

To work with module.my_module.some_resource.resource_name its original provider configuration at module.my_module.provider.some_provider.provider_name is required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist in the state. Re-add the provider configuration to destroy module.my_module.some_resource.resource_name, after which you can remove the provider configuration again.

It seems like I need to remove that resource from the tfstate file then re-add it with the new tf config.

As I'm refactoring some monolithic code there are hundreds of these Error: Provider configuration not present messages.

Any shortcut to removing and re-adding?



Solution 1:[1]

As the error message explains, Terraform has detected that there are resource objects still present in the state whose provider configurations are not available, and so it doesn't have enough information to destroy those resources.

In this particular case, that seems to be occurring because there is a provider configuration block in one of your child modules. While that is permitted for compatibility with older versions of Terraform, it's recommended to only have provider blocks in your root module so that they can always outlive any resource instances that the provider is managing.

If your intent is to destroy the resource instances in module.my_module then you must do that before removing the module "my_module" block from the root module. This is one unusual situation where we can use -target to help Terraform understand what we want it to do:

terraform destroy -target=module.my_module

Once all of those objects are destroyed, you should then be able to remove the module "my_module" block without seeing the "Provider configuration not present" error, because there will be no resource instances in the state relying on that provider configuration.

If your goal is to move resource blocks into another module, the other possible resolution here is to use terraform state mv to instruct Terraform to track the existing object under a new address:

terraform state mv 'module.my_module.some_resource.resource_name' 'module.other_module.some_resource.resource_name'

Again, it's better to do this before removing the old module, so that the old provider configuration remains present until there's nothing left for it to manage. After you've moved the existing object into a new module in the state and have a resource block in place for it in the configuration, Terraform should understand your intent to manage this resource with a different provider configuration from now on and you can safely remove the old module block, and thus the provider block inside it.
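When there are hundreds of resources to move, running terraform state mv by hand doesn't scale. A hedged sketch of one way to batch it: a small shell helper that reads old addresses and prints the corresponding state mv commands, assuming the new module keeps the same resource names (the module names below are hypothetical). Review the printed commands before actually running them.

```shell
# gen_moves: read old resource addresses on stdin and print one
# "terraform state mv" command per address, rewriting the module prefix
# from module.my_module to module.other_module (both names hypothetical).
gen_moves() {
  while read -r addr; do
    # ${addr#module.my_module.} strips the old module prefix
    printf "terraform state mv '%s' 'module.other_module.%s'\n" \
      "$addr" "${addr#module.my_module.}"
  done
}

# Typical use: list the addresses, filter to the old module, review the
# generated commands, then pipe them to sh once they look right:
#   terraform state list | grep '^module.my_module\.' | gen_moves
```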

Solution 2:[2]

You can temporarily comment out the resources in the module you want to destroy, then uncomment them on recreation. Follow the steps below to avoid the error.

Remove the provider block from the module and pass the provider to the module explicitly:

module "pass_provider" {
  source = "../module"

  providers = {
    aws = aws
  }
}

Or pass a provider with an alias:

module "pass_provider_alias" {
  source = "../module"

  providers = {
    aws = aws.alias_name
  }
}

(Note the alias reference is unquoted; the quoted form "aws.alias_name" is only accepted by older Terraform versions and is an error in Terraform 0.13 and later.)

Solution 3:[3]

I started to get this Provider configuration not present error after upgrading Terraform v0.12 to v0.13.

Following explicit-provider-source-locations to align with Terraform v0.13 is probably the right way forward, but in the meantime, downgrading to v0.12 resolved it.

Solution 4:[4]

If you are working in a team, the first thing to check is which Terraform version was last used to build; if your version differs, switch to it and test. An example of this kind of issue: https://github.com/hashicorp/terraform/issues/26062

Solution 5:[5]

If you commented out or removed a module and you see this error, another option is terraform state rm, which essentially makes Terraform forget about the resource.

Ideally you want to destroy the resource with terraform destroy -target=module.mymodule, but in some cases the resource isn't physical, for example a random_id inside the module. Also, if you are using Terraform Cloud and the workspace is VCS-linked, then you won't be able to run an apply or destroy locally. In these cases use terraform state rm, and if there are lingering real resources, just delete them manually.
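For many such resources, a hedged sketch of bulk removal: generate the terraform state rm commands first so the list can be reviewed before anything is forgotten (the module name fed in below is hypothetical).

```shell
# gen_rm: read resource addresses on stdin and print one
# "terraform state rm" command per address for review.
gen_rm() {
  while read -r addr; do
    printf "terraform state rm '%s'\n" "$addr"
  done
}

# Typical use: review the output first, then pipe it to sh:
#   terraform state list | grep '^module.my_module\.' | gen_rm
```

Remember that state rm only stops tracking the objects; any real infrastructure they describe keeps running until you delete it by other means.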

Solution 6:[6]

I was having this issue in a development environment while refactoring my monorepo to add multiple environments (staging, prod). I was using Terraform Cloud. I solved it by destroying everything that Terraform was tracking, doing the refactor, and then doing a new plan/apply from scratch. This won't work for most situations, but it was by far the easiest solution for me since there were 100+ resources that had moved.


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Henridv
Solution 2
Solution 3 Noam Manos
Solution 4 Carlos Gomez
Solution 5
Solution 6 Nick K9