Error while configuring Terraform S3 Backend

I am configuring an S3 backend for AWS through Terraform.

terraform {
  backend "s3" {}
}

After providing the values for the S3 backend's bucket name, key, and region and running the "terraform init" command, I get the following error:

"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider Please update the configuration in your Terraform files to fix this error then run this command again."

I have declared the access and secret keys as variables in providers.tf. While running the "terraform init" command, it didn't prompt for any access key or secret key.
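For context, a rough sketch of how I have the keys declared in providers.tf (the variable names here are only illustrative):

provider "aws" {
  region     = "${var.region}"
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
}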

How to resolve this issue?



Solution 1:[1]

When running terraform init you have to pass -backend-config options for your credentials (AWS keys). So your command should look like:

terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
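If you don't want to type the keys on the command line each time, -backend-config also accepts a path to a file of key/value pairs. A minimal sketch, where the file name backend.hcl and the placeholder values are just examples:

# backend.hcl
access_key = "<your access key>"
secret_key = "<your secret key>"

terraform init -backend-config=backend.hcl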

Solution 2:[2]

I also had the same issue. The easiest and most secure way to fix it is to configure an AWS profile. Even if you have properly set AWS_PROFILE in your project, you have to mention the profile again in your backend.tf.

My problem was that I had already set up the AWS provider in the project as below, and it was working properly.

provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}

But at the end of the project I was trying to configure the S3 backend configuration file, so I ran terraform init and got the same error message.

Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.

Note that the provider configuration alone is not enough for the Terraform backend. You have to mention the AWS profile in the backend file as well.

  • Full Solution

I'm using the latest Terraform version at this moment, v0.13.5.

Please see the provider.tf:

provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}

For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below.

terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}

Then run terraform init.

Solution 3:[3]

I faced a similar problem when I renamed a profile in the AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
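A minimal sketch of those steps in a shell, run from the directory that contains the Terraform configuration:

rm -rf .terraform   # removes the cached backend/provider setup
terraform init      # re-initializes the backend with the current credentials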

Solution 4:[4]

Don't - add variables for secrets. It's a really bad practice and unnecessary.

Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is running in AWS you should be using an instance profile. Roles can be used too.
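A minimal sketch of the environment-variable route (the profile name my-profile is just a placeholder):

export AWS_PROFILE=my-profile   # profile defined in ~/.aws/credentials
terraform init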

If you hardcode the profile into your tf code then you have to have the same profile name wherever you want to run this script, and change it for every different account it's run against.

Don't - do all this command-line stuff, unless you like wrapper scripts or typing. Do - add yourself a remote_state.tf that looks like:

terraform {
  backend "s3" {
    bucket         = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key            = "mykey/terraform.tfstate"
    region         = "eu-west-1"
  }
}

Now when you terraform init:

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes.

The values in the provider aren't relevant to the perms for the remote_state and could even be different AWS accounts (or even another cloud provider).

Solution 5:[5]

I had the same issue and was using export AWS_PROFILE as I always had. I checked my credentials, which were correct.

Re-running aws configure fixed it for some reason.
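A quick way to sanity-check what the CLI actually resolves (aws sts get-caller-identity is a standard AWS CLI command that prints the account and ARN for the current credentials):

aws configure                 # re-enter access key, secret key, region, output format
aws sts get-caller-identity   # confirms the credentials are valid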

Solution 6:[6]

If you have already set up a custom AWS profile, use the option below.

terraform init -backend-config="profile=your-profile-name"

If there is no custom profile, then make sure to add the access key and secret key to the default profile and try again.
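For reference, a sketch of what a default profile in ~/.aws/credentials looks like (the values are placeholders):

[default]
aws_access_key_id     = <your access key>
aws_secret_access_key = <your secret key>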

Solution 7:[7]

In my case, I configured the AWS CLI with the proper access_key and secret_key, and it worked! Specifying the access_key and secret_key in the provider didn't work.

Solution 8:[8]

If you are using LocalStack, the only thing that worked for me was this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517

 backend "s3" {
    bucket                      = "curso-terraform"
    key                         = "terraform.tfstate"
    region                      = "us-east-1"
    endpoint                    = "http://localhost:4566"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
    dynamodb_table              = "terraform_state"
    dynamodb_endpoint           = "http://localhost:4566"
    encrypt                     = true
  }

And don't forget to add the endpoints in the provider:

provider "aws" {
  region                      = "us-east-1"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true
  s3_force_path_style         = true

  endpoints {
    ec2 = "http://localhost:4566"
    s3 = "http://localhost:4566"
    dynamodb = "http://localhost:4566"
  }
}

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1
Solution 2
Solution 3 Yaroslav Bres
Solution 4 James Woolfenden
Solution 5 mkkl
Solution 6 deepanmurugan
Solution 7 Ratul Das
Solution 8 adrianosymphony