Terraform error refreshing state: Access Denied

I'm using Bitbucket for both my repository and my pipelines. I have a Terraform config file with a remote state backend configured. It runs fine on my local machine, but it fails with an access denied error when running in Bitbucket Pipelines. Here's the main.tf:

terraform {
  backend "s3" {
    bucket = "zego-terraform-test"
    key    = "test/terraform.tfstate"
    region = "eu-west-1"
  }
}

data "terraform_remote_state" "remote_state" {
  backend = "s3"

  config {
    bucket = "zego-terraform-test"
    key    = "test/terraform.tfstate"
    region = "eu-west-1"
  }
}

variable "region" {}

provider "aws" {
  region     = "${var.region}"
  access_key = ""
  secret_key = ""
  token      = ""
}

module "vpc" {
  source = "./modules/vpc"
}

Here's my bitbucket-pipelines.yml:

image: python:3.5.1
pipelines:
  default:
    - step:
        caches:
          - pip
        script: # Modify the commands below to build your repository.
          - apt-get update
          - apt-get install unzip
          - wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
          - unzip terraform_0.11.7_linux_amd64.zip
          - rm terraform_0.11.7_linux_amd64.zip
          - export PATH="$PATH:${BITBUCKET_CLONE_DIR}"
          - terraform init
            -backend-config "access_key=$AWS_ACCESS_KEY"
            -backend-config "secret_key=$AWS_SECRET_KEY"
            -backend-config "token=$TOKEN"

When I run this pipeline I get this error:

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
    status code: 403

When I remove the remote state config it runs fine. Why am I getting the access denied error even though I'm using the same credentials on my local machine and in the Bitbucket environment?



Solution 1:[1]

I was getting the same error. For our use case, we had to manually remove the terraform.tfstate file under the .terraform/ directory and run terraform init again.
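A minimal sketch of that cleanup. The stand-in state file here is only to make the example self-contained, and the final terraform init step (commented out) assumes Terraform is installed:

```shell
# Terraform caches the configured backend in .terraform/terraform.tfstate;
# simulate a stale cached copy, then remove it before re-initializing.
mkdir -p .terraform
echo '{"backend": {"type": "s3"}}' > .terraform/terraform.tfstate  # stand-in stale cache
rm -f .terraform/terraform.tfstate
ls .terraform/terraform.tfstate 2>/dev/null || echo "stale cache removed"
# terraform init   # re-initialize the backend (requires terraform on the PATH)
```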

Solution 2:[2]

At first glance it seems reasonable. Have you tried putting terraform init and all the -backend-config arguments on one line? I wonder if the - at the beginning of each continuation line is confusing the YAML parser.
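A sketch of that suggestion as a pipeline excerpt, keeping the whole command as a single script entry (variable names taken from the question):

```yaml
# bitbucket-pipelines.yml (excerpt): the entire init command on one line
script:
  - terraform init -backend-config "access_key=$AWS_ACCESS_KEY" -backend-config "secret_key=$AWS_SECRET_KEY" -backend-config "token=$TOKEN"
```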

Solution 3:[3]

In case a solution has not been found for this issue: you can set either profile or role_arn in the config section of your terraform_remote_state stanza. The same options are available for the AWS provider and the backend configuration.

I chased this issue all day before realizing that role_arn was available for the terraform_remote_state data source.
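A sketch of that approach using the bucket from the question; the role ARN is a hypothetical placeholder, and the syntax matches the Terraform 0.11 style used above:

```hcl
# Assume an IAM role when reading the remote state (role ARN is hypothetical)
data "terraform_remote_state" "remote_state" {
  backend = "s3"

  config {
    bucket   = "zego-terraform-test"
    key      = "test/terraform.tfstate"
    region   = "eu-west-1"
    role_arn = "arn:aws:iam::123456789012:role/terraform-state-reader"  # hypothetical role
  }
}
```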

Solution 4:[4]

In my case, the backend config referenced by one of the data blocks in data.tf had a permission issue. I just recreated that file, ran terraform plan again, and the problem was sorted. Took ages to figure this out.

data "terraform_remote_state" "gateway" {
  backend = "s3"

  config = {
    bucket = "xxx-terraform-remote"
    key    = "xxx/terraform.tfstate"
    region = "eu-west-1"
  }
}

Solution 5:[5]

In my case, the issue was the order in which the AWS client looks for credentials.

I stored the AWS credentials used by Terraform in ~/.aws/credentials, but I also had different AWS credentials set in environment variables, which take precedence over the credentials file.

I had to remove the AWS credentials from my environment variables, and it worked.
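A sketch of that cleanup, assuming the standard AWS SDK environment variable names; the grep check just confirms nothing is left over:

```shell
# Environment variables take precedence over ~/.aws/credentials,
# so clear them to let the credentials file be picked up instead.
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
env | grep '^AWS_\(ACCESS_KEY_ID\|SECRET_ACCESS_KEY\|SESSION_TOKEN\)=' \
  || echo "no AWS credential env vars set"
```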

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: LeOn - Han Li
Solution 2: Joachim
Solution 3: Michael Reilly
Solution 4: Aravind Padigala
Solution 5: Wojciech Marusarz