Updating a bucket created in a Terraform file results in a BucketAlreadyOwnedByYou error
I need to add a policy to a bucket that I create earlier in the same Terraform file. However, applying this errors with:
Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
How can I amend my .tf file to create the bucket and then update it?
resource "aws_s3_bucket" "bucket" {
bucket = "my-new-bucket-123"
acl = "public-read"
region = "eu-west-1"
website {
index_document = "index.html"
}
}
data "aws_iam_policy_document" "s3_bucket_policy_document" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.bucket.arn}/*"]
principals {
type = "AWS"
identifiers = ["*"]
}
}
}
resource "aws_s3_bucket" "s3_bucket_policy" {
bucket = "${aws_s3_bucket.bucket.bucket}"
policy = "${data.aws_iam_policy_document.s3_bucket_policy_document.json}"
}
Solution 1
You should use the aws_s3_bucket_policy resource to add a bucket policy to an existing S3 bucket:
resource "aws_s3_bucket" "b" {
bucket = "my_tf_test_bucket"
}
resource "aws_s3_bucket_policy" "b" {
bucket = "${aws_s3_bucket.b.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
But if you are doing this all at the same time then it's probably worth just inlining the policy into the original aws_s3_bucket resource like this:
locals {
  bucket_name = "my-new-bucket-123"
}

resource "aws_s3_bucket" "bucket" {
  bucket = "${local.bucket_name}"
  acl    = "public-read"
  policy = "${data.aws_iam_policy_document.s3_bucket_policy_document.json}"
  region = "eu-west-1"

  website {
    index_document = "index.html"
  }
}

data "aws_iam_policy_document" "s3_bucket_policy_document" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::${local.bucket_name}/*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
  }
}
This builds the S3 ARN in the bucket policy by hand, to avoid a potential cycle error from trying to reference the output arn of the aws_s3_bucket resource.
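For illustration, here is a minimal sketch (reusing the names from this answer) of the circular reference that would arise if the inline policy document pointed back at the bucket's arn attribute; Terraform would reject this pair of blocks with a cycle error at plan time:
# Hypothetical broken variant: each block references the other.
resource "aws_s3_bucket" "bucket" {
  bucket = "my-new-bucket-123"

  # The bucket depends on the rendered policy document below...
  policy = "${data.aws_iam_policy_document.s3_bucket_policy_document.json}"
}

data "aws_iam_policy_document" "s3_bucket_policy_document" {
  statement {
    actions = ["s3:GetObject"]

    # ...while the policy document depends on the bucket's computed
    # ARN, completing the cycle.
    resources = ["${aws_s3_bucket.bucket.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
  }
}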
If you had created the bucket without the policy (by applying the Terraform configuration without the policy resource), then adding the policy argument to the aws_s3_bucket resource will cause Terraform to detect the drift, and the plan will show an update to the bucket that adds the policy.
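A quick way to confirm this, as a sketch of the expected workflow rather than exact output:
# With the policy argument newly added, the plan should report an in-place
# update to aws_s3_bucket.bucket (adding the policy) rather than a create.
terraform plan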
It's probably worth noting that the canned ACL used in the acl argument of the aws_s3_bucket resource overlaps with your policy and is unnecessary. You could use either the policy or the canned ACL to allow your S3 bucket to be read by all, but the public-read ACL also allows your bucket contents to be anonymously listed, like old-school Apache directory listings, which isn't what most people want.
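To see that listing behaviour for yourself: the AWS CLI can send unauthenticated requests, so with public-read set, anyone can enumerate the bucket's keys (bucket name taken from the question):
# Unsigned, anonymous request; succeeds against a public-read bucket
# and lists every object key in it.
aws s3 ls s3://my-new-bucket-123 --no-sign-request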
Solution 2
When setting up Terraform to use S3 as a backend for the first time, with a config similar to the one below:
# backend.tf
terraform {
  backend "s3" {
    bucket            = "<bucket_name>"
    region            = "eu-west-2"
    key               = "state"
    dynamodb_endpoint = "https://dynamodb.eu-west-2.amazonaws.com"
    dynamodb_table    = "<table_name>"
  }
}

resource "aws_s3_bucket" "<bucket_label>" {
  bucket = "<bucket_name>"

  lifecycle {
    prevent_destroy = true
  }
}
After creating the S3 bucket manually in the AWS console, run the following command to update the Terraform state and inform it that the S3 bucket already exists:
terraform import aws_s3_bucket.<bucket_label> <bucket_name>
The S3 bucket will now be in your Terraform state and will henceforth be managed by Terraform.
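As a worked (hypothetical) example, with a bucket labelled state_bucket and named my-terraform-state, the full adopt-and-verify sequence would be:
# Initialise the backend and provider plugins.
terraform init

# Adopt the manually created bucket into the Terraform state.
terraform import aws_s3_bucket.state_bucket my-terraform-state

# Verify: the plan should show no changes if the config matches the bucket.
terraform plan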
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow