Terraform S3 Bucket Object's etag keeps updating on each apply
I am uploading AWS Lambda code into an S3 bucket as zip files.
I have a resource declared for the S3 bucket object:
resource "aws_s3_bucket_object" "source-code-object" {
  bucket = "${aws_s3_bucket.my-bucket.id}"
  key    = "source-code.zip"
  source = "lambda_source_code/source-code.zip"
  etag   = "${base64sha256(file("lambda_source_code/source-code.zip"))}"
}
I also have a data declaration to zip up my code:
data "archive_file" "source-code-zip" {
  type        = "zip"
  source_file = "${path.module}/lambda_source_code/run.py"
  output_path = "${path.module}/lambda_source_code/source-code.zip"
}
The terraform apply output keeps showing a change to the hash:

~ aws_s3_bucket_object.source-code-object
    etag: "old_hash" => "new_hash"
Even though nothing in my source code has changed. Why is this happening? I've seen similar posts about Lambda source code continually changing, but my Lambdas are not actually being updated each time (I checked the last-updated time in the console). However, it does look like a new S3 bucket object is uploaded on every apply.
Solution 1:[1]
It's possible that your S3 bucket applies KMS encryption by default. When configured that way, the etag is not an MD5 digest of the file content (doc):
Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.
Without knowing the hashing implementation, there is no way to pre-compute the value for Terraform and produce a stable plan. Instead, you should use the source_hash argument to work around this limitation.
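Separately from KMS, note that the two values being compared are in different formats anyway: a plain (non-multipart, non-KMS) S3 ETag is a hex-encoded MD5 digest, while Terraform's base64sha256() yields a base64-encoded SHA-256 digest, so they can never match. A minimal Python sketch (the payload bytes are hypothetical, standing in for the zip file's contents) illustrating the two formats:

```python
import base64
import hashlib

# Hypothetical stand-in for the zip archive's bytes.
payload = b"example archive bytes"

# What a plain (non-KMS) S3 ETag looks like: a hex MD5 digest.
etag_style = hashlib.md5(payload).hexdigest()

# What Terraform's base64sha256() produces: base64-encoded SHA-256.
b64sha256_style = base64.b64encode(hashlib.sha256(payload).digest()).decode()

print(etag_style)       # 32 hex characters
print(b64sha256_style)  # 44 base64 characters
```

Because the formats differ, comparing them always reports a change, which is why source_hash (which Terraform stores and compares against itself) avoids the perpetual diff.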
Solution 2:[2]
Zip archives contain metadata by default, such as timestamps, which results in the hash differing even when the source files have not changed. When building the archive manually you can avoid this with the --no-extra or -X flag. I am not sure whether Terraform supports this flag.
From the zip man page:
-X
Do not save extra file attributes (Extended Attributes on OS/2, uid/gid and file times on Unix). The zip format uses extra fields to include additional information for each entry. Some extra fields are specific to particular systems while others are applicable to all systems. Normally when zip reads entries from an existing archive, it reads the extra fields it knows, strips the rest, and adds the extra fields applicable to that system. With -X, zip strips all old fields and only includes the Unicode and Zip64 extra fields (currently these two extra fields cannot be disabled).
Negating this option, -X-, includes all the default extra fields, but also copies over any unrecognized extra fields.
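To see why re-zipping identical content can change the hash, here is a small sketch using Python's standard zipfile module: two archives of the same file content hash differently when the entry timestamps differ, and identically when the timestamp is pinned to a fixed value.

```python
import hashlib
import io
import zipfile

def zip_bytes(content: bytes, date_time: tuple) -> bytes:
    """Build an in-memory zip containing one entry with a fixed timestamp."""
    buf = io.BytesIO()
    info = zipfile.ZipInfo("run.py", date_time=date_time)
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(info, content)
    return buf.getvalue()

src = b"print('hello')\n"

# Same content, different mtimes -> different archive hashes.
a = hashlib.sha256(zip_bytes(src, (2023, 1, 1, 0, 0, 0))).hexdigest()
b = hashlib.sha256(zip_bytes(src, (2023, 1, 2, 0, 0, 0))).hexdigest()

# Same content, pinned mtime -> identical hashes on every build.
c = hashlib.sha256(zip_bytes(src, (1980, 1, 1, 0, 0, 0))).hexdigest()
d = hashlib.sha256(zip_bytes(src, (1980, 1, 1, 0, 0, 0))).hexdigest()
```

The timestamp is embedded in each entry's header, so a hash over the whole archive changes with it even though the file content is byte-identical.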
Solution 3:[3]
To prevent an update on each apply with the newer aws_s3_object resource, you can use the output_base64sha256 attribute of the archive_file data source.
The aws_s3_bucket_object data source is DEPRECATED and will be removed in a future version! Use aws_s3_object instead, where new features and fixes will be added.
data "archive_file" "source-code-zip" {
  type        = "zip"
  source_file = "${path.module}/lambda_source_code/run.py"
  output_path = "${path.module}/lambda_source_code/source-code.zip"
}

resource "aws_s3_object" "source-code-object" {
  bucket = aws_s3_bucket.my-bucket.id
  key    = "source-code.zip"

  # we can also reference `output_path` from `archive_file`
  # so as not to repeat the path
  source      = data.archive_file.source-code-zip.output_path
  source_hash = data.archive_file.source-code-zip.output_base64sha256
}
output_base64sha256 has the added benefit of working with S3 objects encrypted using KMS, as @Matt F pointed out.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution   | Source
-----------|----------
Solution 1 |
Solution 2 | Community
Solution 3 | tjheslin1