Terraform data archive_file source directory with selected files and directories
I want to create an archive_file data source with only selected files and folders in its source_dir. My folder structure is as below: within the src directory I have a lambdasfolder directory, and within that a few folders and a set of files.
src
|- lambdasfolder
|  |- __init__.py
|  |- commonfolder (config.py, __init__.py)
|  |- lambda1folder (requesthandler.py, __init__.py)
|  |- lambda2folder (requesthandler.py, __init__.py)
|- testsfolder
|- otherfolder
|- ...
I want to build a source directory for the archive_file data source containing only selected folders and files: a src directory with just a single file and a couple of directories, matching the structure below.
src
|- lambdasfolder
|  |- __init__.py
|  |- commonfolder (config.py, __init__.py)
|  |- lambda1folder (requesthandler.py, __init__.py)
The examples I am finding, like the one below, zip the entire directory. How can I zip only the required files and folders?
data "archive_file" "lambda_source"{
type = "zip"
source_dir = "${path.module}/../src"
output_path = "${path.module}/temp/src.zip"
}
I have managed to get this working to some extent by creating a null_resource and making the archive_file depend on it.
resource "null_resource" "lambda-repo" {
triggers = {
#not sure on this
}
provisioner "local-exec" {
command = "bash lambda-repo.sh"
working_dir = "${path.module}"
}
}
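The empty triggers block is what keeps re-runs from detecting source changes. One common pattern, sketched here under the assumption that the sources live at `../src/lambdasfolder` relative to this module, is to hash every source file so any edit changes the trigger value and forces the provisioner to re-run:

```hcl
resource "null_resource" "lambda-repo" {
  triggers = {
    # Re-run whenever any file under the lambdas folder changes: hash each
    # file with filesha1() and combine the hashes into one trigger value.
    source_hash = sha1(join("", [
      for f in fileset("${path.module}/../src/lambdasfolder", "**") :
      filesha1("${path.module}/../src/lambdasfolder/${f}")
    ]))
  }

  provisioner "local-exec" {
    command     = "bash lambda-repo.sh"
    working_dir = "${path.module}"
  }
}
```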
data "archive_file" "lambda-repo-file" {
depends_on = [null_resource.lambda-repo]
type = "zip"
source_dir = "${path.module}/lambda_archive/lambda-repo"
output_path = "${path.module}/lambda_archive/lambda-repo.zip"
}
The shell script, lambda-repo.sh, is as follows:
#!/bin/sh
set -e
# Stage only the required folders and files under lambda_archive/lambda-repo
mkdir -p lambda_archive/lambda-repo/lambdasfolder/commonfolder
mkdir -p lambda_archive/lambda-repo/lambdasfolder/lambda1folder
touch lambda_archive/lambda-repo/lambdasfolder/__init__.py
cp -r ../src/lambdasfolder/commonfolder/. lambda_archive/lambda-repo/lambdasfolder/commonfolder
cp -r ../src/lambdasfolder/lambda1folder/. lambda_archive/lambda-repo/lambdasfolder/lambda1folder
Also, on the S3 object, I had to comment out the etag for the initial terraform apply:
resource "aws_s3_object" "lambda-repo" {
bucket = aws_s3_bucket.lambda-repo.id
key = "lambda-repo.zip"
source = data.archive_file.lambda-repo-file.output_path
#had to comment etag
#etag = filemd5(data.archive_file.lambda-repo-file.output_path)
}
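The etag problem usually comes from filemd5() being evaluated before the archive exists on the first apply. The archive_file data source exports its own hash attributes (output_md5, output_base64sha256), so a common workaround, sketched here and not tested against this exact setup, is to use one of those instead:

```hcl
resource "aws_s3_object" "lambda-repo" {
  bucket = aws_s3_bucket.lambda-repo.id
  key    = "lambda-repo.zip"
  source = data.archive_file.lambda-repo-file.output_path

  # output_md5 is computed by archive_file itself, so it is available even
  # on the first apply, unlike filemd5() on a file that does not exist yet.
  etag = data.archive_file.lambda-repo-file.output_md5
}
```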
Now I am left with two issues:
- Changes to the lambda functions are not detected when terraform apply is executed after the first run.
- How do I remove the folders and zip file created through the null_resource?
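For the cleanup issue, one simple approach, sketched here with the same staging paths as the script above, is to delete and recreate the staging directory at the top of lambda-repo.sh so every run starts clean, and to remove the whole lambda_archive directory when discarding everything:

```shell
#!/bin/sh
set -e
# Drop the staged tree from the previous run and recreate it empty, so
# stale files never end up in the new zip.
rm -rf lambda_archive/lambda-repo
mkdir -p lambda_archive/lambda-repo
# To discard everything the null_resource produced (staging tree and zip):
# rm -rf lambda_archive
```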
Solution 1:[1]
Sadly you can't do this, unless you want to use local-exec to create the zips in a fully custom way. Otherwise, you have to re-organize your folder structure to have fully separate folders for archive_file.
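The re-organization the answer suggests can be sketched as follows, assuming the deployable files are moved into a dedicated directory (the name src/lambda1_package below is hypothetical) containing only __init__.py, commonfolder and lambda1folder:

```hcl
# With a dedicated folder per archive, no shell script or null_resource is
# needed; archive_file zips the directory directly.
data "archive_file" "lambda_source" {
  type        = "zip"
  source_dir  = "${path.module}/../src/lambda1_package"  # hypothetical dedicated folder
  output_path = "${path.module}/temp/lambda1.zip"
}
```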
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Marcin |