Commit 53ca99fe authored by Anton Babenko

Rewrite to match other modules, added all existing S3 features

parent c5850e45
repos:
- repo: git://github.com/antonbabenko/pre-commit-terraform
  rev: v1.19.0
  hooks:
    - id: terraform_fmt
    - id: terraform_docs
- repo: git://github.com/pre-commit/pre-commit-hooks
  rev: v2.3.0
  hooks:
    - id: check-merge-conflict
# AWS S3 bucket Terraform module

Terraform module which creates an S3 bucket on AWS with all (or almost all) features provided by the Terraform AWS provider.

These types of resources are supported:

* [S3 bucket](https://www.terraform.io/docs/providers/aws/r/s3_bucket.html)

These features of S3 bucket configurations are supported:
- static web-site hosting
- access logging
- versioning
- CORS
- lifecycle rules
- server-side encryption
- object locking
- Cross-Region Replication (CRR)
## Terraform versions

Only Terraform 0.12 is supported.

## Usage
### Private bucket with versioning enabled

```hcl
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  bucket = "my-s3-bucket"
  acl    = "private"

  versioning = {
    enabled = true
  }
}
```
## Conditional creation

Sometimes you need a way to create S3 bucket resources conditionally, but Terraform does not allow using `count` inside a `module` block, so the solution is to specify the argument `create_bucket`.

```hcl
# This S3 bucket will not be created
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"

  create_bucket = false
  # ... omitted
}
```
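When the bucket is not created, the module's outputs fall back to empty strings (see `outputs.tf`, where every attribute is wrapped in `element(concat(..., list("")), 0)`). A minimal sketch of guarding a hypothetical consumer of the ARN output against that fallback:

```hcl
# Hypothetical consumer: the output is "" when create_bucket = false,
# so normalize it to null before passing it on.
locals {
  bucket_arn = module.s3_bucket.this_s3_bucket_arn != "" ? module.s3_bucket.this_s3_bucket_arn : null
}
```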
## Examples:
* [Complete](https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/master/examples/complete) - Complete S3 bucket with most of supported features enabled
* [Cross-Region Replication](https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/master/examples/s3-replication) - S3 bucket with Cross-Region Replication (CRR) enabled
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-----:|:-----:|
| acl | (Optional) The canned ACL to apply. Defaults to 'private'. | string | `"private"` | no |
| bucket | (Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique name. | string | `"null"` | no |
| bucket\_prefix | (Optional, Forces new resource) Creates a unique bucket name beginning with the specified prefix. Conflicts with bucket. | string | `"null"` | no |
| cors\_rule | Map containing a rule of Cross-Origin Resource Sharing. | any | `{}` | no |
| create\_bucket | Controls if S3 bucket should be created | bool | `"true"` | no |
| force\_destroy | (Optional, Default:false ) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable. | bool | `"false"` | no |
| lifecycle\_rule | List of maps containing configuration of object lifecycle management. | any | `[]` | no |
| logging | Map containing access bucket logging configuration. | map(string) | `{}` | no |
| object\_lock\_configuration | Map containing S3 object locking configuration. | any | `{}` | no |
| policy | (Optional) A valid bucket policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy. For more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide. | string | `"null"` | no |
| region | (Optional) If specified, the AWS region this bucket should reside in. Otherwise, the region used by the callee. | string | `"null"` | no |
| replication\_configuration | Map containing cross-region replication configuration. | any | `{}` | no |
| request\_payer | (Optional) Specifies who should bear the cost of Amazon S3 data transfer. Can be either BucketOwner or Requester. By default, the owner of the S3 bucket would incur the costs of any data transfer. See Requester Pays Buckets developer guide for more information. | string | `"null"` | no |
| server\_side\_encryption\_configuration | Map containing server-side encryption configuration. | any | `{}` | no |
| tags | (Optional) A mapping of tags to assign to the bucket. | map(string) | `{}` | no |
| versioning | Map containing versioning configuration. | map(string) | `{}` | no |
| website | Map containing static web-site hosting or redirect configuration. | map(string) | `{}` | no |
## Outputs

| Name | Description |
|------|-------------|
| this\_s3\_bucket\_arn | The ARN of the bucket. Will be of format arn:aws:s3:::bucketname. |
| this\_s3\_bucket\_bucket\_domain\_name | The bucket domain name. Will be of format bucketname.s3.amazonaws.com. |
| this\_s3\_bucket\_bucket\_regional\_domain\_name | The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL. |
| this\_s3\_bucket\_hosted\_zone\_id | The Route 53 Hosted Zone ID for this bucket's region. |
| this\_s3\_bucket\_id | The name of the bucket. |
| this\_s3\_bucket\_region | The AWS region this bucket resides in. |
| this\_s3\_bucket\_website\_domain | The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. |
| this\_s3\_bucket\_website\_endpoint | The website endpoint, if the bucket is configured with a website. If not, this will be an empty string. |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
# Complete S3 bucket with most of supported features enabled
Configuration in this directory creates an S3 bucket which demonstrates the following capabilities:
- static web-site hosting
- access logging
- versioning
- CORS
- lifecycle rules
- server-side encryption
- object locking
Please check the [S3 replication example](https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/master/examples/s3-replication) to see Cross-Region Replication (CRR) supported by this module.
## Usage
To run this example you need to execute:
```bash
$ terraform init
$ terraform plan
$ terraform apply
```
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Outputs
| Name | Description |
|------|-------------|
| this\_s3\_bucket\_arn | The ARN of the bucket. Will be of format arn:aws:s3:::bucketname. |
| this\_s3\_bucket\_bucket\_domain\_name | The bucket domain name. Will be of format bucketname.s3.amazonaws.com. |
| this\_s3\_bucket\_bucket\_regional\_domain\_name | The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL. |
| this\_s3\_bucket\_hosted\_zone\_id | The Route 53 Hosted Zone ID for this bucket's region. |
| this\_s3\_bucket\_id | The name of the bucket. |
| this\_s3\_bucket\_region | The AWS region this bucket resides in. |
| this\_s3\_bucket\_website\_domain | The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. |
| this\_s3\_bucket\_website\_endpoint | The website endpoint, if the bucket is configured with a website. If not, this will be an empty string. |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
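# Random suffix keeps the example bucket names globally unique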
resource "random_pet" "this" {
length = 2
}
resource "aws_kms_key" "objects" {
description = "KMS key is used to encrypt bucket objects"
deletion_window_in_days = 7
}
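# Bucket that receives access logs from the main bucket (hence the "log-delivery-write" canned ACL)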
module "log_bucket" {
source = "../../"
bucket = "logs-${random_pet.this.id}"
acl = "log-delivery-write"
force_destroy = true
}
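# Main bucket demonstrating web-site hosting, access logging, CORS, lifecycle rules, server-side encryption and object locking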
module "s3_bucket" {
source = "../../"
bucket = "s3-bucket-${random_pet.this.id}"
acl = "private"
force_destroy = true
tags = {
Owner = "Anton"
}
versioning = {
enabled = true
}
website = {
index_document = "index.html"
error_document = "error.html"
routing_rules = jsonencode([{
Condition : {
KeyPrefixEquals : "docs/"
},
Redirect : {
ReplaceKeyPrefixWith : "documents/"
}
}])
}
logging = {
target_bucket = module.log_bucket.this_s3_bucket_id
target_prefix = "log/"
}
cors_rule = {
allowed_methods = ["PUT", "POST"]
allowed_origins = ["https://modules.tf", "https://terraform-aws-modules.modules.tf"]
allowed_headers = ["*"]
expose_headers = ["ETag"]
max_age_seconds = 3000
}
lifecycle_rule = [
{
id = "log"
enabled = true
prefix = "log/"
tags = {
rule = "log"
autoclean = "true"
}
transition = [
{
days = 30
storage_class = "ONEZONE_IA"
}, {
days = 60
storage_class = "GLACIER"
}
]
expiration = {
days = 90
}
noncurrent_version_expiration = {
days = 30
}
},
{
id = "log1"
enabled = true
prefix = "log1/"
abort_incomplete_multipart_upload_days = 7
noncurrent_version_transition = [
{
days = 30
storage_class = "STANDARD_IA"
},
{
days = 60
storage_class = "ONEZONE_IA"
},
{
days = 90
storage_class = "GLACIER"
},
]
noncurrent_version_expiration = {
days = 300
}
},
]
server_side_encryption_configuration = {
rule = {
apply_server_side_encryption_by_default = {
kms_master_key_id = aws_kms_key.objects.arn
sse_algorithm = "aws:kms"
}
}
}
object_lock_configuration = {
object_lock_enabled = "Enabled"
rule = {
default_retention = {
mode = "COMPLIANCE"
years = 5
}
}
}
}
output "this_s3_bucket_id" {
description = "The name of the bucket."
value = module.s3_bucket.this_s3_bucket_id
}
output "this_s3_bucket_arn" {
description = "The ARN of the bucket. Will be of format arn:aws:s3:::bucketname."
value = module.s3_bucket.this_s3_bucket_arn
}
output "this_s3_bucket_bucket_domain_name" {
description = "The bucket domain name. Will be of format bucketname.s3.amazonaws.com."
value = module.s3_bucket.this_s3_bucket_bucket_domain_name
}
output "this_s3_bucket_bucket_regional_domain_name" {
description = "The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL."
value = module.s3_bucket.this_s3_bucket_bucket_regional_domain_name
}
output "this_s3_bucket_hosted_zone_id" {
description = "The Route 53 Hosted Zone ID for this bucket's region."
value = module.s3_bucket.this_s3_bucket_hosted_zone_id
}
output "this_s3_bucket_region" {
description = "The AWS region this bucket resides in."
value = module.s3_bucket.this_s3_bucket_region
}
output "this_s3_bucket_website_endpoint" {
description = "The website endpoint, if the bucket is configured with a website. If not, this will be an empty string."
value = module.s3_bucket.this_s3_bucket_website_endpoint
}
output "this_s3_bucket_website_domain" {
description = "The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. "
value = module.s3_bucket.this_s3_bucket_website_domain
}
variable "region" {
default = "us-west-2"
}
# Configure the AWS Provider
provider "aws" {
region = var.region
}
// Calling module:
module "aws_s3_bucket" {
source = "../.."
bucket = "s3-tf-example-cors"
acl = "private"
cors_rule_inputs = [
{
allowed_headers = ["*"]
allowed_methods = ["PUT", "POST"]
allowed_origins = ["https://s3-website-test.hashicorp.com", "https://s3-website-test.hashicorp.io"]
expose_headers = ["ETag"]
max_age_seconds = 3000
},
{
allowed_headers = ["*"]
allowed_methods = ["GET"]
allowed_origins = ["https://s3-website-test.hashicorp.io"]
expose_headers = ["ETag"]
max_age_seconds = 3000
},
]
}
variable "region" {
default = "us-west-2"
}
# Configure the AWS Provider
provider "aws" {
region = var.region
}
// Calling module:
module "aws_s3_bucket" {
source = "../.."
bucket = "s3-tf-example-lifecycle"
acl = "private"
lifecycle_rule_inputs = [{
id = "log"
enabled = true
prefix = "log/"
abort_incomplete_multipart_upload_days = null
tags = {
"rule" = "log"
"autoclean" = "true"
}
expiration_inputs = [{
days = 90
date = null
expired_object_delete_marker = null
},
]
transition_inputs = []
noncurrent_version_transition_inputs = []
noncurrent_version_expiration_inputs = []
},
{
id = "log1"
enabled = true
prefix = "log1/"
abort_incomplete_multipart_upload_days = null
tags = {
"rule" = "log1"
"autoclean" = "true"
}
expiration_inputs = []
transition_inputs = []
noncurrent_version_transition_inputs = [
{
days = 30
storage_class = "STANDARD_IA"
},
{
days = 60
storage_class = "ONEZONE_IA"
},
{
days = 90
storage_class = "GLACIER"
},
]
noncurrent_version_expiration_inputs = []
},
]
}
variable "region" {
default = "us-west-2"
}
# Configure the AWS Provider
provider "aws" {
region = var.region
}
// Calling module:
module "log_bucket" {
source = "../.."
bucket = "s3-tf-example-logger"
acl = "log-delivery-write"
}
module "aws_s3_bucket" {
source = "../.."
bucket = "s3-tf-example-logging"
acl = "private"
logging_inputs = [
{
target_bucket = "s3-tf-example-logger"
target_prefix = "log/"
},
]
}
# S3 bucket with Cross-Region Replication (CRR) enabled
Configuration in this directory creates an S3 bucket in one region and configures Cross-Region Replication (CRR) to another bucket in another region.
Please check [complete example](https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/master/examples/complete) to see all other features supported by this module.
## Usage
To run this example you need to execute:
```bash
$ terraform init
$ terraform plan
$ terraform apply
```
Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Outputs
| Name | Description |
|------|-------------|
| this\_s3\_bucket\_arn | The ARN of the bucket. Will be of format arn:aws:s3:::bucketname. |
| this\_s3\_bucket\_bucket\_domain\_name | The bucket domain name. Will be of format bucketname.s3.amazonaws.com. |
| this\_s3\_bucket\_bucket\_regional\_domain\_name | The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL. |
| this\_s3\_bucket\_hosted\_zone\_id | The Route 53 Hosted Zone ID for this bucket's region. |
| this\_s3\_bucket\_id | The name of the bucket. |
| this\_s3\_bucket\_region | The AWS region this bucket resides in. |
| this\_s3\_bucket\_website\_domain | The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. |
| this\_s3\_bucket\_website\_endpoint | The website endpoint, if the bucket is configured with a website. If not, this will be an empty string. |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
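# IAM role that Amazon S3 assumes when replicating objects to the destination bucket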
resource "aws_iam_role" "replication" {
name = "s3-bucket-replication-${random_pet.this.id}"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
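# Policy granting the replication role read access to the source bucket and replicate permissions on the destination bucket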
resource "aws_iam_policy" "replication" {
name = "s3-bucket-replication-${random_pet.this.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::${local.bucket_name}"
]
},
{
"Action": [
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::${local.bucket_name}/*"
]
},
{
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::${local.destination_bucket_name}/*"
}
]
}
POLICY
}
resource "aws_iam_policy_attachment" "replication" {
name = "s3-bucket-replication-${random_pet.this.id}"
roles = [aws_iam_role.replication.name]
policy_arn = aws_iam_policy.replication.arn
}
variable "region" { locals {
default = "ca-central-1" bucket_name = "origin-s3-bucket-${random_pet.this.id}"
destination_bucket_name = "replica-s3-bucket-${random_pet.this.id}"
origin_region = "eu-west-1"
replica_region = "eu-central-1"
} }
# Configure the AWS Provider
provider "aws" { provider "aws" {
region = var.region region = local.origin_region
} }
module "bucket" { provider "aws" {
source = "../.." region = local.replica_region
bucket = "s3-tf-example-replication"
alias = "replica"
}
data "aws_caller_identity" "current" {}
resource "random_pet" "this" {
length = 2
}
resource "aws_kms_key" "replica" {
provider = "aws.replica"
description = "S3 bucket replication KMS key"
deletion_window_in_days = 7
}
module "replica_bucket" {
source = "../../"
providers = {
aws = "aws.replica"
}
bucket = local.destination_bucket_name
region = local.replica_region
acl = "private" acl = "private"
versioning_inputs = [ versioning = {
{
enabled = true enabled = true
mfa_delete = null }
}, }
]
replication_configuration_inputs = [ module "s3_bucket" {
{ source = "../../"
role = "<ROLE_ARN>" // Place the IAM Role to access the destination bucket
bucket = local.bucket_name
region = local.origin_region
acl = "private"
rules_inputs = [ versioning = {
enabled = true
}
replication_configuration = {
role = aws_iam_role.replication.arn
rules = [
{ {
id = "foobar" id = "foo"
prefix = "foo"
status = "Enabled" status = "Enabled"
priority = null priority = 10
source_selection_criteria_inputs = null
filter_inputs = null
destination_inputs = [ source_selection_criteria = {
{ sse_kms_encrypted_objects = {
bucket = "<DESTINATION_BUCKET>" // Place the destination bicket ARN enabled = true
}
}
filter = {
prefix = "one"
tags = {
ReplicateMe = "Yes"
}
}
destination = {
bucket = "arn:aws:s3:::${local.destination_bucket_name}"
storage_class = "STANDARD" storage_class = "STANDARD"
replica_kms_key_id = null replica_kms_key_id = aws_kms_key.replica.arn
account_id = null account_id = data.aws_caller_identity.current.account_id
access_control_translation_inputs = null access_control_translation = {
owner = "Destination"
}
}
}, },
] {
}, id = "bar"
] status = "Enabled"
priority = 20
destination = {
bucket = "arn:aws:s3:::${local.destination_bucket_name}"
storage_class = "STANDARD"
}
filter = {
prefix = "two"
tags = {
ReplicateMe = "Yes"
}
}
}, },
] ]
}
} }
output "this_s3_bucket_id" {
description = "The name of the bucket."
value = module.s3_bucket.this_s3_bucket_id
}
output "this_s3_bucket_arn" {
description = "The ARN of the bucket. Will be of format arn:aws:s3:::bucketname."
value = module.s3_bucket.this_s3_bucket_arn
}
output "this_s3_bucket_bucket_domain_name" {
description = "The bucket domain name. Will be of format bucketname.s3.amazonaws.com."
value = module.s3_bucket.this_s3_bucket_bucket_domain_name
}
output "this_s3_bucket_bucket_regional_domain_name" {
description = "The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL."
value = module.s3_bucket.this_s3_bucket_bucket_regional_domain_name
}
output "this_s3_bucket_hosted_zone_id" {
description = "The Route 53 Hosted Zone ID for this bucket's region."
value = module.s3_bucket.this_s3_bucket_hosted_zone_id
}
output "this_s3_bucket_region" {
description = "The AWS region this bucket resides in."
value = module.s3_bucket.this_s3_bucket_region
}
output "this_s3_bucket_website_endpoint" {
description = "The website endpoint, if the bucket is configured with a website. If not, this will be an empty string."
value = module.s3_bucket.this_s3_bucket_website_endpoint
}
output "this_s3_bucket_website_domain" {
description = "The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. "
value = module.s3_bucket.this_s3_bucket_website_domain
}
variable "region" {
default = "us-west-2"
}
# Configure the AWS Provider
provider "aws" {
region = var.region
}
// Calling module:
module "aws_s3_bucket" {
source = "../.."
bucket = "s3-tf-example-versioning"
acl = "private"
versioning_inputs = [
{
enabled = true
mfa_delete = null
},
]
}
variable "region" {
default = "us-west-2"
}
# Configure the AWS Provider
provider "aws" {
region = var.region
}
// Calling module:
module "aws_s3_bucket" {
source = "../.."
bucket = "s3-tf-example-website"
acl = "private"
website_inputs = [
{
index_document = "index.html"
error_document = "error.html"
redirect_all_requests_to = null
routing_rules = <<EOF
[{
"Condition": {
"KeyPrefixEquals": "docs/"
},
"Redirect": {
"ReplaceKeyPrefixWith": "documents/"
}
}]
EOF
}
]
}
resource "aws_s3_bucket" "this" { resource "aws_s3_bucket" "this" {
count = var.create_bucket ? 1 : 0
bucket = var.bucket bucket = var.bucket
bucket_prefix = var.bucket_prefix bucket_prefix = var.bucket_prefix
acl = var.acl acl = var.acl
...@@ -10,121 +12,126 @@ resource "aws_s3_bucket" "this" { ...@@ -10,121 +12,126 @@ resource "aws_s3_bucket" "this" {
request_payer = var.request_payer request_payer = var.request_payer
dynamic "website" { dynamic "website" {
for_each = var.website_inputs == null ? [] : var.website_inputs for_each = length(keys(var.website)) == 0 ? [] : [var.website]
content { content {
index_document = website.value.index_document index_document = lookup(website.value, "index_document", null)
error_document = website.value.error_document error_document = lookup(website.value, "error_document", null)
redirect_all_requests_to = website.value.redirect_all_requests_to redirect_all_requests_to = lookup(website.value, "redirect_all_requests_to", null)
routing_rules = website.value.routing_rules routing_rules = lookup(website.value, "routing_rules", null)
} }
} }
dynamic "cors_rule" { dynamic "cors_rule" {
for_each = var.cors_rule_inputs == null ? [] : var.cors_rule_inputs for_each = length(keys(var.cors_rule)) == 0 ? [] : [var.cors_rule]
content { content {
allowed_headers = cors_rule.value.allowed_headers
allowed_methods = cors_rule.value.allowed_methods allowed_methods = cors_rule.value.allowed_methods
allowed_origins = cors_rule.value.allowed_origins allowed_origins = cors_rule.value.allowed_origins
expose_headers = cors_rule.value.expose_headers allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
max_age_seconds = cors_rule.value.max_age_seconds expose_headers = lookup(cors_rule.value, "expose_headers", null)
max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
} }
} }
dynamic "versioning" { dynamic "versioning" {
for_each = var.versioning_inputs == null ? [] : var.versioning_inputs for_each = length(keys(var.versioning)) == 0 ? [] : [var.versioning]
content { content {
enabled = versioning.value.enabled enabled = lookup(versioning.value, "enabled", null)
mfa_delete = versioning.value.mfa_delete mfa_delete = lookup(versioning.value, "mfa_delete", null)
} }
} }
dynamic "logging" { dynamic "logging" {
for_each = var.logging_inputs == null ? [] : var.logging_inputs for_each = length(keys(var.logging)) == 0 ? [] : [var.logging]
content { content {
target_bucket = logging.value.target_bucket target_bucket = logging.value.target_bucket
target_prefix = logging.value.target_prefix target_prefix = lookup(logging.value, "target_prefix", null)
} }
} }
dynamic "lifecycle_rule" { dynamic "lifecycle_rule" {
for_each = var.lifecycle_rule_inputs == null ? [] : var.lifecycle_rule_inputs for_each = var.lifecycle_rule
content { content {
id = lifecycle_rule.value.id id = lookup(lifecycle_rule.value, "id", null)
prefix = lifecycle_rule.value.prefix prefix = lookup(lifecycle_rule.value, "prefix", null)
tags = lifecycle_rule.value.tags tags = lookup(lifecycle_rule.value, "tags", null)
abort_incomplete_multipart_upload_days = lookup(lifecycle_rule.value, "abort_incomplete_multipart_upload_days", null)
enabled = lifecycle_rule.value.enabled enabled = lifecycle_rule.value.enabled
abort_incomplete_multipart_upload_days = lifecycle_rule.value.abort_incomplete_multipart_upload_days
# Max 1 block - expiration
dynamic "expiration" { dynamic "expiration" {
for_each = lifecycle_rule.value.expiration_inputs == null ? [] : lifecycle_rule.value.expiration_inputs for_each = length(keys(lookup(lifecycle_rule.value, "expiration", {}))) == 0 ? [] : [lookup(lifecycle_rule.value, "expiration", {})]
content { content {
date = expiration.value.date date = lookup(expiration.value, "date", null)
days = expiration.value.days days = lookup(expiration.value, "days", null)
expired_object_delete_marker = expiration.value.expired_object_delete_marker expired_object_delete_marker = lookup(expiration.value, "expired_object_delete_marker", null)
} }
} }
# Several blocks - transition
dynamic "transition" { dynamic "transition" {
for_each = lifecycle_rule.value.transition_inputs == null ? [] : lifecycle_rule.value.transition_inputs for_each = lookup(lifecycle_rule.value, "transition", [])
content { content {
date = transition.value.date date = lookup(transition.value, "date", null)
days = transition.value.days days = lookup(transition.value, "days", null)
storage_class = transition.value.storage_class storage_class = transition.value.storage_class
} }
} }
dynamic "noncurrent_version_transition" { # Max 1 block - noncurrent_version_expiration
for_each = lifecycle_rule.value.noncurrent_version_transition_inputs == null ? [] : lifecycle_rule.value.noncurrent_version_transition_inputs dynamic "noncurrent_version_expiration" {
for_each = length(keys(lookup(lifecycle_rule.value, "noncurrent_version_expiration", {}))) == 0 ? [] : [lookup(lifecycle_rule.value, "noncurrent_version_expiration", {})]
content { content {
days = noncurrent_version_transition.value.days days = lookup(noncurrent_version_expiration.value, "days", null)
storage_class = noncurrent_version_transition.value.storage_class
} }
} }
dynamic "noncurrent_version_expiration" { # Several blocks - noncurrent_version_transition
for_each = lifecycle_rule.value.noncurrent_version_expiration_inputs == null ? [] : lifecycle_rule.value.noncurrent_version_expiration_inputs dynamic "noncurrent_version_transition" {
for_each = lookup(lifecycle_rule.value, "noncurrent_version_transition", [])
content { content {
days = noncurrent_version_expiration.value.days days = lookup(noncurrent_version_transition.value, "days", null)
storage_class = noncurrent_version_transition.value.storage_class
} }
} }
} }
} }
# Max 1 block - replication_configuration
dynamic "replication_configuration" { dynamic "replication_configuration" {
for_each = var.replication_configuration_inputs == null ? [] : var.replication_configuration_inputs for_each = length(keys(var.replication_configuration)) == 0 ? [] : [var.replication_configuration]
content { content {
role = replication_configuration.value.role role = replication_configuration.value.role
dynamic "rules" { dynamic "rules" {
for_each = replication_configuration.value.rules_inputs == null ? [] : replication_configuration.value.rules_inputs for_each = replication_configuration.value.rules
content { content {
id = rules.value.id id = lookup(rules.value, "id", null)
// priority = rules.value.priority priority = lookup(rules.value, "priority", null)
prefix = rules.value.prefix prefix = lookup(rules.value, "prefix", null)
status = rules.value.status status = lookup(rules.value, "status", null)
dynamic "destination" { dynamic "destination" {
for_each = rules.value.destination_inputs == null ? [] : rules.value.destination_inputs for_each = length(keys(lookup(rules.value, "destination", {}))) == 0 ? [] : [lookup(rules.value, "destination", {})]
content { content {
bucket = destination.value.bucket bucket = lookup(destination.value, "bucket", null)
storage_class = destination.value.storage_class storage_class = lookup(destination.value, "storage_class", null)
replica_kms_key_id = destination.value.replica_kms_key_id replica_kms_key_id = lookup(destination.value, "replica_kms_key_id", null)
account_id = destination.value.account_id account_id = lookup(destination.value, "account_id", null)
dynamic "access_control_translation" { dynamic "access_control_translation" {
for_each = destination.value.access_control_translation_inputs == null ? [] : destination.value.access_control_translation_inputs for_each = length(keys(lookup(destination.value, "access_control_translation", {}))) == 0 ? [] : [lookup(destination.value, "access_control_translation", {})]
content { content {
owner = access_control_translation.value.owner owner = access_control_translation.value.owner
...@@ -134,64 +141,79 @@ resource "aws_s3_bucket" "this" { ...@@ -134,64 +141,79 @@ resource "aws_s3_bucket" "this" {
} }
dynamic "source_selection_criteria" { dynamic "source_selection_criteria" {
for_each = rules.value.source_selection_criteria_inputs == null ? [] : rules.value.source_selection_criteria_inputs for_each = length(keys(lookup(rules.value, "source_selection_criteria", {}))) == 0 ? [] : [lookup(rules.value, "source_selection_criteria", {})]
content { content {
sse_kms_encrypted_objects {
enabled = source_selection_criteria.value.enabled dynamic "sse_kms_encrypted_objects" {
for_each = length(keys(lookup(source_selection_criteria.value, "sse_kms_encrypted_objects", {}))) == 0 ? [] : [lookup(source_selection_criteria.value, "sse_kms_encrypted_objects", {})]
content {
enabled = sse_kms_encrypted_objects.value.enabled
} }
} }
} }
/* }
dynamic "filter" { dynamic "filter" {
for_each = rules.value.filter_inputs == null ? [] : rules.value.filter_inputs for_each = length(keys(lookup(rules.value, "filter", {}))) == 0 ? [] : [lookup(rules.value, "filter", {})]
content { content {
prefix = filter.value.prefix prefix = lookup(filter.value, "prefix", null)
tags = filter.value.tags tags = lookup(filter.value, "tags", null)
} }
} }
*/
} }
} }
} }
} }
# Max 1 block - server_side_encryption_configuration
dynamic "server_side_encryption_configuration" { dynamic "server_side_encryption_configuration" {
for_each = var.server_side_encryption_configuration_inputs == null ? [] : var.server_side_encryption_configuration_inputs for_each = length(keys(var.server_side_encryption_configuration)) == 0 ? [] : [var.server_side_encryption_configuration]
content { content {
rule {
apply_server_side_encryption_by_default { dynamic "rule" {
sse_algorithm = server_side_encryption_configuration.value.sse_algorithm for_each = length(keys(lookup(server_side_encryption_configuration.value, "rule", {}))) == 0 ? [] : [lookup(server_side_encryption_configuration.value, "rule", {})]
kms_master_key_id = server_side_encryption_configuration.value.kms_master_key_id
content {
dynamic "apply_server_side_encryption_by_default" {
for_each = length(keys(lookup(rule.value, "apply_server_side_encryption_by_default", {}))) == 0 ? [] : [
lookup(rule.value, "apply_server_side_encryption_by_default", {})]
content {
sse_algorithm = apply_server_side_encryption_by_default.value.sse_algorithm
kms_master_key_id = apply_server_side_encryption_by_default.value.kms_master_key_id
}
}
} }
} }
} }
} }
/*
# Max 1 block - object_lock_configuration
dynamic "object_lock_configuration" { dynamic "object_lock_configuration" {
for_each = var.object_lock_configuration_inputs == null ? [] : var.object_lock_configuration_inputs for_each = length(keys(var.object_lock_configuration)) == 0 ? [] : [var.object_lock_configuration]
content { content {
object_lock_enabled = object_lock_configuration.value.object_lock_enabled object_lock_enabled = object_lock_configuration.value.object_lock_enabled
dynamic "rule" { dynamic "rule" {
for_each = object_lock_configuration.value.rule_inputs == null ? [] : object_lock_configuration.value.rule_inputs for_each = length(keys(lookup(object_lock_configuration.value, "rule", {}))) == 0 ? [] : [lookup(object_lock_configuration.value, "rule", {})]
content { content {
default_retention { default_retention {
mode = rule.value.mode mode = lookup(lookup(rule.value, "default_retention", {}), "mode")
days = rule.value.days days = lookup(lookup(rule.value, "default_retention", {}), "days", null)
years = rule.value.years years = lookup(lookup(rule.value, "default_retention", {}), "years", null)
} }
} }
} }
} }
} }
*/
}
}
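For readers unfamiliar with the idiom above, here is a minimal standalone sketch (hypothetical variable name) of the `length(keys(...)) == 0 ? [] : [...]` pattern used for blocks that may appear at most once: an empty map renders no block, a non-empty map renders exactly one.

```hcl
variable "example_settings" {
  description = "Hypothetical optional configuration map"
  type        = map(string)
  default     = {} # e.g. { enabled = "true" } would render the block once
}

locals {
  # [] when the map is empty, a single-element list otherwise
  example_for_each = length(keys(var.example_settings)) == 0 ? [] : [var.example_settings]
}

output "rendered_block_count" {
  value = length(local.example_for_each)
}
```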
output "id" { output "this_s3_bucket_id" {
description = "The name of the bucket." description = "The name of the bucket."
value = element(concat(aws_s3_bucket.this.*.id, list("")), 0) value = element(concat(aws_s3_bucket.this.*.id, list("")), 0)
} }
output "arn" { output "this_s3_bucket_arn" {
description = "The ARN of the bucket. Will be of format arn:aws:s3:::bucketname." description = "The ARN of the bucket. Will be of format arn:aws:s3:::bucketname."
value = element(concat(aws_s3_bucket.this.*.arn, list("")), 0) value = element(concat(aws_s3_bucket.this.*.arn, list("")), 0)
} }
output "bucket_domain_name" { output "this_s3_bucket_bucket_domain_name" {
description = "The bucket domain name. Will be of format bucketname.s3.amazonaws.com." description = "The bucket domain name. Will be of format bucketname.s3.amazonaws.com."
value = element(concat(aws_s3_bucket.this.*.bucket_domain_name, list("")), 0) value = element(concat(aws_s3_bucket.this.*.bucket_domain_name, list("")), 0)
} }
output "bucket_regional_domain_name" { output "this_s3_bucket_bucket_regional_domain_name" {
description = "The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL." description = "The bucket region-specific domain name. The bucket domain name including the region name, please refer here for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent redirect issues from CloudFront to S3 Origin URL."
value = element(concat(aws_s3_bucket.this.*.bucket_regional_domain_name, list("")), 0) value = element(concat(aws_s3_bucket.this.*.bucket_regional_domain_name, list("")), 0)
} }
output "hosted_zone_id" { output "this_s3_bucket_hosted_zone_id" {
description = "The Route 53 Hosted Zone ID for this bucket's region." description = "The Route 53 Hosted Zone ID for this bucket's region."
value = element(concat(aws_s3_bucket.this.*.hosted_zone_id, list("")), 0) value = element(concat(aws_s3_bucket.this.*.hosted_zone_id, list("")), 0)
} }
output "region" { output "this_s3_bucket_region" {
description = "The AWS region this bucket resides in." description = "The AWS region this bucket resides in."
value = element(concat(aws_s3_bucket.this.*.region, list("")), 0) value = element(concat(aws_s3_bucket.this.*.region, list("")), 0)
} }
output "website_endpoint" { output "this_s3_bucket_website_endpoint" {
description = "The website endpoint, if the bucket is configured with a website. If not, this will be an empty string." description = "The website endpoint, if the bucket is configured with a website. If not, this will be an empty string."
value = element(concat(aws_s3_bucket.this.*.website_endpoint, list("")), 0) value = element(concat(aws_s3_bucket.this.*.website_endpoint, list("")), 0)
} }
output "website_domain" { output "this_s3_bucket_website_domain" {
description = "The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. " description = "The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string. This is used to create Route 53 alias records. "
value = element(concat(aws_s3_bucket.this.*.website_domain, list("")), 0) value = element(concat(aws_s3_bucket.this.*.website_domain, list("")), 0)
} }
variable "create_bucket" {
description = "Controls if S3 bucket should be created"
type = bool
default = true
}
variable "bucket" { variable "bucket" {
description = "(Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique name." description = "(Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique name."
type = string
default = null default = null
} }
variable "bucket_prefix" { variable "bucket_prefix" {
description = "(Optional, Forces new resource) Creates a unique bucket name beginning with the specified prefix. Conflicts with bucket." description = "(Optional, Forces new resource) Creates a unique bucket name beginning with the specified prefix. Conflicts with bucket."
type = string
default = null default = null
} }
variable "acl" { variable "acl" {
description = "(Optional) The canned ACL to apply. Defaults to 'private'." description = "(Optional) The canned ACL to apply. Defaults to 'private'."
type = string
default = "private" default = "private"
} }
variable "policy" { variable "policy" {
description = "(Optional) A valid bucket policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy. For more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide." description = "(Optional) A valid bucket policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy. For more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide."
type = string
default = null default = null
} }
variable "tags" { variable "tags" {
description = "(Optional) A mapping of tags to assign to the bucket." description = "(Optional) A mapping of tags to assign to the bucket."
type = map(string)
default = {} default = {}
} }
variable "force_destroy" { variable "force_destroy" {
description = "(Optional, Default:false ) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable." description = "(Optional, Default:false ) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. These objects are not recoverable."
type = bool
default = false default = false
} }
variable "acceleration_status" { variable "acceleration_status" {
description = "(Optional) Sets the accelerate configuration of an existing bucket. Can be Enabled or Suspended." description = "(Optional) Sets the accelerate configuration of an existing bucket. Can be Enabled or Suspended."
type = string
default = null default = null
} }
variable "region" { variable "region" {
description = "(Optional) If specified, the AWS region this bucket should reside in. Otherwise, the region used by the callee." description = "(Optional) If specified, the AWS region this bucket should reside in. Otherwise, the region used by the callee."
type = string
default = null default = null
} }
variable "request_payer" { variable "request_payer" {
description = "(Optional) Specifies who should bear the cost of Amazon S3 data transfer. Can be either BucketOwner or Requester. By default, the owner of the S3 bucket would incur the costs of any data transfer. See Requester Pays Buckets developer guide for more information." description = "(Optional) Specifies who should bear the cost of Amazon S3 data transfer. Can be either BucketOwner or Requester. By default, the owner of the S3 bucket would incur the costs of any data transfer. See Requester Pays Buckets developer guide for more information."
type = string
default = null default = null
} }
variable "website_inputs" { variable "website" {
type = list(object({ description = "Map containing static web-site hosting or redirect configuration."
index_document = string type = map(string)
error_document = string default = {}
redirect_all_requests_to = string
routing_rules = string
}))
default = null
} }
variable "cors_rule_inputs" { variable "cors_rule" {
type = list(object({ description = "Map containing a rule of Cross-Origin Resource Sharing."
allowed_headers = list(string) type = any # should be `map`, but it produces an error "all map elements must have the same type"
allowed_methods = list(string) default = {}
allowed_origins = list(string)
expose_headers = list(string)
max_age_seconds = number
}))
default = null
} }
variable "versioning_inputs" { variable "versioning" {
type = list(object({ description = "Map containing versioning configuration."
enabled = string type = map(string)
mfa_delete = string default = {}
}))
default = null
} }
variable "logging_inputs" { variable "logging" {
type = list(object({ description = "Map containing access bucket logging configuration."
target_bucket = string type = map(string)
target_prefix = string default = {}
}))
default = null
} }
// Lifecycle rules variables: variable "lifecycle_rule" {
variable "lifecycle_rule_inputs" { description = "List of maps containing configuration of object lifecycle management."
type = list(object({ type = any
id = string default = []
prefix = string
tags = map(string)
enabled = string
abort_incomplete_multipart_upload_days = string
expiration_inputs = list(object({
date = string
days = number
expired_object_delete_marker = string
}))
transition_inputs = list(object({
date = string
days = number
storage_class = string
}))
noncurrent_version_transition_inputs = list(object({
days = number
storage_class = string
}))
noncurrent_version_expiration_inputs = list(object({
days = number
}))
}))
default = null
} }
// Replication configuration variables: variable "replication_configuration" {
variable "replication_configuration_inputs" { description = "Map containing cross-region replication configuration."
type = list(object({ type = any
role = string default = {}
rules_inputs = list(object({
id = string
// priority = number
prefix = string
status = string
destination_inputs = list(object({
bucket = string
storage_class = string
replica_kms_key_id = string
account_id = string
access_control_translation_inputs = list(object({
owner = string
}))
}))
source_selection_criteria_inputs = list(object({
enabled = string
}))
/* filter_inputs = list(object({
prefix = string
tags = map(string)
}))
*/
}))
}))
default = null
} }
variable "server_side_encryption_configuration" {
// Server side encryption config: description = "Map containing server-side encryption configuration."
variable "server_side_encryption_configuration_inputs" { type = any
type = list(object({ default = {}
sse_algorithm = string
kms_master_key_id = string
}))
default = null
} }
//Object lock config variable "object_lock_configuration" {
/* description = "Map containing S3 object locking configuration."
variable "object_lock_configuration_inputs" { type = any
type = list(object({ default = {}
object_lock_enabled = string
rule_inputs = list(object({
mode = string
days = number
years = number
}))
}))
default = null
} }
*/