I've recently stumbled upon an interesting access denied problem with S3-based Terraform state files. Suppose you have two or more Terraform root modules that use the same bucket for storing their state and differ only in the key (i.e. the state file):
terraform {
  backend "s3" {
    bucket = "terraform-state"
    key    = "root-module-1"
    region = "eu-central-1"
  }
}
and
terraform {
  backend "s3" {
    bucket = "terraform-state"
    key    = "root-module-2"
    region = "eu-central-1"
  }
}
Accessing the state file works in one root module, but in the other you get a generic S3 Access Denied error:
$ terraform state pull
Failed to load state: AccessDenied: Access Denied
status code: 403, request id: 033BB4A91223DCBF, host id: …
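A couple of quick AWS CLI checks tell you whether the credentials in the current shell are actually denied by S3. The bucket and key below are the ones from the example above; adjust them to your setup:
# which identity is the current shell actually using?
$ aws sts get-caller-identity
# can that identity read the state object directly?
$ aws s3api head-object --bucket terraform-state --key root-module-2
If both commands succeed, the bucket policy and the IAM policy are unlikely to be the culprit.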
And indeed, there is nothing wrong with the bucket policy or the user's IAM policy. Going through the full checklist for debugging S3 403 errors feels like overkill here. Fortunately, wiping out the .terraform folder in the broken root module is likely to help:
$ rm -rf .terraform
$ terraform init
--- snip ---
$ terraform state pull
{
  "version": 3,
  "terraform_version": "0.11.14",
  "serial": 1,
  "lineage": "bddc5c8d-d1d4-74d8-34af-a4a925090a06",
  "modules": [
    {
      "path": [
        "root"
      ]
--- snip ---
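If you'd rather keep the downloaded providers and modules instead of deleting the whole directory, terraform init -reconfigure is a gentler option: it should discard only the saved backend configuration. I haven't verified that it fixes this particular case, though.
$ terraform init -reconfigure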
I have not yet investigated the root cause of this problem, but it could be that having the AWS_* environment variables set in a Python virtualenv and accidentally using the wrong set of AWS keys (e.g. production instead of development) causes Terraform to cache something under .terraform that should not be there.
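If you want to check that theory the next time it happens, compare the credentials in the current environment with what terraform init has cached for the root module. In Terraform 0.11 the backend settings live in .terraform/terraform.tfstate, so the jq filter below assumes that layout:
# list which AWS_* variables are set (names only, to avoid printing secrets)
$ env | cut -d= -f1 | grep '^AWS_'
# which identity do they resolve to?
$ aws sts get-caller-identity
# what backend configuration has terraform init cached for this root module?
$ jq '.backend.config' .terraform/terraform.tfstate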
The next time this happens I will look closer and update this blog post. In the meantime, feel free to check our other Terraform-related posts on this blog and read up on the basics of Terraform. You might also want to read some higher-level theory on Infrastructure as Code.