
Backend Type: s3

Stores the state as a given key in a given bucket on Amazon S3. This backend also supports state locking and consistency checking via DynamoDB, which can be enabled by setting the dynamodb_table field to an existing DynamoDB table name. A single DynamoDB table can be used to lock multiple remote state files. OpenTofu generates lock key names that include the values of the bucket and key variables.

Example Configuration

Code Block
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

This assumes a bucket named mybucket has already been created. The OpenTofu state is written to the key path/to/my/key.

Note that for the access credentials we recommend using a partial configuration.
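
One way to do that, sketched below, is to keep the credential arguments in a separate file and supply them at initialization time with tofu init -backend-config=backend.hcl. The file name backend.hcl and the values shown are only placeholders.

Code Block
# backend.hcl -- hypothetical partial backend configuration, kept outside
# version control; the values below are placeholders.
access_key = "AKIA-EXAMPLE"   # or rely on AWS_ACCESS_KEY_ID instead
secret_key = "REPLACE_ME"     # or rely on AWS_SECRET_ACCESS_KEY instead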

S3 Bucket Permissions

OpenTofu will need the following AWS IAM permissions on the target backend bucket:

  • s3:ListBucket on arn:aws:s3:::mybucket
  • s3:GetObject on arn:aws:s3:::mybucket/path/to/my/key
  • s3:PutObject on arn:aws:s3:::mybucket/path/to/my/key
  • s3:DeleteObject on arn:aws:s3:::mybucket/path/to/my/key

This is shown in the following AWS IAM policy statement:

Code Block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    }
  ]
}

DynamoDB Table Permissions

If you are using state locking, OpenTofu will need the following AWS IAM permissions on the DynamoDB table (arn:aws:dynamodb:::table/mytable):

  • dynamodb:DescribeTable
  • dynamodb:GetItem
  • dynamodb:PutItem
  • dynamodb:DeleteItem

This is shown in the following AWS IAM policy statement:

Code Block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/mytable"
    }
  ]
}

Data Source Configuration

To make use of the S3 remote state in another configuration, use the terraform_remote_state data source.

Code Block
data "terraform_remote_state" "network" {
backend = "s3"
config = {
bucket = "tofu-state-prod"
key = "network/terraform.tfstate"
region = "us-east-1"
}
}

The terraform_remote_state data source will return all of the root module outputs defined in the referenced remote state (but not any outputs from nested modules unless they are explicitly output again in the root). An example output might look like:

Code Block
data.terraform_remote_state.network:
  id = 2016-10-29 01:57:59.780010914 +0000 UTC
  addresses.# = 2
  addresses.0 = 52.207.220.222
  addresses.1 = 54.196.78.166
  backend = s3
  config.% = 3
  config.bucket = tofu-state-prod
  config.key = network/terraform.tfstate
  config.region = us-east-1
  elb_address = web-elb-790251200.us-east-1.elb.amazonaws.com
  public_subnet_id = subnet-1e05dd33
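
To use one of these root module outputs elsewhere in a configuration, reference it through the outputs attribute of the data source. The following minimal sketch assumes a hypothetical aws_instance resource with placeholder values:

Code Block
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # Subnet ID read from the root module outputs of the referenced remote state.
  subnet_id = data.terraform_remote_state.network.outputs.public_subnet_id
}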

Configuration

This backend requires the configuration of the AWS Region and S3 state storage. Other configuration, such as enabling DynamoDB state locking, is optional.

Credentials and Shared Configuration

The following configuration is required:

  • region - (Required) AWS Region of the S3 Bucket and DynamoDB Table (if used). This can also be sourced from the AWS_DEFAULT_REGION and AWS_REGION environment variables.

The following configuration is optional:

  • access_key - (Optional) AWS access key. If configured, must also configure secret_key. This can also be sourced from the AWS_ACCESS_KEY_ID environment variable, AWS shared credentials file (e.g. ~/.aws/credentials), or AWS shared configuration file (e.g. ~/.aws/config).
  • secret_key - (Optional) AWS secret access key. If configured, must also configure access_key. This can also be sourced from the AWS_SECRET_ACCESS_KEY environment variable, AWS shared credentials file (e.g. ~/.aws/credentials), or AWS shared configuration file (e.g. ~/.aws/config).
  • iam_endpoint - (Optional, Deprecated) Custom endpoint for the AWS Identity and Access Management (IAM) API. This can also be sourced from the AWS_IAM_ENDPOINT environment variable.
  • max_retries - (Optional) The maximum number of times an AWS API request is retried on retryable failure. Defaults to 5.
  • retry_mode - (Optional) Specifies how retries are attempted. Valid values are standard and adaptive. This can also be sourced from the AWS_RETRY_MODE environment variable.
  • profile - (Optional) Name of AWS profile in AWS shared credentials file (e.g. ~/.aws/credentials) or AWS shared configuration file (e.g. ~/.aws/config) to use for credentials and/or configuration (see the sketch after this list). This can also be sourced from the AWS_PROFILE environment variable.
  • shared_credentials_file - (Optional, Deprecated) Path to the AWS shared credentials file. Defaults to ~/.aws/credentials.
  • shared_credentials_files - (Optional) List of paths to AWS shared credentials files. Defaults to ~/.aws/credentials. This can also be sourced from the AWS_SHARED_CREDENTIALS_FILE environment variable.
  • shared_config_files - (Optional) List of paths to AWS shared configuration files. Defaults to ~/.aws/config. This can also be sourced from the AWS_SHARED_CONFIG_FILE environment variable.
  • skip_s3_checksum - (Optional) Do not include the checksum when uploading S3 objects. Useful for non-AWS, S3-compatible APIs that do not support checksum validation.
  • skip_credentials_validation - (Optional) Skip credentials validation via the STS API.
  • skip_region_validation - (Optional) Skip validation of provided region name.
  • skip_metadata_api_check - (Optional) Skip usage of EC2 Metadata API.
  • skip_requesting_account_id - (Optional) Skip requesting the account ID. Useful for AWS API implementations that do not offer the IAM, STS, or metadata APIs.
  • sts_endpoint - (Optional, Deprecated) Custom endpoint for the AWS Security Token Service (STS) API. This can also be sourced from the AWS_STS_ENDPOINT environment variable.
  • sts_region - (Optional) AWS region for STS. If unset, AWS will use the same region for STS as other non-STS operations.
  • token - (Optional) Multi-Factor Authentication (MFA) token. This can also be sourced from the AWS_SESSION_TOKEN environment variable.
  • allowed_account_ids - (Optional) List of permitted AWS account IDs to safeguard against accidental disruption of a live environment. This option conflicts with forbidden_account_ids.
  • forbidden_account_ids - (Optional) List of prohibited AWS account IDs to prevent unintentional disruption of a live environment. This option conflicts with allowed_account_ids.
  • custom_ca_bundle - (Optional) File containing custom root and intermediate certificates. This can also be sourced from the AWS_CA_BUNDLE environment variable.
  • ec2_metadata_service_endpoint - (Optional) Address of the EC2 metadata service (IMDS) endpoint to use. This can also be sourced from the AWS_EC2_METADATA_SERVICE_ENDPOINT environment variable.
  • ec2_metadata_service_endpoint_mode - (Optional) Mode to use in communicating with the metadata service. Valid values are IPv4 and IPv6. This can also be sourced from the AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE environment variable.
  • http_proxy - (Optional) The address of an HTTP proxy to use when accessing the AWS API. This can also be sourced from the HTTP_PROXY environment variable.
  • https_proxy - (Optional) The address of an HTTPS proxy to use when accessing the AWS API. This can also be sourced from the HTTPS_PROXY environment variable.
  • no_proxy - (Optional) Comma-separated list of hosts that should be excluded from proxying when accessing the AWS API. This can also be sourced from the NO_PROXY environment variable.
  • insecure - (Optional) Explicitly allow the backend to perform "insecure" SSL requests; default is false.
  • use_dualstack_endpoint - (Optional) Resolve an endpoint with DualStack capability.
  • use_fips_endpoint - (Optional) Resolve an endpoint with FIPS capability.
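
As a hedged sketch of the profile-based options above, the backend can authenticate through a named profile and non-default shared configuration files; the profile name and file paths are placeholders:

Code Block
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"

    # Placeholder profile and file locations; adjust to your environment.
    profile                  = "my-admin-profile"
    shared_credentials_files = ["/home/user/.aws/credentials"]
    shared_config_files      = ["/home/user/.aws/config"]
  }
}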

Customizing AWS API Endpoints

The optional endpoints argument contains the following options:

  • s3 - (Optional) Use this to set a custom endpoint URL for the AWS S3 API. This can also be sourced from the AWS_ENDPOINT_URL_S3 environment variable or the deprecated environment variable AWS_S3_ENDPOINT.
  • iam - (Optional) Use this to set a custom endpoint URL for the AWS IAM API. This can also be sourced from the AWS_ENDPOINT_URL_IAM environment variable or the deprecated environment variable AWS_IAM_ENDPOINT.
  • sts - (Optional) Use this to set a custom endpoint URL for the AWS STS API. This can also be sourced from the AWS_ENDPOINT_URL_STS environment variable or the deprecated environment variable AWS_STS_ENDPOINT.
  • dynamodb - (Optional) Use this to set a custom endpoint URL for the AWS DynamoDB API. This can also be sourced from the AWS_ENDPOINT_URL_DYNAMODB environment variable or the deprecated environment variable AWS_DYNAMODB_ENDPOINT.
Code Block
terraform {
  backend "s3" {
    endpoints {
      dynamodb = "http://localhost:4569"
      s3       = "http://localhost:4572"
    }
  }
}

Assume Role Configuration

Assuming an IAM Role is optional and can be configured in two ways. The preferred way is to use the assume_role argument, as the other method is deprecated.

The assume_role argument contains the following arguments (a combined example follows the list):

  • role_arn - (Required) The Amazon Resource Name (ARN) of the IAM Role to be assumed.
  • duration - (Optional) Specifies the validity period for individual credentials. These credentials are automatically renewed, with the maximum renewal defined by the AWS account. The duration should be specified in the format <hours>h<minutes>m<seconds>s, with each unit being optional. For example, an hour and a half can be represented as 1h30m or simply 90m. The duration must be within the range of 15 minutes (15m) to 12 hours (12h).
  • external_id - (Optional) An external identifier to use when assuming the role.
  • policy - (Optional) JSON representation of an IAM Policy that further restricts permissions for the IAM Role being assumed.
  • policy_arns - (Optional) A set of Amazon Resource Names (ARNs) for IAM Policies that further limit permissions for the assumed IAM Role.
  • session_name - (Optional) The session name to be used when assuming the role.
  • tags - (Optional) A map of tags to be associated with the assumed role session.
  • transitive_tag_keys - (Optional) A set of tag keys from the assumed role session to be passed to any subsequent sessions.
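
As a hedged sketch combining several of these arguments (the role ARN, session name, external ID, and duration are placeholders):

Code Block
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "my/key.tfstate"
    region = "us-east-1"

    assume_role = {
      role_arn     = "arn:aws:iam::ACCOUNT-ID:role/Opentofu" # placeholder role ARN
      session_name = "opentofu-backend"                      # placeholder session name
      external_id  = "my-external-id"                        # placeholder external ID
      duration     = "1h30m"
    }
  }
}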

The following arguments on the top level are deprecated:

  • assume_role_duration_seconds - (Optional) Number of seconds to restrict the assume role session duration. Use assume_role.duration instead.
  • assume_role_policy - (Optional) IAM Policy JSON describing further restricting permissions for the IAM Role being assumed. Use assume_role.policy instead.
  • assume_role_policy_arns - (Optional) Set of Amazon Resource Names (ARNs) of IAM Policies describing further restricting permissions for the IAM Role being assumed. Use assume_role.policy_arns instead.
  • assume_role_tags - (Optional) Map of assume role session tags. Use assume_role.tags instead.
  • assume_role_transitive_tag_keys - (Optional) Set of assume role session tag keys to pass to any subsequent sessions. Use assume_role.transitive_tag_keys instead.
  • external_id - (Optional) External identifier to use when assuming the role. Use assume_role.external_id instead.
  • role_arn - (Optional) Amazon Resource Name (ARN) of the IAM Role to assume. Use assume_role.role_arn instead.
  • session_name - (Optional) Session name to use when assuming the role. Use assume_role.session_name instead.
Code Block
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "my/key.tfstate"
    region = "us-east-1"
    assume_role = {
      role_arn = "arn:aws:iam::ACCOUNT-ID:role/Opentofu"
    }
  }
}

Assume Role With Web Identity Configuration

The following assume_role_with_web_identity configuration block is optional:

  • role_arn - (Required) Amazon Resource Name (ARN) of the IAM Role to assume. Can also be set with the AWS_ROLE_ARN environment variable.
  • duration - (Optional) The duration individual credentials will be valid. Credentials are automatically renewed up to the maximum defined by the AWS account. Specified using the format <hours>h<minutes>m<seconds>s with any unit being optional. For example, an hour and a half can be specified as 1h30m or 90m. Must be between 15 minutes (15m) and 12 hours (12h).
  • policy - (Optional) IAM Policy JSON describing further restricting permissions for the IAM Role being assumed.
  • policy_arns - (Optional) Set of Amazon Resource Names (ARNs) of IAM Policies describing further restricting permissions for the IAM Role being assumed.
  • session_name - (Optional) Session name to use when assuming the role. Can also be set with the AWS_ROLE_SESSION_NAME environment variable.
  • web_identity_token - (Optional) The value of a web identity token from an OpenID Connect (OIDC) or OAuth provider. One of web_identity_token or web_identity_token_file is required.
  • web_identity_token_file - (Optional) File containing a web identity token from an OpenID Connect (OIDC) or OAuth provider. One of web_identity_token_file or web_identity_token is required. Can also be set with the AWS_WEB_IDENTITY_TOKEN_FILE environment variable.
Code Block
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "my/key.tfstate"
    region = "us-east-1"

    assume_role_with_web_identity = {
      role_arn           = "arn:aws:iam::ACCOUNT-ID:role/Opentofu"
      web_identity_token = "<token value>"
    }
  }
}

It's possible to constrain the assumed role by providing a policy.

Code Block
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "my/key.tfstate"
    region = "us-east-1"

    assume_role_with_web_identity = {
      role_arn           = "arn:aws:iam::ACCOUNT-ID:role/Opentofu"
      web_identity_token = "<token value>"
      policy             = <<-JSON
        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": "s3:*",
              "Resource": [
                "arn:aws:s3:::mybucket/*",
                "arn:aws:s3:::mybucket"
              ]
            }
          ]
        }
      JSON
    }
  }
}

S3 State Storage

The following configuration is required:

  • bucket - (Required) Name of the S3 Bucket.
  • key - (Required) Path to the state file inside the S3 Bucket. When using a non-default workspace, the state path will be /workspace_key_prefix/workspace_name/key (see also the workspace_key_prefix configuration).

The following configuration is optional:

  • acl - (Optional) Canned ACL to be applied to the state file.
  • encrypt - (Optional) Enable server side encryption of the state file.
  • endpoint - (Optional, Deprecated) Custom endpoint for the AWS S3 API. This can also be sourced from the AWS_S3_ENDPOINT environment variable.
  • force_path_style - (Optional, Deprecated) Enable path-style S3 URLs (https://<HOST>/<BUCKET> instead of https://<BUCKET>.<HOST>). Use use_path_style instead.
  • use_path_style - (Optional) Enable path-style S3 URLs (https://<HOST>/<BUCKET> instead of https://<BUCKET>.<HOST>).
  • kms_key_id - (Optional) Amazon Resource Name (ARN) of a Key Management Service (KMS) Key to use for encrypting the state. Note that if this value is specified, OpenTofu will need kms:Encrypt, kms:Decrypt and kms:GenerateDataKey permissions on this KMS key.
  • sse_customer_key - (Optional) The key to use for encrypting state with Server-Side Encryption with Customer-Provided Keys (SSE-C). This is the base64-encoded value of the key, which must decode to 256 bits. This can also be sourced from the AWS_SSE_CUSTOMER_KEY environment variable, which is recommended due to the sensitivity of the value. Setting it inside an OpenTofu file will cause it to be persisted to disk in terraform.tfstate.
  • workspace_key_prefix - (Optional) Prefix applied to the state path inside the bucket. This is only relevant when using a non-default workspace. Defaults to env:.
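
A hedged sketch combining server-side encryption with a customer-managed KMS key and a custom workspace prefix follows; the KMS key ARN and prefix are placeholders:

Code Block
terraform {
  backend "s3" {
    bucket  = "mybucket"
    key     = "path/to/my/key"
    region  = "us-east-1"
    encrypt = true

    # Placeholder KMS key ARN; OpenTofu needs kms:Encrypt, kms:Decrypt and
    # kms:GenerateDataKey permissions on this key.
    kms_key_id = "arn:aws:kms:us-east-1:ACCOUNT-ID:key/KEY-ID"

    # Non-default workspaces are then stored under states/<workspace_name>/path/to/my/key.
    workspace_key_prefix = "states"
  }
}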

DynamoDB State Locking

The following configuration is optional:

  • dynamodb_endpoint - (Optional, Deprecated) Custom endpoint for the AWS DynamoDB API. This can also be sourced from the AWS_DYNAMODB_ENDPOINT environment variable.
  • dynamodb_table - (Optional) Name of DynamoDB Table to use for state locking and consistency. The table must have a partition key named LockID with type of String. If not configured, state locking will be disabled.
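
A minimal sketch with locking enabled follows; the table name my-lock-table is a placeholder, and the table must already exist with a LockID string partition key:

Code Block
terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "path/to/my/key"
    region         = "us-east-1"
    dynamodb_table = "my-lock-table" # placeholder table name
  }
}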

Multi-account AWS Architecture

A common architectural pattern is for an organization to use a number of separate AWS accounts to isolate different teams and environments. For example, a "staging" system will often be deployed into a separate AWS account than its corresponding "production" system, to minimize the risk of the staging environment affecting production infrastructure, whether via rate limiting, misconfigured access controls, or other unintended interactions.

The S3 backend can be used in a number of different ways that make different tradeoffs between convenience, security, and isolation in such an organization. This section describes one such approach that aims to find a good compromise between these tradeoffs, allowing use of OpenTofu's workspaces feature to switch conveniently between multiple isolated deployments of the same configuration.

Use this section as a starting-point for your approach, but note that you will probably need to make adjustments for the unique standards and regulations that apply to your organization. You will also need to make some adjustments to this approach to account for existing practices within your organization, if for example other tools have previously been used to manage infrastructure.

OpenTofu is an administrative tool that manages your infrastructure, and so ideally the infrastructure that is used by OpenTofu should exist outside of the infrastructure that OpenTofu manages. This can be achieved by creating a separate administrative AWS account which contains the user accounts used by human operators and any infrastructure and tools used to manage the other accounts. Isolating shared administrative tools from your main environments has a number of advantages, such as avoiding accidentally damaging the administrative infrastructure while changing the target infrastructure, and reducing the risk that an attacker might abuse production infrastructure to gain access to the (usually more privileged) administrative infrastructure.

Administrative Account Setup

Your administrative AWS account will contain at least the following items:

  • One or more IAM users for system administrators who will log in to maintain infrastructure in the other accounts.
  • Optionally, one or more IAM groups to differentiate between different groups of users that have different levels of access to the other AWS accounts.
  • An S3 bucket that will contain the OpenTofu state files for each workspace.
  • A DynamoDB table that will be used for locking to prevent concurrent operations on a single workspace.

Provide the S3 bucket name and DynamoDB table name to OpenTofu within the S3 backend configuration using the bucket and dynamodb_table arguments respectively, and configure a suitable workspace_key_prefix to contain the states of the various workspaces that will subsequently be created for this configuration.
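
A hedged sketch of such a backend configuration in the administrative account, using the same placeholder names as the policy examples later in this guide:

Code Block
terraform {
  backend "s3" {
    bucket               = "myorg-tofu-states"      # placeholder state bucket
    key                  = "tfstate"                # placeholder state key
    region               = "us-east-1"
    dynamodb_table       = "myorg-state-lock-table" # placeholder lock table
    workspace_key_prefix = "myapp"                  # placeholder workspace prefix
  }
}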

Environment Account Setup

For the sake of this section, the term "environment account" refers to one of the accounts whose contents are managed by OpenTofu, separate from the administrative account described above.

Your environment accounts will eventually contain your own product-specific infrastructure. Along with this, each environment account must contain one or more IAM roles that grant sufficient access for OpenTofu to perform the desired management tasks.

Delegating Access

Each Administrator will run OpenTofu using credentials for their IAM user in the administrative account. IAM Role Delegation is used to grant these users access to the roles created in each environment account.

Full details on role delegation are covered in the AWS documentation linked above. The most important details are:

  • Each role's Assume Role Policy must grant access to the administrative AWS account, which creates a trust relationship with the administrative AWS account so that its users may assume the role.
  • The users or groups within the administrative account must also have a policy that creates the converse relationship, allowing these users or groups to assume that role.

Since the purpose of the administrative account is only to host tools for managing other accounts, it is useful to give the administrative accounts restricted access only to the specific operations needed to assume the environment account role and access the OpenTofu state. By blocking all other access, you remove the risk that user error will lead to staging or production resources being created in the administrative account by mistake.

When configuring OpenTofu, use either environment variables or the standard credentials file ~/.aws/credentials to provide the administrator user's IAM credentials within the administrative account to both the S3 backend and to OpenTofu's AWS provider.

Use conditional configuration to pass a different assume_role value to the AWS provider depending on the selected workspace. For example:

Code Block
variable "workspace_iam_roles" {
default = {
staging = "arn:aws:iam::STAGING-ACCOUNT-ID:role/OpenTofu"
production = "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/OpenTofu"
}
}

provider "aws" {
# No credentials explicitly set here because they come from either the
# environment or the global credentials file.

assume_role {
role_arn = "${var.workspace_iam_roles[terraform.workspace]}"
}
}

If workspace IAM roles are centrally managed and shared across many separate OpenTofu configurations, the role ARNs could also be obtained via a data source such as terraform_remote_state to avoid repeating these values.
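
A hedged sketch of that variation, assuming an administrative configuration that exposes a map output named workspace_iam_roles; the bucket and key are placeholders:

Code Block
data "terraform_remote_state" "admin" {
  backend = "s3"
  config = {
    bucket = "myorg-tofu-states" # placeholder admin state bucket
    key    = "admin/iam/tfstate" # placeholder key for the shared IAM configuration
    region = "us-east-1"
  }
}

provider "aws" {
  assume_role {
    # Assumes the referenced state exports a map output named workspace_iam_roles
    # keyed by workspace name, as in the variable-based example above.
    role_arn = data.terraform_remote_state.admin.outputs.workspace_iam_roles[terraform.workspace]
  }
}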

Creating and Selecting Workspaces

With the necessary objects created and the backend configured, run tofu init to initialize the backend and establish an initial workspace called "default". This workspace will not be used, but is created automatically by OpenTofu as a convenience for users who are not using the workspaces feature.

Create a workspace corresponding to each key given in the workspace_iam_roles variable value above:

Code Block
$ tofu workspace new staging
Created and switched to workspace "staging"!

...

$ tofu workspace new production
Created and switched to workspace "production"!

...

Due to the assume_role setting in the AWS provider configuration, any management operations for AWS resources will be performed via the configured role in the appropriate environment AWS account. The backend operations, such as reading and writing the state from S3, will be performed directly as the administrator's own user within the administrative account.

Code Block
$ tofu workspace select staging
$ tofu apply
...

Running OpenTofu in Amazon EC2

Teams that make extensive use of OpenTofu for infrastructure management often run OpenTofu in automation to ensure a consistent operating environment and to limit access to the various secrets and other sensitive information that OpenTofu configurations tend to require.

When running OpenTofu in an automation tool running on an Amazon EC2 instance, consider running this instance in the administrative account and using an instance profile in place of the various administrator IAM users suggested above. An IAM instance profile can also be granted cross-account delegation access via an IAM policy, giving this instance the access it needs to run OpenTofu.

To isolate access to different environment accounts, use a separate EC2 instance for each target account so that its access can be limited only to the single account.

Similar approaches can be taken with equivalent features in other AWS compute services, such as ECS.

Protecting Access to Workspace State

In a simple implementation of the pattern described in the prior sections, all users have access to read and write states for all workspaces. In many cases it is desirable to apply more precise access constraints to the OpenTofu state objects in S3, so that for example only trusted administrators are allowed to modify the production state, or to control reading of a state that contains sensitive information.

Amazon S3 supports fine-grained access control on a per-object-path basis using IAM policy. A full description of S3's access control mechanism is beyond the scope of this guide, but an example IAM policy granting access to only a single state object within an S3 bucket is shown below:

Code Block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::myorg-tofu-states"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::myorg-tofu-states/myapp/production/tfstate"
    }
  ]
}

It is also possible to apply fine-grained access control to the DynamoDB table used for locking. When OpenTofu puts the state lock in place during tofu plan, it stores the full state file as a document and sets the S3 object key as the partition key for that document. After the state lock is released, OpenTofu writes a digest of the updated state file to DynamoDB, using a key similar to the one for the original state file but suffixed with -md5.

The example below shows a simple IAM policy that allows the backend operations role to perform these operations:

Code Block
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": ["arn:aws:dynamodb:*:*:table/myorg-state-lock-table"],
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": [
            "myorg-tofu-states/myapp/production/tfstate",    // during a state lock, the full state file is stored with this key
            "myorg-tofu-states/myapp/production/tfstate-md5" // after the lock is released, a hash of the state file's contents is stored with this key
          ]
        }
      }
    }
  ]
}

Refer to the AWS documentation on DynamoDB fine-grained locking for more details.

Configuring Custom User-Agent Information

Note this feature is optional.

By default, the underlying AWS client used by the OpenTofu AWS Provider creates requests with User-Agent headers that include information about the OpenTofu and AWS Go SDK versions. To provide additional information in the User-Agent headers, set the TF_APPEND_USER_AGENT environment variable; its value is appended directly to the header in HTTP requests. For example:

Code Block
$ export TF_APPEND_USER_AGENT="JenkinsAgent/i-12345678 BuildID/1234 (Optional Extra Information)"