Terraform state locking using DynamoDB

The Terraform state file is the backbone of your Terraform project: it records everything Terraform has provisioned in your cloud infrastructure. On large infrastructure projects there is almost always more than one developer, or a whole team, working on the same Terraform configuration.
Problems arise when two developers try to update the same Terraform state file, which is stored remotely (for example in an S3 bucket). Ideally, updates to the state file happen one at a time: one developer finishes pushing their state changes, and the next developer pushes theirs after pulling the update.
But in an agile working environment we cannot guarantee that updates to the state file will happen one after another. Any developer can update and push the Terraform state file at any point in time, so there should be a mechanism that prevents one developer from writing to the state file while another is already using it.
Why is Terraform state locking important? It protects the state file (terraform.tfstate) from conflicting updates by putting a lock on the file, so the current update can finish before a new change is processed. With the S3 backend, Terraform implements state locking using an AWS DynamoDB table.
1. Store the Terraform state file remotely on S3
Before we implement the DynamoDB locking feature, we first need to store the Terraform state file (terraform.tfstate) remotely in an AWS S3 bucket.
I am going to take a very simple example in which we provision an AWS EC2 machine and store the Terraform state file remotely.
Let's start by creating main.tf. We will add the following blocks to it, then run the configuration:
- Provider block
- AWS instance resource block (aws_instance) for EC2
- Backend S3 block
- Execute the Terraform script
- Verify the remote state file
1.1 Provider Block
As we are working in the AWS environment, we will use the AWS provider. Add the following block to your main.tf:
provider "aws" {
  region     = "us-east-1"
  access_key = var.access_key
  secret_key = var.secret_key
}
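The provider block reads var.access_key and var.secret_key, which need to be declared somewhere in the project. A minimal variables.tf sketch could look like this (the variable names match the provider block above; the descriptions and the sensitive flag are my additions):

```hcl
# variables.tf - declares the credentials referenced by the provider block.
# Values can be supplied via terraform.tfvars or TF_VAR_* environment variables.
variable "access_key" {
  description = "AWS access key ID"
  type        = string
  sensitive   = true
}

variable "secret_key" {
  description = "AWS secret access key"
  type        = string
  sensitive   = true
}
```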
1.2 AWS Instance resource block(aws_instance) for EC2
After adding the provider block, let's add the aws_instance resource block, in which we set up an EC2 machine of type t2.micro:
provider "aws" {
  region     = "us-east-1"
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_instance" "ec2_example" {
  ami           = "ami-0767046d1677be5a0"
  instance_type = "t2.micro"

  tags = {
    Name = "EC2 Instance with remote state"
  }
}
1.3 Backend S3 block
Now, after adding the provider and aws_instance blocks, let's add the backend S3 block to main.tf (the snippet below also includes an aws_dynamodb_table resource, which we will use for state locking later):
provider "aws" {
  region     = "us-east-1"
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_dynamodb_table" "state_locking" {
  name         = "dynamodb-state-locking"
  hash_key     = "LockID"
  billing_mode = "PAY_PER_REQUEST"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_instance" "ec2_example" {
  ami           = "ami-0767046d1677be5a0"
  instance_type = "t2.micro"

  tags = {
    Name = "EC2 Instance with remote state"
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-s3-bucket"
    key    = "terraform/remote/s3/terraform.tfstate"
    region = "us-east-1"
  }
}
2. Create a DynamoDB table on AWS
Now, to implement state locking, we need to create a DynamoDB table.
1. Go to your AWS Management Console and search for DynamoDB in the search bar.
2. Click on DynamoDB.
3. From the left navigation panel, click on Tables.
4. Click on Create Table.
5. Enter the table name "dynamodb-state-locking" and the partition key "LockID".
6. Click on Create Table; you can verify the table after creation.
3. Add the AWS DynamoDB table reference to the Backend S3 remote state
After creating the DynamoDB table in the previous step, let's add a reference to the table name (dynamodb-state-locking) in the backend S3 configuration:
terraform {
  backend "s3" {
    bucket         = "terraform-s3-bucket"
    key            = "terraform/remote/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "dynamodb-state-locking"
  }
}
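One thing to keep in mind: Terraform initializes the backend before evaluating variables, so the backend block cannot reference var.* values; the bucket, key, and table name must be literals. If you prefer not to hardcode them, Terraform supports partial backend configuration, where the same values are passed at init time instead (shown here with the same names used above):

```shell
# Partial backend configuration: leave the backend "s3" block empty (or
# partially filled) in main.tf and supply the settings to terraform init.
terraform init \
  -backend-config="bucket=terraform-s3-bucket" \
  -backend-config="key=terraform/remote/s3/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=dynamodb-state-locking"
```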
Your final Terraform main.tf should look like this:
provider "aws" {
  region     = "us-east-1"
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_dynamodb_table" "state_locking" {
  name         = "dynamodb-state-locking"
  hash_key     = "LockID"
  billing_mode = "PAY_PER_REQUEST"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_instance" "ec2_example" {
  ami           = "ami-0767046d1677be5a0"
  instance_type = "t2.micro"

  tags = {
    Name = "EC2 Instance with remote state"
  }
}

terraform {
  backend "s3" {
    bucket         = "terraform-s3-bucket"
    key            = "terraform/remote/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "dynamodb-state-locking"
  }
}
3.1 Apply the above Terraform configuration with the DynamoDB table
1. First, run the terraform init command.
2. Then run the terraform plan command.
3. Finally, run the terraform apply command.
4. Verify the DynamoDB LockID entry in the AWS Management Console.
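The steps above, as a shell session sketch (locking is on by default, so no extra flags are needed):

```shell
terraform init    # downloads the AWS provider and configures the S3 backend
terraform plan    # acquires the state lock in DynamoDB, previews changes, releases the lock
terraform apply   # locks the state again for the duration of the apply
```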

4. Spin up one more EC2 instance with the same Terraform state file
(*Note: to simulate the locking scenario I am creating another main.tf with the same configuration. I would encourage you to create this second main.tf in some other directory.*)
To test Terraform state locking, I will provision one more EC2 machine using the same Terraform state file (terraform/remote/s3/terraform.tfstate) stored in my S3 bucket, along with the same DynamoDB table (dynamodb-state-locking).
Keep in mind we are still using the following components from the previous main.tf:
- S3 bucket: terraform-s3-bucket
- DynamoDB table: dynamodb-state-locking
- Terraform state file: terraform/remote/s3/terraform.tfstate
Here is the second main.tf file:
provider "aws" {
  region     = "us-east-1"
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_instance" "ec2_example" {
  ami           = "ami-0767046d1677be5a0"
  instance_type = "t2.micro"

  tags = {
    Name = "EC2 Instance with remote state"
  }
}

terraform {
  backend "s3" {
    bucket         = "terraform-s3-bucket"
    key            = "terraform/remote/s3/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "dynamodb-state-locking"
  }
}
4.1 Run both Terraform files at the same time to simulate locking on the Terraform state file
On the left side of the screen you will see the first Terraform file (main.tf), which I created in Step-1, and on the right-hand side you will see the Terraform file (main.tf) from Step-4.
**How did I simulate the remote state locking scenario?**
- I executed terraform apply on the Terraform file on the left-hand side but did not let it finish: when the command asked "Do you want to perform these actions?" I did not type yes, so terraform apply kept running and held the lock on the remote state file.
- At the same time, I executed terraform apply on the main.tf from Step-4, shown on the right side of the screenshot. Since the second main.tf refers to the same remote state and the same DynamoDB table, it throws an error: "Error: Error acquiring the state lock. Error message: ConditionalCheckFailedException: The conditional request failed. Lock Info: ID: 8f014160-8894-868e-529d-0f16e42af405"
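If an apply crashes or is killed without releasing the lock (rather than just waiting at the yes prompt, as above), Terraform can be told to drop the lock manually using the lock ID from the error message. Use this with care, since it bypasses the very safety check this article is about:

```shell
# Force-release a stale state lock. The lock ID comes from the
# "Error acquiring the state lock" message.
terraform force-unlock 8f014160-8894-868e-529d-0f16e42af405
```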

5. Conclusion
Terraform state locking is one of the most valuable features Terraform offers for managing the state file. If you are using AWS S3 together with DynamoDB, state locking prevents concurrent writes from corrupting your state and saves you from unforeseen issues.