Fix: Terraform Error Acquiring State Lock — State Lock Conflict
Quick Answer
How to fix Terraform state lock errors — understanding lock mechanisms, safely force-unlocking stuck locks, preventing lock conflicts in CI/CD, and using remote backends correctly.
The Error
Running terraform apply or terraform plan fails with a state lock error:
╷
│ Error: Error acquiring the state lock
│
│ Error message: ConditionalCheckFailedException: The conditional request failed
│ Lock Info:
│ ID: f4a3b2c1-d5e6-7890-abcd-ef1234567890
│ Path: terraform/state
│ Operation: OperationTypePlan
│ Who: user@hostname
│ Version: 1.6.0
│ Created: 2026-03-20 09:45:00.123456789 +0000 UTC
│ Info:
╵
Or with an S3/DynamoDB backend:
Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException
Or with a Terraform Cloud backend:
Error: Error acquiring the state lock
The state is already locked by another user. Terraform acquired a lock on the state to prevent concurrent modifications.
Why This Happens
Terraform uses state locking to prevent concurrent modifications to infrastructure. When two operations run simultaneously (or when a previous operation crashed without releasing the lock), the lock remains held and subsequent operations fail:
- Previous terraform apply crashed or was force-killed — the lock is acquired at the start of an operation. If the process is killed (Ctrl+C, server restart, timeout), the lock may not be released automatically.
- Two CI/CD pipelines running in parallel — if two PRs are merged and deployed simultaneously, both try to acquire the lock. The second one fails.
- Network interruption during apply — a network failure between Terraform and the backend (S3, DynamoDB, GCS) during an operation can leave the lock in place.
- Backend lock entry stuck — with the DynamoDB lock table backend, the lock entry persists until explicitly deleted. A stuck apply won’t clean it up.
- Long-running apply — Terraform Cloud and some backends have lock timeouts. A very long apply may see its lock considered stale.
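The failure mode behind most of these cases is a conditional write: the first writer creates the lock entry, and any later writer fails because the entry already exists. As a rough local analogy (not Terraform's actual implementation), the same behavior can be sketched with an atomic mkdir as the lock primitive; the directory name and "pipeline" labels are illustrative:

```shell
#!/bin/sh
# Local stand-in for the backend's conditional write: mkdir is atomic,
# so only one caller can create the lock directory.
LOCK_DIR="${TMPDIR:-/tmp}/tf-lock-demo"
rm -rf "$LOCK_DIR"

acquire_lock() {
  if mkdir "$LOCK_DIR" 2>/dev/null; then
    echo "lock acquired by $1"
  else
    echo "error acquiring the state lock: already held ($1 loses)"
  fi
}

acquire_lock "pipeline-A"   # first caller wins
acquire_lock "pipeline-B"   # second caller fails, like ConditionalCheckFailedException
rmdir "$LOCK_DIR"           # releasing the lock = deleting the entry
```

If pipeline-A crashes before the final `rmdir`, the directory (the lock entry) stays behind and every later caller fails — exactly the "stuck lock" scenario above.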
Fix 1: Verify the Lock Is Actually Stuck
Before force-unlocking, verify no operation is currently running. Force-unlocking a lock held by a live terraform apply will corrupt your state:
# Check if there's actually a running Terraform process
# On the machine where Terraform runs:
ps aux | grep terraform
# In CI/CD — check the CI system for running pipelines
# GitHub Actions: check the Actions tab
# GitLab CI: check the Pipelines page
# Jenkins: check the Build Queue
Check the lock details in the error message:
Lock Info:
ID: f4a3b2c1-d5e6-7890-abcd-ef1234567890
Who: ci-runner@github-actions-runner-abc123
Created: 2026-03-20 09:45:00 UTC
Who — the machine/user that acquired the lock. Is this machine still running Terraform?
Created — how long ago was the lock acquired? If it was 3 days ago, it’s almost certainly stuck.
Operation — what operation was in progress? OperationTypePlan is safer to interrupt than OperationTypeApply.
Warning: Only force-unlock when you are certain no Terraform process is currently using the lock. Unlocking a live operation causes two concurrent writers — this corrupts the state file and may cause irreversible infrastructure changes.
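Lock age is the quickest staleness signal. A small sketch that turns the Created timestamp into an age in hours (assumes GNU date's -d flag; the timestamp value is the example from the error above and should be replaced with yours):

```shell
#!/bin/sh
# Rough staleness check: how old is the lock's "Created" timestamp?
created="2026-03-20 09:45:00 +0000"   # paste the Created value from Lock Info
created_epoch=$(date -d "$created" +%s)
now_epoch=$(date +%s)
age_hours=$(( (now_epoch - created_epoch) / 3600 ))
echo "lock age: ${age_hours}h"
# A lock hours or days old, with no matching process or CI pipeline,
# is almost certainly stuck.
```

Pair this with the process and pipeline checks above before deciding to force-unlock.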
Fix 2: Force Unlock a Stuck Lock
Once you’ve confirmed the lock is stuck, use terraform force-unlock:
# Get the lock ID from the error message
terraform force-unlock f4a3b2c1-d5e6-7890-abcd-ef1234567890
# If running with a specific backend config
terraform force-unlock -force f4a3b2c1-d5e6-7890-abcd-ef1234567890
The -force flag skips the confirmation prompt (useful in scripts):
terraform force-unlock -force <lock-id>
After force-unlock, verify the state is consistent:
# Pull the current state and review it
terraform state pull > state-backup-$(date +%Y%m%d).json
# Check that the resources match what's actually deployed
terraform plan
# Should show no unexpected changes if state is consistent
For S3 + DynamoDB backend — manually delete the lock entry:
If terraform force-unlock fails, delete the DynamoDB lock entry directly:
# Find the lock item in the DynamoDB table
aws dynamodb scan \
--table-name terraform-state-locks \
--filter-expression "attribute_exists(LockID)" \
--query "Items[*].{LockID: LockID.S}"
# Delete the specific lock entry
aws dynamodb delete-item \
--table-name terraform-state-locks \
--key '{"LockID": {"S": "terraform/state"}}'
# The LockID key is typically "<bucket-name>/<state-key>"
# (a companion "<bucket-name>/<state-key>-md5" digest item may also exist)
aws dynamodb delete-item \
--table-name terraform-state-locks \
--key '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate"}}'
For GCS backend:
# List lock files — the lock object is named <workspace>.tflock under the prefix
gsutil ls gs://my-bucket/terraform/state/default.tflock
# Remove the lock file
gsutil rm gs://my-bucket/terraform/state/default.tflock
Fix 3: Prevent Lock Conflicts in CI/CD
The most common source of stuck locks in CI/CD is two pipelines running concurrently. Prevent this with concurrency controls:
GitHub Actions — use concurrency to prevent parallel runs:
# .github/workflows/terraform.yml
name: Terraform Apply
on:
push:
branches: [main]
# Cancel in-progress runs for the same ref, or queue them
concurrency:
group: terraform-${{ github.ref }}
cancel-in-progress: false # Wait for the previous run, don't cancel it
# cancel-in-progress: true # Cancel previous run (risky for apply)
jobs:
apply:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: hashicorp/setup-terraform@v3
- run: terraform init
- run: terraform apply -auto-approve
GitLab CI — use resource_group to serialize jobs:
# .gitlab-ci.yml
terraform:apply:
stage: deploy
resource_group: terraform-production # Only one job in this group runs at a time
script:
- terraform init
- terraform apply -auto-approve
Jenkins — use the Lockable Resources plugin to serialize builds:
// Jenkinsfile
pipeline {
stages {
stage('Terraform Apply') {
options {
lock(resource: 'terraform-production') // Serialize on this named lock
}
steps {
sh 'terraform init'
sh 'terraform apply -auto-approve'
}
}
}
}
Set a lock timeout — by default Terraform fails immediately if the lock is held; -lock-timeout tells it to keep retrying for up to N seconds before giving up:
terraform apply -lock-timeout=60s
# Fails after 60 seconds if lock can't be acquired
# Default is 0s: fail immediately when the lock is held
Fix 4: Configure the S3 + DynamoDB Backend Correctly
The most common Terraform backend for AWS requires both an S3 bucket (for state storage) and a DynamoDB table (for locking):
# backend.tf
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "production/terraform.tfstate"
region = "us-east-1"
encrypt = true
# DynamoDB table for state locking
dynamodb_table = "terraform-state-locks"
}
}
Create the DynamoDB table for locking (if it doesn’t exist):
aws dynamodb create-table \
--table-name terraform-state-locks \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--region us-east-1
Verify the table exists and check for stuck locks:
# List all items in the lock table
aws dynamodb scan \
--table-name terraform-state-locks \
--query "Items" \
--output table
# An empty Items array means no locks are held
# Non-empty Items means there's an active (or stuck) lock
IAM permissions required for the Terraform executor:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-terraform-state/*"
},
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::my-terraform-state"
},
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem"
],
"Resource": "arn:aws:dynamodb:us-east-1:*:table/terraform-state-locks"
}
]
}
If the DynamoDB permissions are missing, every terraform apply fails with a lock error even when no lock is held. The s3:ListBucket permission on the bucket itself is also required for terraform init to read the state.
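In CI it helps to distinguish "a lock is genuinely held" from "the table is empty" before failing a run. A sketch that classifies `aws dynamodb scan` JSON read from stdin (the function name and inlined sample documents are illustrative; in CI you would pipe the real command's output in):

```shell
#!/bin/sh
# Report whether `aws dynamodb scan` output (on stdin) contains lock entries.
# Real usage would be:
#   aws dynamodb scan --table-name terraform-state-locks | check_locks
check_locks() {
  if grep -q '"LockID"'; then
    echo "lock entries present"
  else
    echo "lock table empty"
  fi
}

# Sample scan result with one held lock (hypothetical values):
printf '%s\n' '{"Items":[{"LockID":{"S":"my-bucket/terraform.tfstate"}}],"Count":1}' | check_locks
# Sample scan result with no locks:
printf '%s\n' '{"Items":[],"Count":0}' | check_locks
```

A plain grep is enough here because the scan output only contains the string "LockID" when at least one lock item exists; for anything more structured, a JSON-aware tool is the better choice.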
Fix 5: Recover from a Corrupted State
If terraform apply was interrupted mid-apply, the state may be inconsistent — some resources exist in the cloud but aren’t in the state file, or vice versa:
# Back up the current state before any recovery
terraform state pull > state-backup-$(date +%Y%m%d-%H%M%S).json
# Run plan to see what Terraform thinks has changed
terraform plan -out=recovery.plan
# Review the plan carefully — look for unexpected creates or destroys
# If the plan looks correct, apply it
terraform apply recovery.plan
Import resources that exist in the cloud but not in state:
# If a resource was created but not recorded in state
terraform import aws_instance.web i-0abc123def456789
terraform import aws_s3_bucket.data my-bucket-name
Remove resources from state that no longer exist in the cloud:
# List all resources in state
terraform state list
# Remove a specific resource from state (doesn't destroy the resource)
terraform state rm aws_instance.old_web
Restore from a backup if state is severely corrupted:
# Push a previous state version back as the current state
terraform state push state-backup-20260320.json
# For S3 backends — check S3 versioning for previous state versions
aws s3api list-object-versions \
--bucket my-terraform-state \
--prefix production/terraform.tfstate \
--query 'Versions[*].{VersionId: VersionId, LastModified: LastModified}'
# Restore a specific version
aws s3api get-object \
--bucket my-terraform-state \
--key production/terraform.tfstate \
--version-id <version-id> \
restored-state.json
Pro Tip: Always enable S3 versioning on your Terraform state bucket. If state becomes corrupted, you can restore a previous version. This is the single most important safeguard for Terraform state management.
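The backup habit is easy to script. A minimal local rotation sketch that keeps the newest few copies of a pulled state file (the directory, retention count, and function name are illustrative choices, not Terraform settings):

```shell
#!/bin/sh
# Keep the newest KEEP timestamped copies of a pulled state file.
STATE_DIR="${TMPDIR:-/tmp}/tf-state-backups"
KEEP=5
mkdir -p "$STATE_DIR"

backup_state() {
  # In real use the source would come from: terraform state pull > ...
  cp "$1" "$STATE_DIR/state-$(date +%Y%m%d-%H%M%S).json"
  # Delete everything beyond the newest $KEEP copies.
  ls -1t "$STATE_DIR"/state-*.json | tail -n +"$((KEEP + 1))" | while read -r old; do
    rm -f "$old"
  done
}
```

Run `backup_state terraform.tfstate` before any force-unlock or state surgery; with S3 versioning enabled as well, you have two independent ways back.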
Fix 6: Use Terraform Cloud or Atlantis for Centralized Locking
Instead of managing S3 + DynamoDB backends manually, use a system that handles locking, history, and concurrent access automatically:
Terraform Cloud backend:
terraform {
cloud {
organization = "my-org"
workspaces {
name = "production"
}
}
}
Terraform Cloud handles locking automatically — concurrent applies queue up and run sequentially. You can see lock status in the UI.
Atlantis — open-source pull request automation for Terraform:
# atlantis.yaml
version: 3
projects:
- name: production
dir: infrastructure/production
workflow: default
autoplan:
when_modified: ["*.tf", "*.tfvars"]
Atlantis serializes applies per workspace — only one apply runs at a time per project. It also shows plan output in pull request comments.
Still Not Working?
Lock persists after force-unlock — some backends have additional lock mechanisms. The azurerm backend, for example, locks state with a native blob lease rather than a lock file. Check backend-specific documentation:
# Azure Blob Storage backend — check the state blob's lease status
az storage blob show \
--container-name terraform-state \
--account-name mystorageaccount \
--name terraform.tfstate \
--query "properties.lease"
# Break a stuck lease
az storage blob lease break \
--container-name terraform-state \
--account-name mystorageaccount \
--blob-name terraform.tfstate
Lock ID in error doesn’t match force-unlock — make sure you’re copying the full lock UUID from the error message, including hyphens.
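When copying by hand goes wrong, the ID can also be extracted mechanically. A sketch that pulls the first UUID out of captured Terraform output (the error text here is a sample):

```shell
#!/bin/sh
# Extract the lock UUID from Terraform's error output.
error_text='Error: Error acquiring the state lock
Lock Info:
  ID:        f4a3b2c1-d5e6-7890-abcd-ef1234567890
  Path:      terraform/state'

lock_id=$(printf '%s\n' "$error_text" \
  | grep -Eo '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}' \
  | head -n 1)
echo "$lock_id"
# Then: terraform force-unlock "$lock_id"
```

In a script you would replace the inlined sample with the captured stderr of the failed terraform command.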
If using Terraform workspaces, the state path includes the workspace name:
# Default workspace: terraform.tfstate
# Named workspace: env:/production/terraform.tfstate
terraform workspace show # Confirm current workspace
terraform force-unlock -force <lock-id>
For related infrastructure issues, see Fix: Terraform Plan Shows Unexpected Changes and Fix: Kubernetes Pod OOMKilled.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.