
Fix: Terraform Error Acquiring State Lock — State Lock Conflict


Quick Answer

How to fix Terraform state lock errors — understanding lock mechanisms, safely force-unlocking stuck locks, preventing lock conflicts in CI/CD, and using remote backends correctly.

The Error

Running terraform apply or terraform plan fails with a state lock error:


│ Error: Error acquiring the state lock

│ Error message: ConditionalCheckFailedException: The conditional request failed
│ Lock Info:
│   ID:        f4a3b2c1-d5e6-7890-abcd-ef1234567890
│   Path:      terraform/state
│   Operation: OperationTypePlan
│   Who:       user@hostname
│   Version:   1.6.0
│   Created:   2026-03-20 09:45:00.123456789 +0000 UTC
│   Info:

Or with an S3/DynamoDB backend:

Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException

Or with a Terraform Cloud backend:

Error: Error acquiring the state lock
The state is already locked by another user. Terraform acquired a lock on the state to prevent concurrent modifications.

Why This Happens

Terraform uses state locking to prevent concurrent modifications to infrastructure. When two operations run simultaneously (or when a previous operation crashed without releasing the lock), the lock remains held and subsequent operations fail:

  • Previous terraform apply crashed or was force-killed — the lock is acquired at the start of an operation. If the process is killed (Ctrl+C, server restart, timeout), the lock may not be released automatically.
  • Two CI/CD pipelines running in parallel — if two PRs are merged and deployed simultaneously, both try to acquire the lock. The second one fails.
  • Network interruption during apply — a network failure between Terraform and the backend (S3, DynamoDB, GCS) during an operation can leave the lock in place.
  • Backend lock entry stuck — with an S3 backend that uses a DynamoDB lock table, the lock item persists until it is explicitly deleted. An apply that dies mid-run won’t clean it up.
  • Long-running apply — Terraform Cloud and some backends enforce lock timeouts; an apply that outlives the timeout can have its lock treated as stale.
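Conceptually, every backend implements the same pattern: an atomic "create the lock record only if it doesn't already exist" write. Here is a minimal sketch of that pattern, using mkdir (which is atomic on POSIX filesystems) as a stand-in for the backend's conditional write; the helper names are illustrative, not Terraform commands:

```shell
# Stand-in for a backend's conditional write: mkdir fails if the
# directory (our "lock record") already exists, and does so atomically.
acquire_lock() {
  if mkdir "$1" 2>/dev/null; then
    echo "acquired"
  else
    echo "already locked"
    return 1
  fi
}

release_lock() {
  rmdir "$1"
}
```

If the process dies between acquire and release, the lock record stays behind, which is exactly the stuck-lock situation described above.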

Fix 1: Verify the Lock Is Actually Stuck

Before force-unlocking, verify no operation is currently running. Force-unlocking a lock held by a live terraform apply will corrupt your state:

# Check if there's actually a running Terraform process
# On the machine where Terraform runs:
ps aux | grep terraform

# In CI/CD — check the CI system for running pipelines
# GitHub Actions: check the Actions tab
# GitLab CI: check the Pipelines page
# Jenkins: check the Build Queue

Check the lock details in the error message:

Lock Info:
  ID:        f4a3b2c1-d5e6-7890-abcd-ef1234567890
  Who:       ci-runner@github-actions-runner-abc123
  Created:   2026-03-20 09:45:00 UTC
  • Who — the machine/user that acquired the lock. Is this machine still running Terraform?
  • Created — how long ago was the lock acquired? If it was 3 days ago, it’s almost certainly stuck.
  • Operation — what operation was in progress? OperationTypePlan is safer to interrupt than OperationTypeApply.
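A quick way to turn the Created timestamp into a lock age (a hypothetical helper, assuming GNU date as found on most Linux CI runners; Lock Info timestamps are UTC):

```shell
# lock_age_hours "2026-03-20 09:45:00" -> hours since the lock was created
lock_age_hours() {
  created_s=$(date -u -d "$1" +%s)   # parse the Created timestamp as UTC
  now_s=$(date -u +%s)
  echo $(( (now_s - created_s) / 3600 ))
}

# A lock whose age is measured in days is almost certainly stuck:
# lock_age_hours "2026-03-20 09:45:00"
```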

Warning: Only force-unlock when you are certain no Terraform process is currently using the lock. Unlocking a live operation causes two concurrent writers — this corrupts the state file and may cause irreversible infrastructure changes.

Fix 2: Force Unlock a Stuck Lock

Once you’ve confirmed the lock is stuck, use terraform force-unlock:

# Get the lock ID from the error message
terraform force-unlock f4a3b2c1-d5e6-7890-abcd-ef1234567890

# If running with a specific backend config
terraform force-unlock -force f4a3b2c1-d5e6-7890-abcd-ef1234567890

The -force flag skips the confirmation prompt (useful in scripts):

terraform force-unlock -force <lock-id>
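In a recovery script, the lock ID can be pulled from captured error output instead of pasted by hand. A sketch (extract_lock_id is a hypothetical helper, not a Terraform command):

```shell
# Extract the first UUID after "ID:" from a saved Terraform error log.
extract_lock_id() {
  grep -oE 'ID:[[:space:]]+[0-9a-f-]{36}' "$1" | head -n 1 | awk '{print $2}'
}

# Example flow: capture stderr, then unlock using the extracted ID
# terraform plan 2> plan.err || terraform force-unlock -force "$(extract_lock_id plan.err)"
```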

After force-unlock, verify the state is consistent:

# Pull the current state and review it
terraform state pull > state-backup-$(date +%Y%m%d).json

# Check that the resources match what's actually deployed
terraform plan
# Should show no unexpected changes if state is consistent

For S3 + DynamoDB backend — manually delete the lock entry:

If terraform force-unlock fails, delete the DynamoDB lock entry directly:

# Find the lock item in the DynamoDB table
aws dynamodb scan \
  --table-name terraform-state-locks \
  --filter-expression "attribute_exists(LockID)" \
  --query "Items[*].{LockID: LockID.S}"

# Delete the specific lock entry
aws dynamodb delete-item \
  --table-name terraform-state-locks \
  --key '{"LockID": {"S": "terraform/state"}}'

# The LockID is typically "<bucket>/<key>" (the bucket name plus the state path)
aws dynamodb delete-item \
  --table-name terraform-state-locks \
  --key '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate"}}'

For GCS backend:

# List lock files
gsutil ls gs://my-bucket/terraform/state.tflock

# Remove the lock file
gsutil rm gs://my-bucket/terraform/state.tflock

Fix 3: Prevent Lock Conflicts in CI/CD

The most common source of stuck locks in CI/CD is two pipelines running concurrently. Prevent this with concurrency controls:

GitHub Actions — use concurrency to prevent parallel runs:

# .github/workflows/terraform.yml
name: Terraform Apply

on:
  push:
    branches: [main]

# Cancel in-progress runs for the same ref, or queue them
concurrency:
  group: terraform-${{ github.ref }}
  cancel-in-progress: false   # Wait for the previous run, don't cancel it
  # cancel-in-progress: true  # Cancel previous run (risky for apply)

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - uses: hashicorp/setup-terraform@v3
    - run: terraform init
    - run: terraform apply -auto-approve

GitLab CI — use resource_group to serialize jobs:

# .gitlab-ci.yml
terraform:apply:
  stage: deploy
  resource_group: terraform-production   # Only one job in this group runs at a time
  script:
    - terraform init
    - terraform apply -auto-approve

Jenkins — use a lock plugin to serialize builds:

// Jenkinsfile
pipeline {
  stages {
    stage('Terraform Apply') {
      options {
        lock(resource: 'terraform-production')  // Serialize on this named lock
      }
      steps {
        sh 'terraform init'
        sh 'terraform apply -auto-approve'
      }
    }
  }
}

Set a lock timeout — by default, Terraform fails immediately if it can't acquire the lock. Setting -lock-timeout makes it retry for up to N seconds before giving up, which smooths over short-lived contention:

terraform apply -lock-timeout=60s
# Retries for up to 60 seconds before failing
# Default: 0s (fail immediately if the lock is held)
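For transient contention in CI, a small retry wrapper around the apply can also help (a sketch; retry is a hypothetical helper, not a Terraform feature):

```shell
# retry <attempts> <command...> — rerun the command with linear backoff.
retry() {
  attempts="$1"; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0   # success: stop retrying
    sleep "$i"         # back off a little longer each attempt
    i=$((i + 1))
  done
  return 1
}

# retry 3 terraform apply -lock-timeout=60s -auto-approve
```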

Fix 4: Configure the S3 + DynamoDB Backend Correctly

The most common Terraform backend for AWS requires both an S3 bucket (for state storage) and a DynamoDB table (for locking):

# backend.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true

    # DynamoDB table for state locking
    dynamodb_table = "terraform-state-locks"
  }
}

Create the DynamoDB table for locking (if it doesn’t exist):

aws dynamodb create-table \
  --table-name terraform-state-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1

Verify the table exists and check for stuck locks:

# List all items in the lock table
aws dynamodb scan \
  --table-name terraform-state-locks \
  --query "Items" \
  --output table

# An empty Items array means no locks are held
# Non-empty Items means there's an active (or stuck) lock

IAM permissions required for the Terraform executor:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-terraform-state"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-terraform-state/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:*:table/terraform-state-locks"
    }
  ]
}

If the DynamoDB permissions are missing, every terraform apply fails with a lock error even when no lock is held.
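A preflight check in the pipeline makes this failure mode obvious before Terraform runs (a sketch; check_lock_table is a hypothetical helper wrapping the real aws dynamodb describe-table command):

```shell
# Fail early, with a clear message, if the lock table isn't reachable.
check_lock_table() {
  if aws dynamodb describe-table --table-name "$1" >/dev/null 2>&1; then
    echo "ok: lock table $1 is reachable"
  else
    echo "error: cannot describe lock table $1 (check the executor's DynamoDB IAM permissions)" >&2
    return 1
  fi
}

# check_lock_table terraform-state-locks && terraform plan
```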

Fix 5: Recover from a Corrupted State

If terraform apply was interrupted mid-apply, the state may be inconsistent — some resources exist in the cloud but aren’t in the state file, or vice versa:

# Back up the current state before any recovery
terraform state pull > state-backup-$(date +%Y%m%d-%H%M%S).json

# Run plan to see what Terraform thinks has changed
terraform plan -out=recovery.plan

# Review the plan carefully — look for unexpected creates or destroys
# If the plan looks correct, apply it
terraform apply recovery.plan

Import resources that exist in the cloud but not in state:

# If a resource was created but not recorded in state
terraform import aws_instance.web i-0abc123def456789
terraform import aws_s3_bucket.data my-bucket-name
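Terraform 1.5 and later also supports declarative import blocks, which let the import be reviewed in a plan before it touches state (resource names here are illustrative):

```
# imports.tf — reviewed via "terraform plan", applied like any change
import {
  to = aws_instance.web
  id = "i-0abc123def456789"
}
```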

Remove resources from state that no longer exist in the cloud:

# List all resources in state
terraform state list

# Remove a specific resource from state (doesn't destroy the resource)
terraform state rm aws_instance.old_web

Restore from a backup if state is severely corrupted:

# Push a previous state version back as the current state
terraform state push state-backup-20260320.json

# For S3 backends — check S3 versioning for previous state versions
aws s3api list-object-versions \
  --bucket my-terraform-state \
  --prefix production/terraform.tfstate \
  --query 'Versions[*].{VersionId: VersionId, LastModified: LastModified}'

# Restore a specific version
aws s3api get-object \
  --bucket my-terraform-state \
  --key production/terraform.tfstate \
  --version-id <version-id> \
  restored-state.json
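Before pushing a restored file, check its serial and lineage: terraform state push refuses to overwrite a newer serial or a different lineage unless you pass -force. A small sketch for reading those fields (assumes python3 is available; state_field is a hypothetical helper):

```shell
# Read a top-level field (e.g. "serial" or "lineage") from a state file.
state_field() {
  python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))[sys.argv[2]])' "$1" "$2"
}

# state_field restored-state.json serial
# state_field restored-state.json lineage
```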

Pro Tip: Always enable S3 versioning on your Terraform state bucket. If state becomes corrupted, you can restore a previous version. This is the single most important safeguard for Terraform state management.

Fix 6: Use Terraform Cloud or Atlantis for Centralized Locking

Instead of managing S3 + DynamoDB backends manually, use a system that handles locking, history, and concurrent access automatically:

Terraform Cloud backend:

terraform {
  cloud {
    organization = "my-org"
    workspaces {
      name = "production"
    }
  }
}

Terraform Cloud handles locking automatically — concurrent applies queue up and run sequentially. You can see lock status in the UI.

Atlantis — open-source pull request automation for Terraform:

# atlantis.yaml
version: 3
projects:
- name: production
  dir: infrastructure/production
  workflow: default
  autoplan:
    when_modified: ["*.tf", "*.tfvars"]

Atlantis serializes applies per workspace — only one apply runs at a time per project. It also shows plan output in pull request comments.

Still Not Working?

Lock persists after force-unlock — some backends use different lock mechanisms. The Azure Storage (azurerm) backend, for example, locks by taking a lease on the state blob rather than writing a separate lock file:

# Azure Blob Storage backend — inspect the lease on the state blob
az storage blob show \
  --container-name terraform-state \
  --account-name mystorageaccount \
  --name terraform.tfstate \
  --query "properties.lease"

# Break a stuck lease
az storage blob lease break \
  --container-name terraform-state \
  --account-name mystorageaccount \
  --blob-name terraform.tfstate

Lock ID in error doesn’t match force-unlock — make sure you’re copying the full lock UUID from the error message, including hyphens.

If using Terraform workspaces, the state path includes the workspace name:

# Default workspace: terraform.tfstate
# Named workspace: env:/production/terraform.tfstate
terraform workspace show   # Confirm current workspace
terraform force-unlock -force <lock-id>
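With the S3 backend, the DynamoDB LockID reflects the workspace too. Assuming the default env: workspace prefix, the expected LockID can be reconstructed like this (lock_id_for is a hypothetical helper; verify against an actual table scan before deleting anything):

```shell
# Build the expected DynamoDB LockID for an S3 backend state.
lock_id_for() {
  bucket="$1"; key="$2"; workspace="${3:-default}"
  if [ "$workspace" = "default" ]; then
    echo "$bucket/$key"                      # default workspace
  else
    echo "$bucket/env:/$workspace/$key"      # named workspace
  fi
}

# lock_id_for my-terraform-state production/terraform.tfstate
# lock_id_for my-terraform-state production/terraform.tfstate staging
```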

For related infrastructure issues, see Fix: Terraform Plan Shows Unexpected Changes and Fix: Kubernetes Pod OOMKilled.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
