Fix: Error Acquiring the State Lock (Terraform)
The Error
You run terraform plan or terraform apply and get:
```
Error: Error acquiring the state lock

Error message: ConditionalCheckFailedException: The conditional request failed
Lock Info:
  ID:        a1b2c3d4-e5f6-7890-abcd-ef1234567890
  Path:      my-project/terraform.tfstate
  Operation: OperationTypePlan
  Who:       user@hostname
  Version:   1.9.0
  Created:   2026-04-05 14:23:01.123456 +0000 UTC
  Info:

Terraform acquires a state lock to protect against two runs modifying state
simultaneously. Please resolve the issue above and try again.
```

Or one of these variations:

```
Error: Error locking state: Error acquiring the state lock
Error: Failed to load backend: Error configuring the backend "s3"
Error: Backend initialization required, please run "terraform init"
Error: Resource already exists
```

All of these relate to Terraform's state management: the mechanism that tracks what infrastructure exists and maps it to your configuration.
Why This Happens
Terraform uses a state file to track every resource it manages. When using a remote backend (S3, GCS, Azure Blob, Terraform Cloud), a lock prevents concurrent operations from corrupting the state.
The “Error acquiring the state lock” means one of these things:
- A previous Terraform run is still in progress. Another plan or apply is actively running, and the lock is legitimately held.
- A previous run crashed or was interrupted. You hit Ctrl+C during an apply, your CI pipeline timed out, or your terminal disconnected. The lock was never released.
- DynamoDB table issues (AWS). The lock table doesn’t exist, has the wrong schema, or your IAM permissions don’t allow access to it.
- Network/permissions problem. Terraform can reach the state file but can’t read or write the lock table.
The “Failed to load backend” and “Backend initialization required” errors happen when your backend configuration changed, your .terraform directory is corrupted, or there’s a mismatch between your local state and remote backend.
Fix 1: Wait for the Other Run to Finish
Before doing anything else, check if someone (or a CI pipeline) is actually running Terraform right now.
The lock info in the error tells you:
- Who — the user and hostname holding the lock
- Created — when the lock was acquired
- Operation — what they’re doing (plan or apply)
If the lock was created minutes ago and you know a teammate is working, wait. Ask them.
If the lock was created hours or days ago, it’s almost certainly stale. Move on to Fix 2.
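If you want to script this age check (in CI, for example), here is a minimal Python sketch. It parses the timestamp format shown in the Created field of the lock info above; the one-hour threshold is an arbitrary assumption of mine, not anything Terraform defines:

```python
from datetime import datetime, timezone

def lock_age_hours(created: str) -> float:
    """Age in hours of a Terraform lock, given its 'Created' field,
    e.g. '2026-04-05 14:23:01.123456 +0000 UTC'."""
    # Drop the trailing 'UTC' label; the numeric offset (+0000) is enough.
    ts = datetime.strptime(created.rsplit(" ", 1)[0],
                           "%Y-%m-%d %H:%M:%S.%f %z")
    return (datetime.now(timezone.utc) - ts).total_seconds() / 3600

created = "2026-04-05 14:23:01.123456 +0000 UTC"
if lock_age_hours(created) > 1:  # arbitrary threshold: treat >1h as stale
    print("lock looks stale -- consider force-unlock")
```

This only automates the "how old is it" half of the decision; confirming that no Terraform process is actually running is still on you.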
Fix 2: Force-Unlock a Stale Lock
If the lock is from a crashed or interrupted run, force-release it:
```shell
terraform force-unlock LOCK_ID
```

Use the lock ID from the error message:

```shell
terraform force-unlock a1b2c3d4-e5f6-7890-abcd-ef1234567890
```

Terraform will ask you to confirm. To skip the confirmation (useful in CI):

```shell
terraform force-unlock -force a1b2c3d4-e5f6-7890-abcd-ef1234567890
```

When this is safe: The lock is from a run that crashed, timed out, or was killed. No Terraform process is currently modifying your state.
When this is dangerous: Another process is genuinely running. Force-unlocking while another apply is in progress can corrupt your state file. Always confirm the lock is stale before force-unlocking.
After unlocking, run your command again:
```shell
terraform plan
```

Fix 3: Fix Your DynamoDB Lock Table (AWS S3 Backend)
If you’re using the S3 backend with DynamoDB for locking (the standard AWS setup), the lock table must exist and have the correct schema.
Verify the table exists
```shell
aws dynamodb describe-table --table-name terraform-state-lock
```

If you get ResourceNotFoundException, the table doesn't exist. Create it:

```shell
aws dynamodb create-table \
  --table-name terraform-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```

The critical detail: the partition key must be named LockID (case-sensitive) with type String. With any other name, Terraform either fails silently to acquire locks or throws errors.
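If you'd rather check the schema programmatically, the JSON that describe-table prints can be validated in a few lines of Python. An illustrative sketch (the helper name and the trimmed inline sample are mine, not part of any AWS SDK):

```python
import json

def lock_schema_ok(describe_table_output: str) -> bool:
    """True if the table uses the LockID (String, HASH) schema that
    Terraform's S3 backend requires for locking."""
    table = json.loads(describe_table_output)["Table"]
    keys = {k["AttributeName"]: k["KeyType"] for k in table["KeySchema"]}
    attrs = {a["AttributeName"]: a["AttributeType"]
             for a in table["AttributeDefinitions"]}
    return keys == {"LockID": "HASH"} and attrs.get("LockID") == "S"

# Sample shaped like `aws dynamodb describe-table` output (trimmed).
sample = """{"Table": {
  "KeySchema": [{"AttributeName": "LockID", "KeyType": "HASH"}],
  "AttributeDefinitions": [{"AttributeName": "LockID", "AttributeType": "S"}]
}}"""
print(lock_schema_ok(sample))  # True
```

A table that passes this check but still can't be locked usually points at IAM permissions rather than schema.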
Verify your backend config matches
Your backend.tf or main.tf should look like:
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "my-project/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
```

The dynamodb_table value must exactly match the table name you created. If you've recently renamed the table or changed the backend config, run:
```shell
terraform init -reconfigure
```

Verify IAM permissions
Your IAM user or role needs these DynamoDB permissions on the lock table:
```json
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:GetItem",
    "dynamodb:PutItem",
    "dynamodb:DeleteItem"
  ],
  "Resource": "arn:aws:dynamodb:*:*:table/terraform-state-lock"
}
```

And these S3 permissions on the state bucket:
```json
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::my-terraform-state",
    "arn:aws:s3:::my-terraform-state/*"
  ]
}
```

If your credentials are misconfigured entirely, you'll see a different error. See Fix: Unable to Locate Credentials (AWS CLI / SDK) for that.
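To sanity-check a policy document before digging further, you can diff its allowed actions against what the lock needs. A simplified sketch of my own (it ignores Deny statements, wildcards like dynamodb:*, and resource matching, so treat a pass as necessary but not sufficient):

```python
import json

REQUIRED_LOCK_ACTIONS = {"dynamodb:GetItem", "dynamodb:PutItem",
                         "dynamodb:DeleteItem"}

def missing_lock_actions(policy_json: str) -> set:
    """Return the DynamoDB actions state locking needs that the
    policy's Allow statements don't list explicitly."""
    stmts = json.loads(policy_json)["Statement"]
    if isinstance(stmts, dict):  # "Statement" may be a single object
        stmts = [stmts]
    allowed = set()
    for stmt in stmts:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        allowed |= set([actions] if isinstance(actions, str) else actions)
    return REQUIRED_LOCK_ACTIONS - allowed

policy = '{"Statement": [{"Effect": "Allow", "Action": ["dynamodb:GetItem", "dynamodb:PutItem"]}]}'
print(missing_lock_actions(policy))  # {'dynamodb:DeleteItem'}
```

For an authoritative answer, use the IAM policy simulator instead; this only catches the obvious omissions.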
Fix 4: Reinitialize the Backend
If you see “Failed to load backend” or “Backend initialization required,” your .terraform directory is out of sync with your backend configuration.
Basic reinitialize
```shell
terraform init
```

If that fails with a backend mismatch error:

```shell
terraform init -reconfigure
```

This tells Terraform to ignore the existing backend configuration in .terraform and set up a fresh connection to the backend defined in your code.
Important: -reconfigure does not migrate state. If you’re intentionally moving from one backend to another (e.g., local to S3), use -migrate-state instead:
```shell
terraform init -migrate-state
```

This copies your existing state to the new backend.
Nuclear option: delete .terraform
If init -reconfigure still fails, delete the .terraform directory and reinitialize:
```shell
rm -rf .terraform
rm -f .terraform.lock.hcl
terraform init
```

The .terraform directory contains cached providers, modules, and backend config. Deleting it forces a complete fresh start. Your actual state file (remote or local) is not affected.
Note: Only delete .terraform.lock.hcl if you’re also willing to re-resolve all provider versions. If you want to keep your locked versions, leave it in place.
Fix 5: Fix State File Corruption
If your state file got corrupted (partial write, concurrent modification, manual editing gone wrong), you’ll see errors like:
```
Error refreshing state: state snapshot was created by Terraform v1.9.0,
which is newer than current v1.8.0
```

```
Error: Failed to load state: unsupported state file format
```

Download and inspect the state
```shell
terraform state pull > current-state.json
```

If this fails, download the state file directly from your backend:

```shell
# S3 backend
aws s3 cp s3://my-terraform-state/my-project/terraform.tfstate ./corrupted-state.json
```

Check if the file is valid JSON (see Fix: JSON Parse Unexpected Token for JSON debugging tips):

```shell
python3 -m json.tool corrupted-state.json > /dev/null
```

If it's not valid JSON, your state file was partially written. Check your S3 bucket for previous versions:
```shell
aws s3api list-object-versions \
  --bucket my-terraform-state \
  --prefix my-project/terraform.tfstate
```

Restore a previous version:
```shell
aws s3api get-object \
  --bucket my-terraform-state \
  --key my-project/terraform.tfstate \
  --version-id YOUR_VERSION_ID \
  restored-state.json
```

Always enable versioning on your state bucket. It's your safety net for state corruption:
```shell
aws s3api put-bucket-versioning \
  --bucket my-terraform-state \
  --versioning-configuration Status=Enabled
```

Downgrade/upgrade state version
If the error mentions a version mismatch, you need to use the same or newer Terraform version that last wrote the state. Terraform state files are forward-compatible but not backward-compatible. You can’t use Terraform 1.8 to read state written by Terraform 1.9.
Either upgrade your Terraform version or, as a last resort, manually edit the terraform_version field in the state JSON (risky — only do this if the versions are very close and you understand the implications).
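The check itself is mechanical: compare the terraform_version recorded in the state against your local binary's version. A minimal sketch, assuming plain x.y.z version strings with no pre-release suffixes (the function name and sample state are mine):

```python
import json

def state_newer_than_local(state_json: str, local_version: str) -> bool:
    """True if the state snapshot was written by a newer Terraform than
    the local binary, i.e. the local binary will refuse to read it."""
    written_by = json.loads(state_json)["terraform_version"]
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(written_by) > as_tuple(local_version)

state = '{"terraform_version": "1.9.0", "serial": 42}'
print(state_newer_than_local(state, "1.8.0"))  # True  -> upgrade your Terraform
print(state_newer_than_local(state, "1.9.0"))  # False -> safe to read
```

Feed it the output of terraform state pull and the version from terraform version to decide whether an upgrade is needed before touching anything else.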
Fix 6: Handle “Resource Already Exists” with Import
When you see:
```
Error: A resource with the ID "/subscriptions/.../myResource" already exists
```

This means the real infrastructure exists but Terraform doesn't know about it. The resource isn't in your state file, so Terraform tries to create it and fails.
Import the existing resource
```shell
terraform import aws_s3_bucket.my_bucket my-bucket-name
```

For Terraform 1.5+, you can use import blocks in your configuration instead:

```hcl
import {
  to = aws_s3_bucket.my_bucket
  id = "my-bucket-name"
}
```

Then run:
```shell
terraform plan
```

Terraform generates a plan that includes importing the resource. This is cleaner than the CLI import because it's code-reviewable and works in CI pipelines.
Find the right import ID
The import ID format varies by resource type. Check the Terraform provider documentation for the specific resource. Common patterns:
- AWS: usually the resource's name or ID (my-bucket-name, sg-12345678)
- Azure: the full resource ID (/subscriptions/.../resourceGroups/.../providers/...)
- GCP: the project/zone/name format (projects/my-project/zones/us-central1-a/instances/my-instance)
Fix 7: Fix Provider Version Constraints
```
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/aws: locked provider registry.hashicorp.com/hashicorp/aws 5.30.0
does not match configured version constraint ~> 4.0
```

Your .terraform.lock.hcl has locked a provider version that conflicts with your version constraints.
Update the lock file
```shell
terraform init -upgrade
```

This re-resolves all provider versions within your constraints and updates the lock file.
If you need to change the constraint itself, update your required_providers block:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```

Then run terraform init -upgrade again.
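To see why 5.30.0 fails a ~> 4.0 constraint, it helps to spell out the ~> ("pessimistic") operator: the rightmost component you write is allowed to float upward, and everything to its left is pinned. A rough Python model of just that operator (real Terraform constraint handling also covers >=, comma-separated constraint lists, and pre-release versions):

```python
def satisfies_pessimistic(version: str, constraint: str) -> bool:
    """Rough model of Terraform's ~> constraint:
    ~> 4.0   allows >= 4.0.0 and < 5.0.0
    ~> 4.1.0 allows >= 4.1.0 and < 4.2.0"""
    base = [int(p) for p in constraint.removeprefix("~>").strip().split(".")]
    v = ([int(p) for p in version.split(".")] + [0, 0, 0])[:3]
    lower = (base + [0, 0, 0])[:3]
    upper = list(lower)
    bump = max(len(base) - 2, 0)  # component that gets incremented
    upper[bump] += 1
    for i in range(bump + 1, 3):  # zero out everything to its right
        upper[i] = 0
    return lower <= v < upper

print(satisfies_pessimistic("5.30.0", "~> 4.0"))  # False -> the error above
print(satisfies_pessimistic("4.67.0", "~> 4.0"))  # True
print(satisfies_pessimistic("5.30.0", "~> 5.0"))  # True
```

So the two valid fixes are exactly the ones shown: loosen the constraint to cover the locked version, or re-resolve the lock file within the existing constraint.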
Fix 8: State Surgery with terraform state
When resources get into a bad state — duplicated, orphaned, or moved — use state commands to fix them.
Remove a resource from state (without destroying it)
```shell
terraform state rm aws_s3_bucket.my_bucket
```

This tells Terraform to "forget" about the resource. The actual infrastructure stays untouched. Useful when you want to stop managing a resource or re-import it.
Move a resource (renamed or refactored)
```shell
terraform state mv aws_s3_bucket.old_name aws_s3_bucket.new_name
```

For Terraform 1.1+, use moved blocks in your config instead:

```hcl
moved {
  from = aws_s3_bucket.old_name
  to   = aws_s3_bucket.new_name
}
```

This is self-documenting and works across team members without everyone running CLI commands.
List all resources in state
```shell
terraform state list
```

Show details of a specific resource

```shell
terraform state show aws_s3_bucket.my_bucket
```

Still Not Working?
The lock is stuck and force-unlock fails
If force-unlock itself throws an error, delete the lock directly from DynamoDB:
```shell
aws dynamodb delete-item \
  --table-name terraform-state-lock \
  --key '{"LockID": {"S": "my-terraform-state/my-project/terraform.tfstate"}}'
```

The LockID value is the full S3 path to your state file (bucket + key). Scan the table to find the exact value if you're unsure:

```shell
aws dynamodb scan --table-name terraform-state-lock
```

S3 bucket policy or encryption blocking access
If your state bucket uses KMS encryption, you need kms:Encrypt, kms:Decrypt, and kms:GenerateDataKey permissions on the KMS key. A common misconfiguration is having S3 permissions but missing KMS permissions:
```json
{
  "Effect": "Allow",
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-kms-key-id"
}
```

Also check for S3 bucket policies that restrict access by IP, VPC endpoint, or IAM principal. These can silently block Terraform even when your IAM permissions look correct.
Terraform Cloud / Terraform Enterprise lock issues
If you’re using Terraform Cloud or Enterprise, locks are managed differently. You can’t use force-unlock with a lock ID. Instead:
- Go to your workspace in the Terraform Cloud UI
- Click Settings > General
- Scroll to Force Unlock
- Click Force Unlock to release the lock
Or via the API:
```shell
curl -s \
  --header "Authorization: Bearer $TFC_TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  "https://app.terraform.io/api/v2/workspaces/WORKSPACE_ID/actions/force-unlock"
```

.terraform directory from a different OS or machine
The .terraform directory contains platform-specific provider binaries. If you copy your project from Linux to macOS (or vice versa), or from one CI runner to another with a different architecture, you’ll get errors about missing or incompatible providers.
Delete .terraform and reinitialize:
```shell
rm -rf .terraform
terraform init
```

Never commit .terraform to version control. If you're new to Git configuration, see Fix: fatal: not a git repository for getting started. Your .gitignore should include:
```
.terraform/
*.tfstate
*.tfstate.backup
```

State shows changes on every plan with no config changes
If terraform plan always shows changes even when you haven’t modified anything, common causes include:
- Attributes that change outside Terraform (e.g., tags_all in AWS). Tell Terraform to ignore them with a lifecycle block:

```hcl
lifecycle {
  ignore_changes = [tags_all]
}
```

- Non-deterministic values in your config (timestamps, random strings computed at plan time). Use terraform_data or move the computation to locals.
- Provider version differences between team members or CI. Lock your provider versions with .terraform.lock.hcl and commit this file.
Backend configuration in YAML files causing parse errors
If your backend config is generated from other files or templates, make sure the output is valid HCL. Terraform’s HCL parser is strict about syntax. Common mistakes include trailing commas, unquoted strings where quotes are needed, and missing closing braces. If you’re debugging HCL syntax issues, see Fix: YAML Mapping Values Are Not Allowed Here — the mental model for debugging config syntax errors is similar.
Running Terraform in CI/CD and the lock keeps timing out
CI pipelines often have multiple jobs or stages that run Terraform. If one job fails mid-apply, the next job can’t acquire the lock. Solutions:
- Add a force-unlock step before plan/apply in your pipeline (only for non-production environments)
- Increase the lock timeout:

```shell
terraform plan -lock-timeout=300s
```

- Use separate state files per environment to avoid lock contention between pipelines:
```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "env/${var.environment}/terraform.tfstate" # won't work -- see below
  }
}
```

Note: You can't use variables in backend blocks. Use partial configuration instead:

```shell
terraform init -backend-config="key=env/production/terraform.tfstate"
```

Or use workspaces:
```shell
terraform workspace new production
terraform workspace select production
```

Connection to Kubernetes cluster failing during Terraform apply
If your Terraform configuration manages Kubernetes resources and you’re getting connection errors from the Kubernetes provider, the cluster endpoint or credentials may have changed since the state was last written. This is common after cluster recreation or credential rotation. For Kubernetes connection troubleshooting, see Fix: The Connection to the Server localhost:8080 Was Refused.
Related: If your Terraform AWS operations fail with credential errors, see Fix: Unable to Locate Credentials (AWS CLI / SDK).
Related Articles
Fix: Terraform Failed to install provider (or Failed to query available provider packages)
How to fix 'Failed to install provider' and 'Failed to query available provider packages' errors in Terraform, covering registry issues, version constraints, network problems, platform support, and air-gapped environments.
Fix: AWS S3 Access Denied (403 Forbidden) when uploading, downloading, or listing
How to fix the 'Access Denied' (403 Forbidden) error in AWS S3 when uploading, downloading, listing, or managing objects using the CLI, SDK, or console.
Fix: Unable to Locate Credentials (AWS CLI / SDK)
How to fix 'Unable to locate credentials', 'NoCredentialProviders: no valid providers in chain', and 'The security token included in the request is expired' errors in AWS CLI, SDKs, and applications running on EC2, ECS, Lambda, and Docker.
Fix: Ansible UNREACHABLE – Failed to Connect to the Host via SSH
How to fix Ansible UNREACHABLE errors caused by SSH connection failures, wrong credentials, host key issues, or Python interpreter problems on remote hosts.