Fix: AWS S3 Access Denied (403 Forbidden) when uploading, downloading, or listing
Quick Answer
S3 is returning 403 because some permission layer denied the request, not because your credentials are bad. Work through the layers in order: the caller's IAM policy, the bucket policy, Block Public Access settings, ACL/ownership settings, VPC endpoint policies, KMS key permissions, and Organizations SCPs. The fixes below cover each layer for uploads, downloads, listing, and object management via the CLI, SDK, or console.
The Error
You try to upload, download, or list objects in an S3 bucket and get:
```
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
```

Or when listing a bucket:

```
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
```

In the AWS SDK (for example, Python boto3):

```
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
```

In the AWS Console, you may see:

```
Insufficient permissions to list objects
```

Or when downloading via a browser or curl:

```xml
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>ABCDEFGH12345678</RequestId>
  <HostId>...</HostId>
</Error>
```

All of these mean the same thing: the request to S3 was authenticated (your credentials are valid), but the action was explicitly denied by one or more permission layers. This is an authorization problem, not an authentication problem. If your credentials themselves are missing or invalid, see Fix: Unable to Locate Credentials (AWS CLI / SDK).
Why This Happens
S3 access control is evaluated through multiple independent permission layers. A request must be allowed by all applicable layers to succeed. If any single layer denies the request, you get a 403 Access Denied error. This is where the confusion comes from — you might have the right IAM policy but still get denied because a bucket policy, ACL, or organizational policy is blocking the request.
The permission layers that S3 evaluates are:
- IAM policies — Attached to the user, group, or role making the request. This is the most common place to grant S3 access.
- S3 bucket policies — Attached to the bucket itself. These can allow or deny access for specific principals, IP ranges, VPCs, or conditions.
- S3 Access Control Lists (ACLs) — A legacy access mechanism. Modern buckets have ACLs disabled by default (Bucket Owner Enforced).
- S3 Block Public Access — Account-level and bucket-level settings that override any policy or ACL that would grant public access.
- VPC endpoint policies — If the request goes through a VPC endpoint, the endpoint’s policy must also allow the action.
- KMS key policies — If the bucket uses SSE-KMS encryption, the caller needs permissions on the KMS key in addition to S3 permissions.
- AWS Organizations Service Control Policies (SCPs) — Organization-wide guardrails that can restrict what actions are allowed, regardless of IAM policies.
A request is denied if any of these layers says “no.” You need to check each one to find the blocker.
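As a mental model, the layered evaluation can be sketched as a small function. The layer names and three-valued decisions here are illustrative, not an AWS API, and real evaluation has more nuance (cross-account access, for instance, needs an allow on both sides):

```python
# Sketch of S3's authorization model: an explicit deny from any layer wins,
# and with no explicit allow the request falls through to an implicit deny.
# Layer names and decision values here are illustrative, not an AWS API.

def evaluate_request(layer_decisions):
    """layer_decisions maps layer name -> 'allow' | 'deny' | 'not_applicable'."""
    decisions = layer_decisions.values()
    if "deny" in decisions:
        return "AccessDenied"   # explicit deny always wins
    if "allow" in decisions:
        return "Allowed"
    return "AccessDenied"       # implicit deny: no layer granted access

# Correct IAM policy, but the bucket policy denies (e.g. an IP condition):
print(evaluate_request({"iam_policy": "allow", "bucket_policy": "deny"}))
# AccessDenied
```

An explicit deny from any layer short-circuits everything else, which is why a correct IAM policy alone is never a guarantee.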
Fix 1: Grant IAM User or Role Permissions
The most common cause of Access Denied is that the IAM principal (user or role) making the request simply doesn’t have the required S3 permissions.
Check what permissions the caller currently has:
```
aws sts get-caller-identity
```

This tells you which IAM user or role is making the request. Then check that principal's IAM policies.
To grant full S3 access to a specific bucket, attach an inline or managed policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```

Critical detail: Notice the two separate resource ARNs. `s3:ListBucket` applies to the bucket itself (`arn:aws:s3:::my-bucket`), while `s3:GetObject`, `s3:PutObject`, and `s3:DeleteObject` apply to objects within the bucket (`arn:aws:s3:::my-bucket/*`). Forgetting the `/*` suffix for object actions, or omitting the bucket-level ARN for listing, is one of the most frequent IAM mistakes.
You can test whether your IAM policy is the problem using the IAM policy simulator:
```
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/my-user \
  --action-names s3:GetObject \
  --resource-arns arn:aws:s3:::my-bucket/my-file.txt
```

If the result is implicitDeny, the IAM policy doesn't grant the action. If it's explicitDeny, there's a deny statement somewhere actively blocking it.
Common Mistake: Forgetting the `/*` suffix on the Resource ARN for object-level actions. `s3:GetObject` on `arn:aws:s3:::my-bucket` (without `/*`) grants zero access to actual objects — you need `arn:aws:s3:::my-bucket/*` for object operations.
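To keep the bucket-ARN vs. object-ARN distinction straight, here is a tiny illustrative helper (the action set is abbreviated, not exhaustive):

```python
# Illustrative helper: which Resource ARN an S3 action must be granted on.
# Bucket-level actions target the bucket ARN itself; object-level actions
# need the /* form. The action set below is abbreviated, not exhaustive.

BUCKET_LEVEL_ACTIONS = {"s3:ListBucket", "s3:GetBucketLocation", "s3:GetBucketPolicy"}

def required_resource_arn(action, bucket):
    base = f"arn:aws:s3:::{bucket}"
    return base if action in BUCKET_LEVEL_ACTIONS else base + "/*"

print(required_resource_arn("s3:ListBucket", "my-bucket"))  # arn:aws:s3:::my-bucket
print(required_resource_arn("s3:GetObject", "my-bucket"))   # arn:aws:s3:::my-bucket/*
```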
Fix 2: Update the S3 Bucket Policy
Even if the IAM policy grants access, the bucket policy can independently deny or fail to allow the request. This is especially relevant for cross-account access, where the bucket policy must explicitly grant access to the external principal.
View the current bucket policy:
```
aws s3api get-bucket-policy --bucket my-bucket --output text | python -m json.tool
```

If there's no bucket policy, this command returns an error, which is fine — it means no bucket-level restrictions or grants exist beyond IAM.
A permissive bucket policy for a specific IAM user looks like this:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSpecificUser",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/my-user"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```

Watch out for explicit deny statements in the bucket policy. An explicit deny always wins, regardless of what IAM policies or other allow statements say. Look for `"Effect": "Deny"` in the bucket policy and check whether its conditions match your request.
A common deny pattern is IP-based restriction:
```json
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": "203.0.113.0/24"
    }
  }
}
```

If your IP address is not in the allowed range, every request gets denied — even with valid IAM permissions.
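You can check whether a client IP falls inside such a CIDR range with Python's standard ipaddress module (the addresses below are documentation examples):

```python
# Check whether a client IP falls inside the CIDR range a bucket policy's
# aws:SourceIp condition allows. Addresses are documentation examples.
import ipaddress

def ip_allowed(client_ip, allowed_cidr):
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(allowed_cidr)

print(ip_allowed("203.0.113.42", "203.0.113.0/24"))  # True  -> request passes
print(ip_allowed("198.51.100.7", "203.0.113.0/24"))  # False -> AccessDenied
```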
Fix 3: Check S3 Block Public Access Settings
S3 Block Public Access is an account-level and bucket-level feature that overrides any bucket policy or ACL that grants public access. If you’re trying to make objects publicly accessible and getting Access Denied, this is likely the cause.
Check bucket-level settings:
```
aws s3api get-public-access-block --bucket my-bucket
```

Check account-level settings:

```
aws s3control get-public-access-block --account-id 123456789012
```

The four settings are:
- BlockPublicAcls — Rejects PUT requests that include public ACLs.
- IgnorePublicAcls — Ignores all public ACLs on the bucket and its objects.
- BlockPublicPolicy — Rejects bucket policies that grant public access.
- RestrictPublicBuckets — Restricts access to buckets with public policies to only AWS service principals and authorized users.
If any of these are true at the account level, they override the bucket-level settings. To allow public access (if that’s genuinely what you need), disable the relevant settings at both levels:
```
aws s3api put-public-access-block \
  --bucket my-bucket \
  --public-access-block-configuration \
  "BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false"
```

Warning: Disabling Block Public Access should only be done when you intentionally need public access, such as hosting a static website. For most use cases, keep it enabled and use IAM policies and bucket policies to grant access to specific principals.
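The account-level and bucket-level settings combine as a logical OR: a restriction enabled at either level applies. A minimal sketch of that merge, using the real setting names (the merge function itself is illustrative):

```python
# The effective Block Public Access configuration is the OR of account-level
# and bucket-level settings: a restriction enabled at either level applies.
# Setting names match the S3 API; the merge logic is a simplified sketch.

SETTINGS = ["BlockPublicAcls", "IgnorePublicAcls",
            "BlockPublicPolicy", "RestrictPublicBuckets"]

def effective_block_public_access(account, bucket):
    return {k: bool(account.get(k, False) or bucket.get(k, False)) for k in SETTINGS}

# Bucket-level settings are all off, but the account still blocks public policies:
print(effective_block_public_access(
    {"BlockPublicPolicy": True},
    {k: False for k in SETTINGS},
))  # BlockPublicPolicy stays True despite the bucket-level False
```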
Fix 4: Fix ACL vs. Bucket Policy Conflicts
Access Control Lists (ACLs) are S3’s legacy access mechanism. Since April 2023, new buckets are created with ACLs disabled by default (the “Bucket Owner Enforced” setting). If your bucket uses this setting, ACL-based grants are ignored and all access must be managed through IAM policies and bucket policies.
Check your bucket’s ownership setting:
```
aws s3api get-bucket-ownership-controls --bucket my-bucket
```

If you see BucketOwnerEnforced, ACLs are disabled. Any application code or CLI command that tries to set an ACL (like `--acl public-read` or `--acl bucket-owner-full-control`) will fail with Access Denied:

```
# This fails if ACLs are disabled
aws s3 cp file.txt s3://my-bucket/ --acl public-read
```

Fix: Remove the `--acl` flag from your commands or SDK calls. If you need cross-account access, use bucket policies instead of ACLs.
If you genuinely need ACLs (rare), change the ownership setting:
```
aws s3api put-bucket-ownership-controls \
  --bucket my-bucket \
  --ownership-controls '{"Rules":[{"ObjectOwnership":"BucketOwnerPreferred"}]}'
```

Fix 5: Configure Cross-Account Access
When accessing an S3 bucket from a different AWS account, both accounts need to grant permission. The requesting account’s IAM policy must allow the S3 action, and the bucket-owning account’s bucket policy must allow the requesting principal.
In the bucket-owning account, add a bucket policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/CrossAccountRole"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```

In the requesting account, the IAM role must have a policy allowing S3 actions on the bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```

For cross-account object uploads, there's an additional ownership issue. By default, the uploading account owns the object, not the bucket owner. This means the bucket owner can't access objects uploaded by another account. Fix this by requiring uploads to grant bucket-owner-full-control:
```json
{
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control"
    }
  }
}
```

Or, better yet, enable Bucket Owner Enforced ownership, which automatically makes the bucket owner the owner of all objects regardless of who uploaded them.
If you’re managing cross-account infrastructure with Terraform and running into state issues, see Fix: Terraform Error Acquiring the State Lock.
Fix 6: Update VPC Endpoint Policy
If your application accesses S3 through a VPC endpoint (gateway endpoint), the endpoint’s policy must also allow the specific S3 actions. A restrictive VPC endpoint policy can silently deny S3 requests even when IAM and bucket policies are correct.
Check the endpoint policy:
```
aws ec2 describe-vpc-endpoints --filters "Name=service-name,Values=com.amazonaws.us-east-1.s3"
```

The default policy allows all actions to all S3 resources. But if someone restricted it, it might look like this:
```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::approved-bucket/*"
    }
  ]
}
```

This policy only allows GetObject on one specific bucket. Any other action or bucket access through this VPC endpoint gets denied. Update the policy to include the actions and buckets you need.
Also confirm that the route table associated with your subnet includes a route to the S3 VPC endpoint. If traffic is going over the internet instead of through the endpoint, the endpoint policy doesn’t apply — but the bucket policy might have a condition requiring aws:sourceVpce, which would deny the internet-routed request.
Fix 7: Grant KMS Key Permissions for SSE-KMS Encrypted Buckets
If the bucket uses server-side encryption with AWS KMS (SSE-KMS), the caller needs permissions on the KMS key in addition to S3 permissions. Without KMS access, you get Access Denied when trying to upload or download objects.
Check the bucket’s encryption configuration:
```
aws s3api get-bucket-encryption --bucket my-bucket
```

If you see aws:kms as the encryption type, note the KMS key ARN. Then ensure the caller has the required KMS permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/key-id-here"
    }
  ]
}
```

- Downloading objects requires `kms:Decrypt`.
- Uploading objects requires `kms:GenerateDataKey` (S3 needs to generate a data key to encrypt the object).
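A rough summary of which KMS permissions each operation needs (illustrative helper, not an AWS API; the CopyObject entry assumes both source and destination use SSE-KMS):

```python
# Rough map of the KMS permissions each S3 operation needs on an SSE-KMS
# bucket, on top of the S3 permissions themselves. Illustrative only; the
# CopyObject entry assumes both source and destination use SSE-KMS.

KMS_PERMS = {
    "GetObject": ["kms:Decrypt"],
    "PutObject": ["kms:GenerateDataKey"],
    "CopyObject": ["kms:Decrypt", "kms:GenerateDataKey"],
}

def missing_kms_perms(operation, granted):
    return [p for p in KMS_PERMS.get(operation, []) if p not in granted]

# A role that can only decrypt will fail uploads with AccessDenied:
print(missing_kms_perms("PutObject", {"kms:Decrypt"}))  # ['kms:GenerateDataKey']
```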
For cross-account scenarios, the KMS key policy must also allow the external principal:
```json
{
  "Sid": "AllowCrossAccountDecrypt",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111111111111:role/CrossAccountRole"
  },
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "*"
}
```

If the bucket uses the default AWS-managed key (aws/s3), cross-account access is not possible through KMS key policy changes because you cannot modify the key policy of AWS-managed keys. In this case, switch to a customer-managed KMS key.
Fix 8: Fix Presigned URL Issues
Presigned URLs grant temporary access to S3 objects, but they can fail with Access Denied for several reasons.
The credentials that generated the URL have expired. If the presigned URL was created with temporary credentials (STS, SSO, instance role), the URL becomes invalid when those credentials expire — even if the URL’s own expiration time hasn’t been reached.
The IAM user or role that generated the URL no longer has permission. The URL inherits the permissions of the creator. If their IAM policy was changed or their access keys were deactivated, all presigned URLs they created stop working.
The URL was generated for the wrong region. S3 presigned URLs are region-specific. If the bucket is in eu-west-1 but the URL was generated targeting us-east-1, it fails:
```
# Ensure the region matches the bucket's region
aws s3 presign s3://my-bucket/file.txt --region eu-west-1 --expires-in 3600
```

The URL has expired. The default expiration for presigned URLs is 3600 seconds (1 hour). The maximum depends on the credential type: IAM users can go up to 7 days, while STS/role credentials are limited by the session duration.
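A SigV4 presigned URL carries its signing time and lifetime in the query string, so you can check expiry without calling AWS. A small stdlib-only sketch (the URL and function name are fabricated examples):

```python
# Inspect a SigV4 presigned URL's query string to see when it was signed
# and when it expires; pure string parsing, no AWS calls. The URL below
# is a fabricated example.
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def presigned_expiry(url):
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ")
    signed_at = signed_at.replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))

url = ("https://my-bucket.s3.amazonaws.com/file.txt"
       "?X-Amz-Date=20240101T120000Z&X-Amz-Expires=3600&X-Amz-Signature=abc")
print(presigned_expiry(url))  # 2024-01-01 13:00:00+00:00
```

Remember that an unexpired URL can still fail if the credentials that signed it have expired or lost permission.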
To debug, try generating a fresh presigned URL and testing it immediately:
```
URL=$(aws s3 presign s3://my-bucket/file.txt --expires-in 300)
curl -I "$URL"
```

If this works but older URLs don't, it's a credential or expiration issue. If you're dealing with environment variable problems in your application that generates presigned URLs, see Fix: process.env.VARIABLE_NAME Is Undefined.
Fix 9: Handle Bucket Owner Enforced (ACLs Disabled)
Since April 2023, all new S3 buckets are created with the Bucket Owner Enforced setting, which disables ACLs entirely. This means:
- All objects in the bucket are owned by the bucket owner, regardless of who uploaded them.
- Any request that includes an ACL header (`x-amz-acl`, `x-amz-grant-*`) fails with Access Denied.
- Legacy applications that set ACLs on PUT requests break.
The fix depends on your situation:
If your application explicitly sets ACLs when uploading:
Remove all ACL-related parameters from your code. In the AWS CLI, remove --acl. In SDKs, remove the ACL parameter from PutObject calls.
```python
# Before (fails with Bucket Owner Enforced)
s3.put_object(Bucket='my-bucket', Key='file.txt', Body=data, ACL='private')

# After (works)
s3.put_object(Bucket='my-bucket', Key='file.txt', Body=data)
```

```
# Before
aws s3 cp file.txt s3://my-bucket/ --acl bucket-owner-full-control

# After
aws s3 cp file.txt s3://my-bucket/
```

If you're using a third-party tool that sets ACLs and you can't change it, you can change the bucket's ownership setting to BucketOwnerPreferred to re-enable ACLs. But this is a step backward in security posture.
Fix 10: Check AWS Organizations Service Control Policies (SCPs)
If your AWS account is part of an AWS Organization, Service Control Policies (SCPs) can restrict S3 access at the organizational unit (OU) or account level. SCPs act as permission boundaries — they don’t grant access, but they can deny it.
You won’t see SCPs in the IAM console for the individual account. You need access to the management account (or delegated admin) to view them:
```
aws organizations list-policies --filter SERVICE_CONTROL_POLICY
```

Then inspect each policy:

```
aws organizations describe-policy --policy-id p-1234567890
```

Common SCP patterns that cause S3 Access Denied:
- Region restrictions — The SCP denies all actions outside specific regions. If the S3 bucket is in a region not allowed by the SCP, all operations fail.
- Service restrictions — The SCP explicitly denies S3 actions for certain accounts or OUs.
- Encryption requirements — The SCP denies `s3:PutObject` unless server-side encryption is specified.
Example SCP that requires encryption:
```json
{
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "*",
  "Condition": {
    "StringNotEquals": {
      "s3:x-amz-server-side-encryption": "aws:kms"
    }
  }
}
```

With this SCP, any upload without --sse aws:kms gets denied. The error message doesn't tell you it's an SCP — it's the same generic "Access Denied."
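The StringNotEquals condition matches (and triggers the deny) whenever the encryption header is missing or carries any other value. A simplified sketch of just that condition logic:

```python
# How the SCP's StringNotEquals condition evaluates: the Deny matches
# whenever the encryption header is absent or set to anything other than
# aws:kms. Simplified sketch of the condition logic only.

def scp_denies_put(request_headers):
    return request_headers.get("x-amz-server-side-encryption") != "aws:kms"

print(scp_denies_put({}))                                           # True (denied)
print(scp_denies_put({"x-amz-server-side-encryption": "AES256"}))   # True (denied)
print(scp_denies_put({"x-amz-server-side-encryption": "aws:kms"}))  # False (allowed)
```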
If you’re setting up Terraform providers for managing S3 and other AWS infrastructure and hitting provider installation errors, see Fix: Terraform Failed to Install Provider.
Fix 11: Use the Correct Region
S3 bucket names are globally unique, but buckets live in specific regions. Some operations require you to target the correct region, and mismatches can cause Access Denied or redirect errors.
```
# Find the bucket's region
aws s3api get-bucket-location --bucket my-bucket
```

If the output says null or "LocationConstraint": null, the bucket is in us-east-1. Otherwise, it shows the region name.
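If you script around get-bucket-location, normalize its quirky output first. A small helper (hypothetical function name; the null-means-us-east-1 and legacy "EU" values are documented S3 behavior):

```python
# Normalize get-bucket-location output: a null/empty LocationConstraint
# means us-east-1, and the legacy value "EU" means eu-west-1.

def bucket_region(location_constraint):
    if location_constraint in (None, ""):
        return "us-east-1"
    if location_constraint == "EU":
        return "eu-west-1"
    return location_constraint

print(bucket_region(None))         # us-east-1
print(bucket_region("eu-west-1"))  # eu-west-1
```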
Set the correct region when making requests:
```
aws s3 ls s3://my-bucket/ --region eu-west-1
```

Or set it as an environment variable:

```
export AWS_DEFAULT_REGION=eu-west-1
```

Region mismatches are particularly problematic with:
- VPC endpoint policies that are region-specific
- Bucket policies that include `aws:SourceVpc` or `aws:SourceVpce` conditions
- Presigned URLs generated with the wrong region
- SDK clients initialized with a hardcoded region that doesn’t match the bucket
If you’ve encountered CORS issues when accessing S3 from a browser-based application, the region mismatch can also manifest as CORS errors. See Fix: CORS ‘Access-Control-Allow-Origin’ Error for more on CORS configuration.
Why this matters: When you request a nonexistent object and lack `s3:ListBucket` permission, S3 returns 403 instead of 404. This is a deliberate security measure to prevent attackers from probing your bucket contents, but it makes debugging significantly harder.
Fix 12: Verify the Object Exists (404 Disguised as 403)
A subtle but common gotcha: if you request an object that doesn’t exist, and you don’t have s3:ListBucket permission on the bucket, S3 returns 403 Access Denied instead of 404 Not Found. This is by design — S3 doesn’t want to reveal whether an object exists to unauthorized callers.
Check if the object actually exists:
```
aws s3api head-object --bucket my-bucket --key path/to/file.txt
```

If you get Access Denied here too, grant `s3:ListBucket` on the bucket resource (not just the object resource) so you can distinguish between "not found" and "not authorized":
```json
{
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::my-bucket"
}
```

With ListBucket permission, S3 returns a proper 404 for missing objects, making debugging much easier.
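The resulting behavior can be summarized as a decision table (simplified sketch; only a GET/HEAD of a single object is modeled):

```python
# Decision table for a GET/HEAD of a single object: without s3:ListBucket,
# a missing object is indistinguishable from a permissions problem.
# Simplified sketch; real evaluation involves the layers discussed above.

def s3_status(object_exists, can_get_object, has_list_bucket):
    if object_exists:
        return 200 if can_get_object else 403
    return 404 if has_list_bucket else 403

print(s3_status(False, True, False))  # 403: looks like a permissions problem
print(s3_status(False, True, True))   # 404: the object is simply missing
```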
Still Not Working?
Enable AWS CloudTrail for S3 data events
CloudTrail can log every S3 API call with the exact error reason. Enable data events for your bucket:
```
aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[{"ReadWriteType":"All","IncludeManagementEvents":true,"DataResources":[{"Type":"AWS::S3::Object","Values":["arn:aws:s3:::my-bucket/"]}]}]'
```

Then look at the CloudTrail logs for the denied request. The errorCode and errorMessage fields tell you which policy layer denied the request.
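If you export the CloudTrail events as JSON, a short script can isolate the denied S3 calls (the helper and sample records below are fabricated for illustration):

```python
# Filter exported CloudTrail records down to denied S3 calls; the errorCode
# and errorMessage fields show which requests were rejected. The sample
# records are fabricated for illustration.

def denied_s3_events(records):
    return [r for r in records
            if r.get("eventSource") == "s3.amazonaws.com"
            and r.get("errorCode") == "AccessDenied"]

sample = [
    {"eventSource": "s3.amazonaws.com", "eventName": "PutObject",
     "errorCode": "AccessDenied", "errorMessage": "Access Denied"},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},  # succeeded
]
print([e["eventName"] for e in denied_s3_events(sample)])  # ['PutObject']
```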
Use IAM Access Analyzer
IAM Access Analyzer can help you identify why a specific request was denied. It evaluates all the permission layers (IAM, bucket policy, SCPs, VPC endpoint policies) and shows you the combined effect.
Check for S3 Object Lock or Legal Hold
If the bucket has S3 Object Lock enabled, certain operations (like deleting or overwriting objects) are blocked even with full S3 permissions. Check if Object Lock is enabled:
```
aws s3api get-object-lock-configuration --bucket my-bucket
```

If an object has a legal hold or retention period, you can't delete it until the hold is removed or the retention period expires.
Debug with the --debug flag
Just like with credential issues, the --debug flag shows detailed information about the request, including the exact authorization headers, the endpoint being called, and the raw response from S3:
```
aws s3 cp file.txt s3://my-bucket/ --debug 2>&1 | tail -50
```

Look for the HTTP response code and any XML error details in the output.
Ensure the bucket exists and you have the name right
Typos in bucket names lead to Access Denied (not "bucket not found") if someone else owns a bucket with that name. Double-check the bucket name. Bucket names must be all lowercase, and a single wrong character sends your request to someone else's bucket (or a nonexistent one); both paths produce the same 403.
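Before digging further into policies, it can be worth a quick sanity check that the name is even a plausible bucket name (a simplified subset of the real naming rules):

```python
# Sanity-check a bucket name before blaming permissions: names are 3-63
# characters of lowercase letters, digits, dots, and hyphens, starting and
# ending with a letter or digit. Simplified subset of the real rules.
import re

BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket_name(name):
    return bool(BUCKET_RE.match(name))

print(looks_like_valid_bucket_name("my-bucket"))  # True
print(looks_like_valid_bucket_name("My_Bucket"))  # False: uppercase and underscore
```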
Related: If your AWS CLI can’t find credentials at all, see Fix: Unable to Locate Credentials (AWS CLI / SDK). For Terraform state locking problems with S3 backends, see Fix: Terraform Error Acquiring the State Lock. For Terraform provider issues, see Fix: Terraform Failed to Install Provider. If environment variables for AWS config aren’t loading, see Fix: process.env.VARIABLE_NAME Is Undefined. For CORS issues when accessing S3 from frontend applications, see Fix: CORS ‘Access-Control-Allow-Origin’ Error.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.