AWS CLI S3: A client error (403) occurred when calling the HeadObject operation: Forbidden
I'm trying to set up an Amazon Linux AMI (ami-f0091d91) and have a script that runs a copy command to copy a file from an S3 bucket:
aws --debug s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .
This script works perfectly on my local machine but fails with the following error on the Amazon Image:
2016-03-22 01:07:47,110 - MainThread - botocore.auth - DEBUG - StringToSign:
HEAD
Tue, 22 Mar 2016 01:07:47 GMT
x-amz-security-token:AQoDYXdzEPr//////////wEa4ANtcDKVDItVq8Z5OKms8wpQ3MS4dxLtxVq6Om1aWDhLmZhL2zdqiasNBV4nQtVqwyPsRVyxl1Urq1BBCnZzDdl4blSklm6dvu+3efjwjhudk7AKaCEHWlTd/VR3cksSNMFTcI9aIUUwzGW8lD9y8MVpKzDkpxzNB7ZJbr9HQNu8uF/st0f45+ABLm8X4FsBPCl2I3wKqvwV/s2VioP/tJf7RGQK3FC079oxw3mOid5sEi28o0Qp4h/Vy9xEHQ28YQNHXOBafHi0vt7vZpOtOfCJBzXvKbk4zRXbLMamnWVe3V0dArncbNEgL1aAi1ooSQ8+Xps8ufFnqDp7HsquAj50p459XnPedv90uFFd6YnwiVkng9nNTAF+2Jo73+eKTt955Us25Chxvk72nAQsAZlt6NpfR+fF/Qs7jjMGSF6ucjkKbm0x5aCqCw6YknsoE1Rtn8Qz9tFxTmUzyCTNd7uRaxbswm7oHOdsM/Q69otjzqSIztlwgUh2M53LzgChQYx5RjYlrjcyAolRguJjpSq3LwZ5NEacm/W17bDOdaZL3y1977rSJrCxb7lmnHCOER5W0tsF9+XUGW1LMX69EWgFYdn5QNqFk6mcJsZWrR9dkehaQwjLPcv/29QcM+b5u/0goazCtwU=
/aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm
2016-03-22 01:07:47,111 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [HEAD]>
2016-03-22 01:07:47,111 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - INFO - Starting new HTTPS connection (1): aws-codedeploy-us-west-2.s3.amazonaws.com
2016-03-22 01:07:47,151 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "HEAD /latest/codedeploy-agent.noarch.rpm HTTP/1.1" 403 0
2016-03-22 01:07:47,151 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': '0mRvGge9ugu+KKyDmROm4jcTa1hAnA5Ax8vUlkKZXoJ//HVJAKxbpFHvOGaqiECa4sgon2F1kXw=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '6204CD88E880E5DD', 'date': 'Tue, 22 Mar 2016 01:07:46 GMT', 'content-type': 'application/xml'}
2016-03-22 01:07:47,152 - MainThread - botocore.parsers - DEBUG - Response body:
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f421075bcd0>
2016-03-22 01:07:47,152 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x7f4211085758>
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <awscli.errorhandler.ErrorHandler object at 0x7f421100cc90>
2016-03-22 01:07:47,152 - MainThread - awscli.errorhandler - DEBUG - HTTP Response Code: 403
2016-03-22 01:07:47,152 - MainThread - awscli.customizations.s3.s3handler - DEBUG - Exception caught during task execution: A client error (403) occurred when calling the HeadObject operation: Forbidden
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 100, in call
total_files, total_parts = self._enqueue_tasks(files)
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 178, in _enqueue_tasks
for filename in files:
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/fileinfobuilder.py", line 31, in call
for file_base in files:
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 142, in call
for src_path, extra_information in file_iterator:
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 314, in list_objects
yield self._list_single_object(s3_path)
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 343, in _list_single_object
response = self._client.head_object(**params)
File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 228, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 488, in _make_api_call
model=operation_model, context=request_context
File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/site-packages/awscli/errorhandler.py", line 70, in __call__
http_status_code=http_response.status_code)
ClientError: A client error (403) occurred when calling the HeadObject operation: Forbidden
2016-03-22 01:07:47,153 - Thread-1 - awscli.customizations.s3.executor - DEBUG - Received print task: PrintTask(message='A client error (403) occurred when calling the HeadObject operation: Forbidden', error=True, total_parts=None, warning=None)
A client error (403) occurred when calling the HeadObject operation: Forbidden
However, when I run it with the --no-sign-request option, it works perfectly:
aws --debug --no-sign-request s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .
Can someone please explain what is going on?
Solution 1:[1]
I figured it out. I had an error in the CloudFormation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the CodeDeploy buckets above were in a different region (not us-west-2). It seems the access policies on those buckets (owned by Amazon) only allow access from the region they belong to. When I fixed the error in my template (it was a wrong parameter map), the error disappeared.
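A quick way to confirm this kind of mismatch (a sketch, assuming IMDSv1 is still enabled on the instance) is to compare the instance's availability zone with the region encoded in the regional CodeDeploy bucket name (us-west-2 here):
# Availability zone of the instance you are running on (IMDSv1 metadata endpoint)
curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
If the reported zone does not start with us-west-2, the instance was launched in the wrong region for that bucket.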
Solution 2:[2]
In my case the problem was the Resource statement in the user access policy.
First we had "Resource": "arn:aws:s3:::BUCKET_NAME", but in order to have access to objects within a bucket you need a /* at the end:
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
From the AWS documentation:
Bucket access permissions specify which users are allowed access to the objects in a bucket and which types of access they have. Object access permissions specify which users are allowed access to the object and which types of access they have. For example, one user might have only read permission, while another might have read and write permissions.
Solution 3:[3]
Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD operation requires the ListBucket permission.
I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
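One way to see what your IAM policy actually grants (a sketch; the account ID, role name, bucket, and key are placeholders, and the call needs iam:SimulatePrincipalPolicy) is the policy simulator:
# Simulate the permissions a single-file cp needs: s3:GetObject for the HEAD/GET, s3:ListBucket for listing
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/my-ec2-role \
  --action-names s3:GetObject s3:ListBucket \
  --resource-arns arn:aws:s3:::mybucket arn:aws:s3:::mybucket/path/file
Check the EvalDecision field in the output for each action; note it only evaluates identity-based policies, so the bucket policy still has to be reviewed separately.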
Solution 4:[4]
Check the object owner if you copied the file from another AWS account.
In my case, I copied the file from another AWS account without an ACL, so the file's owner was the other AWS account, meaning the file still belonged to the origin account.
To fix it, copy or sync the S3 files with an ACL, for example:
aws s3 cp --acl bucket-owner-full-control s3://bucket1/key s3://bucket2/key
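To check who actually owns an object (a sketch with placeholder bucket/key; it requires READ_ACP access on the object), look at the Owner block returned by:
# Shows the object's owner and its ACL grants
aws s3api get-object-acl --bucket bucket2 --key key
If the owner is the other account, re-copying or syncing with --acl bucket-owner-full-control as above grants the bucket owner full control of the object.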
Solution 5:[5]
There could be a number of reasons. The most frustrating one: AWS S3 throws a 403 error when the specified object or file doesn't exist at the given location.
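So before digging into permissions, it can be worth listing the prefix (a sketch with placeholder names) to confirm the key really exists and is spelled exactly as expected:
# List the prefix the object is supposed to live under
aws s3 ls s3://mybucket/path/to/
Note that this listing itself needs s3:ListBucket, as described in Solution 24.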
Solution 6:[6]
In my case, I got this error trying to get an object from a folder in an S3 bucket, but the object wasn't there (I had the wrong folder), so S3 sent this message. Hope it helps you too.
Solution 7:[7]
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAllS3ActionsInUserFolder",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::your_bucket_name",
"arn:aws:s3:::your_bucket_name/*"
]
}
]
}
Adding both "arn:aws:s3:::your_bucket_name" and "arn:aws:s3:::your_bucket_name/*" to the policy configuration fixed the issue for me.
Solution 8:[8]
I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden for my AWS CLI copy command aws s3 cp s3://bucket/file file. I was using an IAM role which had full S3 access through an Inline Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
If I give it full S3 access from the Managed Policies instead, then the command works. I think this must be a bug on Amazon's side, because the policies in both cases were exactly the same.
Solution 9:[9]
One of the reasons for this could be that you are accessing a bucket in a region that requires Signature Version 4 signing. Try explicitly providing the region, e.g. --region cn-north-1.
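Besides passing --region on every call, the CLI can be told to use Signature Version 4 for S3 in its configuration (a sketch; these are standard aws configure set keys for the default profile):
# Force SigV4 for S3 and pin the region
aws configure set default.s3.signature_version s3v4
aws configure set default.region cn-north-1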
Solution 10:[10]
I've had this issue; adding --recursive to the command will help.
At this point it doesn't quite make sense, as you (like me) are only trying to copy a single file down, but it does the trick!
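The likely reason it works: with --recursive the CLI enumerates keys via a list operation instead of issuing a HeadObject on the single key, so the failing HEAD call never happens. A sketch, reusing the bucket and key from the question, that still downloads only that one file:
# Recursive copy of the prefix, filtered down to the single file
aws s3 cp s3://aws-codedeploy-us-west-2/latest/ . --recursive --exclude "*" --include "codedeploy-agent.noarch.rpm"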
Solution 11:[11]
The minimal permissions that worked for me when running HeadObject on any object in mybucket:
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
]
}
Solution 12:[12]
403 means "I know who you are, but you are not authorized to do what you are asking."
In my case, the problem was in a Policy: I didn't select the object when specifying the Policy in the Visual Editor.
Solution 13:[13]
It's a terrible practice to give away access to the entire S3 service (all actions, all buckets) just to unblock yourself.
The 403 error above is usually due to a missing "read" permission on the files. The action for reading a file in S3 is s3:GetObject.
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::mybucketname/path/*",
"arn:aws:s3:::mybucketname"
]
}
Solution 1: A new Policy in IAM (tell the Role/User about S3)
You can create a Policy (e.g. MY_S3_READER) with the following, and attach it to the user or role that's doing the job (e.g. the EC2 instance's IAM role).
Here is the exact JSON for your Policy (just replace mybucketname and path):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::mybucketname/path/*",
"arn:aws:s3:::mybucketname"
]
}
]
}
Create this Policy. Then, go to IAM > Roles > Attach Policy and attach it.
Solution 2: Edit Bucket Policy in S3 (tell S3 about the User/Role)
Go to your bucket in S3, then add the following example policy (replace mybucketname and myip):
{
"Version": "2012-10-17",
"Id": "SourceIP",
"Statement": [
{
"Sid": "ValidIpAllowRead",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::mybucketname",
"arn:aws:s3:::mybucketname/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": "myip/32"
}
}
}
]
}
If you want to grant this read permission by User or Role (instead of by IP address), remove the Condition part and change "Principal" to "Principal": { "AWS": "<IAM User/Role's ARN>" }.
Additional Notes
- Check the permissions via aws s3 cp or aws s3 ls manually for faster debugging.
- It sometimes takes up to 30 seconds for a permission change to take effect. Be patient.
- Note that for doing ls (e.g. aws s3 ls s3://mybucket/mypath) you need s3:ListBucket access.
- IMPORTANT: Accessing files by their HTTP(S) URL via cURL or similar tools (e.g. axios in AJAX calls) requires you to either grant IP access, supply the proper headers manually, or get a signed URL from the SDK first (see the sketch below).
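For the last point, a pre-signed URL can be produced straight from the CLI (a sketch with a placeholder bucket and path; the URL carries the permissions of the credentials that sign it):
# Generate a pre-signed GET URL valid for one hour
aws s3 presign s3://mybucketname/path/file --expires-in 3600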
Solution 14:[14]
I was getting this error message because my EC2 instance's clock was out of sync.
I was able to fix it on Ubuntu using:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
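On Amazon Linux 2 the time service is typically chrony rather than ntp, so the equivalent check and fix there (a sketch) would be along these lines:
# Inspect clock synchronisation status
timedatectl
sudo chronyc tracking
# Restart the time daemon if the clock has drifted
sudo systemctl restart chronyd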
Solution 15:[15]
I got this error with a misconfigured test event. I changed the source bucket's ARN but forgot to edit the default S3 bucket name.
I.e. make sure that in the bucket section of the test event both the ARN and bucket name are set correctly:
"bucket": {
"arn": "arn:aws:s3:::your_bucket_name",
"name": "your_bucket_name",
"ownerIdentity": {
"principalId": "EXAMPLE"
}
}
Solution 16:[16]
I also experienced that behaviour. In my case I found that if the IAM policy doesn't have access to read the object (s3:GetObject), the same error is raised.
I agree with you that the error raised by the AWS console & CLI is not really well explained and may cause confusion.
Solution 17:[17]
I had a Lambda function doing the same thing, copying from bucket to bucket.
The Lambda had permission to use the source bucket as a trigger (Configuration tab).
But it also needs permission to operate on the buckets (Permissions tab).
If S3 is not there, you need to edit the role used by the Lambda and add it (e.g. S3 full access).
Solution 18:[18]
I was getting a 403 on HEAD requests while the GET requests were working. It turned out to be the CORS config in the S3 permissions. I had to add HEAD as an allowed method:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>GET</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Solution 19:[19]
I have also experienced this scenario.
I had a bucket with a policy that uses AWS4-HMAC-SHA256. It turned out my awscli was not updated to the latest version; mine was aws-cli/1.10.8. Upgrading it solved the problem:
pip install awscli --upgrade --user
https://docs.aws.amazon.com/cli/latest/userguide/installing.html
Solution 20:[20]
If you are running in an environment where the credential/role is not clear, be sure to include the --profile=yourprofile flag so the CLI knows which credentials to use. For example:
aws s3 cp s3://yourbucket destination.txt --profile=yourprofile
will succeed, while the following yielded the HeadObject error:
aws s3 cp s3://yourbucket destination.txt
The profile settings reference entries in your config and credentials files.
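To see which credentials a given profile actually resolves to (a sketch; yourprofile is a placeholder):
# Show where the access key, secret, and region for the profile come from
aws configure list --profile yourprofile
# Confirm which principal those credentials authenticate as
aws sts get-caller-identity --profile yourprofile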
Solution 21:[21]
When it comes to cross-account S3 access:
An IAM user policy will not override the policy defined for the bucket in the foreign account.
s3:GetObject must be allowed for the accountA user as well as on the accountB bucket.
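A sketch of the bucket-side half, run by the bucket-owning account B (the account ID, user name, and bucket name are placeholders, and put-bucket-policy replaces any existing bucket policy):
# Grant the account-A principal read access on the account-B bucket and its objects
aws s3api put-bucket-policy --bucket accountb-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:user/accounta-user" },
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::accountb-bucket", "arn:aws:s3:::accountb-bucket/*"]
  }]
}'
The account-A user still needs a matching Allow in their own IAM policy, as noted above.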
Solution 22:[22]
I got this fixed by setting the system time correctly.
Ensure the AWS bucket region is right and that your system time is in sync with actual time (a skewed clock makes the request signature invalid).
Solution 23:[23]
When I faced this issue, I discovered that my problem was that the files in the 'Source Account' had been copied there by a 'third party' and the owner was not the Source Account.
I had to re-copy the objects onto themselves in the same bucket with --metadata-directive REPLACE.
There is a detailed explanation in the Amazon documentation.
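A sketch of that in-place re-copy (placeholder bucket and prefix; run it with credentials that can already read the objects, and add --acl bucket-owner-full-control when ownership/ACLs are the issue):
# Copy every object under the prefix onto itself, rewriting metadata (and optionally the ACL)
aws s3 cp s3://mybucket/prefix/ s3://mybucket/prefix/ --recursive --metadata-directive REPLACE --acl bucket-owner-full-control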
Solution 24:[24]
Permissions
You need the s3:GetObject permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error. If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
The following operation is related to HeadObject:
GetObject
Source: https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html
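That distinction gives a quick diagnostic (a sketch with placeholder names): issue the HEAD yourself and look at which error comes back.
# 404 Not Found  => the key is missing but you do have s3:ListBucket
# 403 Forbidden  => you lack s3:ListBucket (and possibly s3:GetObject) on the bucket
aws s3api head-object --bucket mybucket --key path/to/file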
Solution 25:[25]
Maybe this will help someone. In my case, I was running a CodeBuild job and the CodeBuild execution role had full access to S3. I was trying to list keys in an S3 bucket via the CLI. However, I was using the CLI from within the aws-cli Docker image and passing the credentials via environment variables per this article:
https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-versions
No matter what I tried, any calls using aws s3api ... would fail with that same 403 error. The solution for me was to convert to a normal s3 CLI call (as opposed to an s3api CLI call):
aws s3 ls s3://bucket/key --recursive
This change worked. The call using aws s3api did not.
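If you hit something similar, it is worth confirming which identity the containerised CLI is actually using (a sketch):
# Shows the account and ARN the current environment credentials resolve to
aws sts get-caller-identity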
Solution 26:[26]
The problem is in the permission policy given to the role that you are using. In case you are using AWS Glue, you need to create a policy with these permissions: https://docs.aws.amazon.com/glue/latest/dg/create-sagemaker-notebook-policy.html
This will solve "(403) occurred when calling the HeadObject operation: Forbidden".
Solution 27:[27]
If someone ends up here and is trying to implement the solution in Terraform, this is mine:
module "s3_files_label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=0.24.1"
namespace = var.client
environment = var.environment
attributes = ["backend-api-logs"]
}
module "s3_files" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "2.7.0"
bucket = module.s3_files_label.id
acl = "private"
cors_rule = jsonencode([
{
allowed_headers = ["*"]
allowed_methods = ["GET", "PUT", "POST"]
allowed_origins = ["https://${var.route53_domain}"]
expose_headers = []
max_age_seconds = 0
}
])
}
module "s3_user" {
source = "cloudposse/iam-system-user/aws"
version = "0.20.2"
namespace = var.client
environment = var.environment
name = "s3-user"
inline_policies_map = {
s3 = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"s3:ListBuckets",
"s3:ListBucket",
"s3:HeadObject",
"s3:GetObject"
],
"Resource": [
"${module.s3_files.s3_bucket_arn}",
"${module.s3_files.s3_bucket_arn}/*"
]
},
{
"Sid": "list",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": [
"*"
]
}
]
}
EOF
}
tags = module.config.tags
}
Solution 28:[28]
I had this problem, but the solution was different from all of the above: I was trying to hit the bucket via the transfer-accelerated endpoint, but transfer acceleration wasn't enabled on the S3 bucket.
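Two quick ways to reconcile that (a sketch with a placeholder bucket name): check whether acceleration is enabled on the bucket, or tell the CLI to stop using the accelerate endpoint.
# Shows Status: Enabled/Suspended (empty output if acceleration was never configured)
aws s3api get-bucket-accelerate-configuration --bucket mybucket
# Or stop routing CLI requests through the accelerate endpoint
aws configure set default.s3.use_accelerate_endpoint false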
Solution 29:[29]
In addition to other answers, the error can also be caused by missing permissions on the KMS key used for the SSE-KMS of the S3 bucket.
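To see whether SSE-KMS is in play (a sketch with a placeholder bucket name), check the bucket's default encryption and note the key; the caller then needs kms:Decrypt (and, for uploads, kms:GenerateDataKey) on that key in addition to the S3 permissions:
# Shows the default encryption algorithm and, for SSE-KMS, the KMS key ARN
aws s3api get-bucket-encryption --bucket mybucket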
Solution 30:[30]
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:HeadObject"
],
"Resource": [
"arn:aws:s3:::<your_bucket_name>",
"arn:aws:s3:::<your_bucket_name>/*"
],
"Effect": "Allow"
}
]
}
Assign this policy to an IAM role that is attached to your EC2 instance (or to whatever service is trying to access your S3 bucket).
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow