AWS Glue Crawler Not Creating Table
I have a crawler I created in AWS Glue that does not create a table in the Data Catalog after it successfully completes.
The crawler takes roughly 20 seconds to run and the logs show it successfully completed. CloudWatch log shows:
- Benchmark: Running Start Crawl for Crawler
- Benchmark: Classification Complete, writing results to DB
- Benchmark: Finished writing to Catalog
- Benchmark: Crawler has finished running and is in ready state
I am at a loss as to why the tables in the data catalog are not being created. AWS Docs are not of much help debugging.
Solution 1:[1]
Check the IAM role associated with the crawler; most likely it does not have the correct permissions.
When you create the crawler and let it create an IAM role (the default setting), the generated policy grants access only to the S3 object path you specified at that time. If you later edit the crawler and change only the S3 path, the role associated with the crawler will not have permission to the new S3 path.
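One way to check whether the auto-created role still covers a changed path is to match the policy's Resource ARNs against the object ARNs the crawler needs; for this rough purpose, IAM's `*` and `?` wildcards behave like shell globs. A minimal offline sketch (bucket names and paths are hypothetical):

```python
import fnmatch

def policy_covers(policy: dict, action: str, object_arn: str) -> bool:
    """Rough check: does any Allow statement grant `action` on `object_arn`?
    IAM's * and ? wildcards are approximated with shell-style globbing."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(fnmatch.fnmatch(action, a) for a in actions) and \
           any(fnmatch.fnmatch(object_arn, r) for r in resources):
            return True
    return False

# Policy as auto-generated for the original crawl path only (hypothetical names):
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::old-bucket/data/*"],
    }],
}

print(policy_covers(policy, "s3:GetObject", "arn:aws:s3:::old-bucket/data/x.csv"))  # True
print(policy_covers(policy, "s3:GetObject", "arn:aws:s3:::new-bucket/data/x.csv"))  # False
```

The second check fails because the auto-generated resource pattern never mentions the new bucket, which is exactly the silent failure described above.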
Solution 2:[2]
I had the same issue. As advised by others, I tried to revise the existing IAM role to include the new S3 bucket as a resource, but for some reason that did not work. I then created a completely new role from scratch, and this time it worked. One big question I have for AWS is: why does this access-denied error, caused by a wrong attached IAM policy, not show up in the CloudWatch log? That makes it difficult to debug.
Solution 3:[3]
If you have existing tables in the target database, the crawler may associate your new files with an existing table rather than create a new one.
This occurs when there are similarities in the data, or a folder structure that Glue interprets as partitioning.
Also, on occasion I have needed to refresh the table listing of a database to get new tables to show up.
Solution 4:[4]
You can try excluding some files in the S3 bucket; the excluded files should then appear in the log. I find this helpful for debugging what the crawler is doing.
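If you manage the crawler through the API rather than the console, exclude patterns live on the crawler's S3 target. A sketch of the shape (crawler and bucket names are hypothetical; the actual update would go through boto3's `update_crawler`):

```python
# Exclusions use Glue's glob syntax; excluded files are listed in the crawl log,
# which helps confirm which objects the crawler is actually visiting.
targets = {
    "S3Targets": [{
        "Path": "s3://my-bucket/data/",          # hypothetical bucket/prefix
        "Exclusions": ["**/*.tmp", "debug/**"],  # skip temp files and a debug prefix
    }]
}

# To apply (requires AWS credentials):
#   import boto3
#   boto3.client("glue").update_crawler(Name="my-crawler", Targets=targets)
print(targets["S3Targets"][0]["Exclusions"])
```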
Solution 5:[5]
I had a similar IAM issue to the one Ray mentioned. In my case, I did not add an asterisk (*) after the bucket name in the policy resource, which meant the crawler could not read the subfolders, and no table was created.
Wrong:
{
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucket-name"
]
}
],
"Version": "2012-10-17"
}
Correct:
{
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucket-name*"
]
}
],
"Version": "2012-10-17"
}
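The trailing asterisk works because `bucket-name*` matches both the bucket ARN and every object ARN under it. A tighter pattern that many S3 policies use instead is to grant bucket-level actions on the bare bucket ARN and object-level actions on `bucket-name/*`; a sketch of that shape (bucket name hypothetical):

```python
import json

# Split statements: bucket-level actions on the bucket ARN,
# object-level actions on everything under it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": ["arn:aws:s3:::bucket-name"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::bucket-name/*"],
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Unlike `bucket-name*`, this version cannot accidentally match a differently named bucket such as `bucket-name-backup`.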
Solution 6:[6]
Here is a sample IAM policy JSON that allows Glue to access S3 and create a table.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"ec2:DeleteTags",
"ec2:CreateTags"
],
"Resource": [
"arn:aws:ec2:*:*:instance/*",
"arn:aws:ec2:*:*:security-group/*",
"arn:aws:ec2:*:*:network-interface/*"
],
"Condition": {
"ForAllValues:StringEquals": {
"aws:TagKeys": "aws-glue-service-resource"
}
}
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"cloudwatch:PutMetricData",
"ec2:DeleteNetworkInterface",
"s3:ListBucket",
"s3:GetBucketAcl",
"logs:PutLogEvents",
"ec2:DescribeVpcAttribute",
"glue:*",
"ec2:DescribeSecurityGroups",
"ec2:CreateNetworkInterface",
"s3:GetObject",
"s3:PutObject",
"logs:CreateLogStream",
"s3:ListAllMyBuckets",
"ec2:DescribeNetworkInterfaces",
"logs:AssociateKmsKey",
"ec2:DescribeVpcEndpoints",
"iam:ListRolePolicies",
"s3:DeleteObject",
"ec2:DescribeSubnets",
"iam:GetRolePolicy",
"s3:GetBucketLocation",
"ec2:DescribeRouteTables"
],
"Resource": "*"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": "s3:CreateBucket",
"Resource": "arn:aws:s3:::aws-glue-*"
},
{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "*"
}
]
}
Solution 7:[7]
In my case, the problem was in the setting Crawler source type > Repeat crawls of S3 data stores
, which I've set to Crawl new folders only
, because I thought it will crawl everything for the first run, and then continue to discover only new data.
After setting it to Crawl all folders
it discovered all tables.
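In the API, this console setting corresponds to the crawler's `RecrawlPolicy`. The sketch below shows the value that switches a crawler back to crawling everything (the crawler name is hypothetical):

```python
# Console "Crawl new folders only" maps to CRAWL_NEW_FOLDERS_ONLY;
# console "Crawl all folders" maps to CRAWL_EVERYTHING.
recrawl_policy = {"RecrawlBehavior": "CRAWL_EVERYTHING"}

# To apply (requires AWS credentials):
#   import boto3
#   boto3.client("glue").update_crawler(Name="my-crawler", RecrawlPolicy=recrawl_policy)
print(recrawl_policy["RecrawlBehavior"])
```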
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Ray |
| Solution 2 | |
| Solution 3 | Kris Bravo |
| Solution 4 | cozyss |
| Solution 5 | user2210411 |
| Solution 6 | Dheeraj |
| Solution 7 | astef |