AWS Lambda Error: Unzipped size must be smaller than 262144000 bytes
I am developing a Lambda function that uses the ResumeParser library, written in Python 2.7. But when I deploy this function, including the library, to AWS, it throws the following error:
Unzipped size must be smaller than 262144000 bytes
Solution 1:[1]
Perhaps you did not exclude development packages, which made your package grow that big. In my case (for NodeJS) I was missing the following in my serverless.yml:
package:
  exclude:
    - node_modules/**
    - venv/**
See if there is something similar to exclude for Python in your case.
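To confirm the exclusions actually shrank the artifact, you can check the unzipped size of the built package against the limit before deploying. A minimal sketch in Python (the .serverless/my-service.zip path is just an example; point it at whatever archive your tooling produces):
import zipfile

LIMIT = 262144000  # Lambda's unzipped deployment package limit, in bytes

def unzipped_size(package_path):
    # Sum the uncompressed sizes of every entry in the archive.
    with zipfile.ZipFile(package_path) as zf:
        return sum(info.file_size for info in zf.infolist())

size = unzipped_size(".serverless/my-service.zip")  # example path
print(f"{size} bytes ({size / LIMIT:.0%} of the limit)")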
Solution 2:[2]
This is a hard limit which cannot be changed:
AWS Lambda Limit Errors
Functions that exceed any of the limits listed in the previous limits tables will fail with an exceeded limits exception. These limits are fixed and cannot be changed at this time. For example, if you receive the exception CodeStorageExceededException or an error message similar to "Code storage limit exceeded" from AWS Lambda, you need to reduce the size of your code storage.
You need to reduce the size of your package. If you have large binaries, place them in S3 and download them on bootstrap. Likewise for dependencies, you can pip install or easy_install them from an S3 location, which will be faster than pulling from the pip repos.
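As a rough sketch of the bootstrap-download approach (the bucket, key, and file names below are made up; boto3 is already available in the Lambda runtime), the handler downloads the large artifact from S3 into /tmp once per container, so warm invocations skip the download:
import os
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key -- replace with your own artifact locations.
ASSET_BUCKET = "my-lambda-assets"
MODEL_KEY = "models/resume-parser-model.bin"
LOCAL_PATH = "/tmp/resume-parser-model.bin"  # /tmp is the only writable path in Lambda

def _ensure_model():
    # Cold start: fetch the binary; warm invocations reuse the cached file.
    if not os.path.exists(LOCAL_PATH):
        s3.download_file(ASSET_BUCKET, MODEL_KEY, LOCAL_PATH)
    return LOCAL_PATH

def handler(event, context):
    model_path = _ensure_model()
    # ... load the model and do the actual parsing here ...
    return {"model": model_path}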
Solution 3:[3]
A workaround that worked for me: Install pyminifier:
pip install pyminifier
Go to the library folder that you want to zip. In my case I wanted to zip the site-packages folder in my virtual env, so I created a site-packages-min folder at the same level as site-packages. Run the following shell script to minify the Python files and create an identical structure in the site-packages-min folder. Zip these files and upload them to S3.
#!/bin/bash
# Minify every .py file under site-packages into a parallel
# site-packages-min tree; fall back to a plain copy if pyminifier fails.
for f in $(find site-packages -name '*.py')
do
    ori=$f
    res=${f/site-packages/site-packages-min}
    filename=$(echo "$res" | awk -F"/" '{print $NF}')
    echo "$filename"
    path=${res%$filename}
    mkdir -p "$path"
    touch "$res"
    pyminifier --destdir="$path" "$ori" >> "$res" || cp "$ori" "$res"
done
HTH
Solution 4:[4]
The best solution to this problem is to deploy your Lambda function using a Docker container that you've built and pushed to AWS ECR. Container images have a limit of 10 GB. Here's an example using Python-flavored AWS CDK:
from aws_cdk import aws_lambda as _lambda

self.lambda_from_image = _lambda.DockerImageFunction(
    scope=self,
    id="LambdaImageExample",
    function_name="LambdaImageExample",
    code=_lambda.DockerImageCode.from_image_asset(
        directory="lambda_funcs/LambdaImageExample"
    ),
)
An example Dockerfile, contained in the directory lambda_funcs/LambdaImageExample alongside my lambda_func.py and requirements.txt:
FROM amazon/aws-lambda-python:latest
LABEL maintainer="Wesley Cheek"
# Install build tools needed to compile any native dependencies.
RUN yum update -y && \
    yum install -y python3 python3-devel python3-pip gcc && \
    rm -Rf /var/cache/yum
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY lambda_func.py ./
# The handler is <module>.<function>, i.e. handler() in lambda_func.py.
CMD ["lambda_func.handler"]
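For reference, the handler that the CMD line points at could be as simple as this sketch of lambda_func.py (the real function would import the heavy dependencies listed in requirements.txt):
# lambda_func.py -- minimal handler sketch
import json

def handler(event, context):
    # Work that needs the large dependencies goes here.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }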
Run cdk deploy and the Lambda function will be automagically bundled into an image along with its dependencies specified in requirements.txt, pushed to an AWS ECR repository, and deployed.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
Solution | Source
---|---
Solution 1 | Greg Wozniak
Solution 2 | Community
Solution 3 | Prashant
Solution 4 | Wesley Cheek