Approach #1: Deploying a model stored on Amazon S3
Deploying an ML model as a Python pickle file in an Amazon S3 bucket and serving it through a Lambda-backed API keeps model deployment simple, scalable, and cost-effective. We configure AWS Lambda to load the model from S3 when needed, enabling quick predictions without a dedicated server. When a client calls the API connected to the Lambda function, the model is fetched, run, and its predictions are returned for the input data. This serverless setup provides high availability, scales automatically, and saves costs because you pay only when the API is used.
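To make the flow concrete, here is a minimal sketch of such a Lambda handler. The bucket and key names (read from hypothetical MODEL_BUCKET and MODEL_KEY environment variables), and the assumption that the request arrives via an API Gateway proxy integration with a JSON body containing a "features" list, are illustrative choices, not part of the original setup:

```python
import json
import os
import pickle

# Hypothetical configuration - set these on the Lambda function.
BUCKET = os.environ.get("MODEL_BUCKET", "my-model-bucket")
KEY = os.environ.get("MODEL_KEY", "model.pkl")

_model = None  # cached so warm invocations skip the S3 download


def _get_model():
    """Fetch and unpickle the model from S3 on first use."""
    global _model
    if _model is None:
        import boto3  # imported lazily to keep cold-start imports cheap
        obj = boto3.client("s3").get_object(Bucket=BUCKET, Key=KEY)
        _model = pickle.loads(obj["Body"].read())
    return _model


def parse_features(event):
    # API Gateway's proxy integration delivers the payload as a JSON string
    # in event["body"]; wrap the feature vector for a single-row prediction.
    body = json.loads(event["body"])
    return [body["features"]]


def lambda_handler(event, context):
    prediction = _get_model().predict(parse_features(event))
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": list(prediction)}),
    }
```

Caching the unpickled model in a module-level variable means only the first (cold) invocation pays the S3 round trip; subsequent warm invocations reuse it.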
Step 1. Create a zip archive for the Lambda layer
A Lambda layer is a zip archive that contains libraries, a custom runtime, or other dependencies. I will demonstrate creating a Lambda layer with two Python libraries often used in ML models, Pandas and Scikit-learn. Below is a script that builds such a layer zip archive using Docker. Create a file named createlayer.sh and copy the code below into it.
#!/bin/bash
if [ "$1" != "" ]; then
    echo "Creating layer compatible with python version $1"
    # Build the packages inside a Lambda-like container so compiled wheels match the runtime
    docker run -v "$PWD":/var/task "lambci/lambda:build-python$1" /bin/sh -c "pip install -r requirements.txt -t python/lib/python$1/site-packages/; exit"
    zip -r sklearn_pandas_layer.zip python > /dev/null
    rm -r python
    echo "Done creating layer!"
    ls -lah sklearn_pandas_layer.zip
else
    echo "Enter python version as argument - ./createlayer.sh 3.6"
fi
Now, in the same directory, create a file named requirements.txt to store the names and pinned versions of the libraries in the layer. In this case, our requirements.txt file lists the versions of Pandas and Scikit-learn we're using.
pandas==0.23.4
scikit-learn==0.20.3
Next, in the terminal, navigate to the directory containing the createlayer.sh and requirements.txt files and run the script with the target Python version as its argument (the versions pinned above work with Python 3.6) to generate the Lambda layer zip file.
chmod +x createlayer.sh
./createlayer.sh 3.6