Deploying Huggingface model for inference - pytorch-scatter issues
It's my first time with SageMaker, and I'm having issues when trying to execute this script, which I took from the deploy tab of this Hugging Face model:
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID': 'google/tapas-base-finetuned-wtq',
    'HF_TASK': 'table-question-answering'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.6.1',
    pytorch_version='1.7.1',
    py_version='py36',
    env=hub,
    role=role,
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,      # number of instances
    instance_type='ml.m5.xlarge'   # ec2 instance type
)

predictor.predict({
    'inputs': {
        "query": "How many stars does the transformers repository have?",
        "table": {
            "Repository": ["Transformers", "Datasets", "Tokenizers"],
            "Stars": ["36542", "4512", "3934"],
            "Contributors": ["651", "77", "34"],
            "Programming language": [
                "Python",
                "Python",
                "Rust, Python and NodeJS",
            ],
        }
    }
})
The error comes when calling predict():
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from model with message "{
"code": 400,
"type": "InternalServerException",
"message": "\nTapasModel requires the torch-scatter library but it was not found in your environment. You can install it with pip as\nexplained here: https://github.com/rusty1s/pytorch_scatter.\n"
}
I've installed many versions of torch-scatter, but the error is always the same, or the logs get even worse.
This guide from Hugging Face says the conda_pytorch_p36 kernel is needed, but it doesn't work in any kernel; I always get the error log attached above.
Other models work properly, but this one fails no matter what combination of versions I try.
Solution 1:
I think torch-scatter does not come pre-installed on the SageMaker images.
"I've installed many versions of torch-scatter, but the error is always the same, or the logs get even worse."
Did you run those installations on your local machine? The model is executed on the SageMaker image, so you need to prepare an image with the necessary installations and then use that image. The docs are here.
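As a rough sketch of what that looks like in practice (my own addition, not spelled out in the answer): build an image on top of the prebuilt Hugging Face inference container with `pip install torch-scatter` added, push it to your own ECR repository, and pass its URI to HuggingFaceModel via the image_uri parameter. The ECR URI below is a placeholder, and the exact base image/tag you extend is an assumption you would need to check against the AWS Deep Learning Containers list; once image_uri is supplied, the framework version arguments should no longer be needed.

from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# Placeholder ECR URI: an image you built on top of the Hugging Face
# inference container with `pip install torch-scatter` baked in, then
# pushed to your own ECR repository. Account id, region and tag are
# made up for illustration.
custom_image_uri = '111122223333.dkr.ecr.us-east-1.amazonaws.com/hf-tapas-inference:torch-scatter'

huggingface_model = HuggingFaceModel(
    image_uri=custom_image_uri,   # use the custom image instead of the default DLC
    env={
        'HF_MODEL_ID': 'google/tapas-base-finetuned-wtq',
        'HF_TASK': 'table-question-answering',
    },
    role=role,
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.xlarge',
)

An alternative that avoids building an image (also an assumption on my part): package the model into a model.tar.gz that includes a code/requirements.txt listing torch-scatter, upload it to S3, and pass that S3 URI as model_data so the dependency is installed when the endpoint starts.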
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | max_x_x |