Question 159 - MLS-C01 discussion


A Machine Learning Specialist previously trained a logistic regression model using scikit-learn on a local machine, and the Specialist now wants to deploy it to production for inference only.

What steps should be taken to ensure Amazon SageMaker can host a model that was trained locally?

A. Build the Docker image with the inference code. Tag the Docker image with the registry hostname and upload it to Amazon ECR.

B. Serialize the trained model so the format is compressed for deployment. Tag the Docker image with the registry hostname and upload it to Amazon S3.

C. Serialize the trained model so the format is compressed for deployment. Build the image and upload it to Docker Hub.

D. Build the Docker image with the inference code. Configure Docker Hub and upload the image to Amazon ECR.
Suggested answer: A

Explanation:

To deploy a model that was trained locally to Amazon SageMaker, the steps are:

Build the Docker image with the inference code. The inference code should include the model loading, data preprocessing, prediction, and postprocessing logic. The Docker image should also include the dependencies and libraries required by the inference code and the model.
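SageMaker's bring-your-own-container contract expects the image to serve HTTP on port 8080 with a /ping health-check route and an /invocations prediction route. The sketch below uses only the standard library; the model path /opt/ml/model/model.pkl and the CSV request format are assumptions, not part of the original question:

```python
# serve.py -- minimal inference-server sketch for a custom SageMaker container.
# Assumes the scikit-learn model was pickled to /opt/ml/model/model.pkl and
# that requests arrive as CSV rows of numeric features (both are assumptions).
import json
import pickle
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL_PATH = "/opt/ml/model/model.pkl"


def predict_csv(model, payload: str) -> list:
    """Parse CSV rows into float features and return the model's predictions."""
    rows = [
        [float(v) for v in line.split(",")]
        for line in payload.strip().splitlines()
        if line.strip()
    ]
    return list(model.predict(rows))


class Handler(BaseHTTPRequestHandler):
    model = None  # loaded once before the server starts

    def do_GET(self):
        # Health check: SageMaker pings this route before routing traffic.
        status = 200 if self.path == "/ping" else 404
        self.send_response(status)
        self.end_headers()

    def do_POST(self):
        if self.path != "/invocations":
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length).decode("utf-8")
        body = json.dumps({"predictions": predict_csv(self.model, payload)}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def serve(model_path: str = MODEL_PATH) -> None:
    """Entry point the container's ENTRYPOINT would call inside SageMaker."""
    with open(model_path, "rb") as f:
        Handler.model = pickle.load(f)
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

In practice the Dockerfile would install scikit-learn, copy this script in, and set it as the entry point so the container starts serving when SageMaker launches it.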

Tag the Docker image with the registry hostname and upload it to Amazon ECR. Amazon ECR is a fully managed container registry that makes it easy to store, manage, and deploy container images. The registry hostname is the Amazon ECR registry URI for your account and Region. You can use the AWS CLI or the Amazon ECR console to tag and push the Docker image to Amazon ECR.
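The tag-and-push step can be sketched as follows; the account ID, Region, and repository names are placeholders, and the helper assumes you have already authenticated Docker to ECR (e.g., with `aws ecr get-login-password`):

```python
# Sketch of tagging a local image with the ECR registry hostname and pushing it.
# Account ID, Region, and repository/tag values are placeholders.
import subprocess


def ecr_image_uri(account_id: str, region: str, repo: str, tag: str = "latest") -> str:
    """Compose an ECR image URI: <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"


def tag_and_push(local_image: str, uri: str) -> None:
    """Tag the local image with the registry URI and push it to Amazon ECR."""
    # Requires prior docker login to the ECR registry.
    subprocess.run(["docker", "tag", local_image, uri], check=True)
    subprocess.run(["docker", "push", uri], check=True)
```

Example usage: `tag_and_push("sklearn-inference:latest", ecr_image_uri("123456789012", "us-east-1", "sklearn-inference"))`.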

Create a SageMaker model entity that points to the Docker image in Amazon ECR and the model artifacts in Amazon S3. The model entity is a logical representation of the model that contains the information needed to deploy the model for inference. The model artifacts are the files generated by the model training process, such as the model parameters and weights; because this model was trained locally, the serialized model file must first be packaged (typically as a model.tar.gz archive) and uploaded to Amazon S3. You can use the AWS CLI, the SageMaker Python SDK, or the SageMaker console to create the model entity.
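Creating the model entity can be sketched with the boto3 `create_model` call; the image URI, S3 artifact path, and IAM role ARN below are placeholders:

```python
# Sketch of creating the SageMaker model entity with boto3.
# All names, ARNs, and URIs are placeholders for illustration.
CREATE_MODEL_REQUEST = {
    "ModelName": "sklearn-logreg",
    "PrimaryContainer": {
        # Inference image previously pushed to Amazon ECR.
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/sklearn-inference:latest",
        # Locally trained model artifacts, packaged as a .tar.gz and uploaded to S3.
        "ModelDataUrl": "s3://my-bucket/models/model.tar.gz",
    },
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
}


def create_model():
    """Register the model entity with SageMaker (requires AWS credentials)."""
    import boto3  # imported here so the sketch is importable without boto3

    boto3.client("sagemaker").create_model(**CREATE_MODEL_REQUEST)
```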

Create an endpoint configuration that specifies the instance type and number of instances to use for hosting the model. The endpoint configuration also defines the production variants, which are the different versions of the model that you want to deploy. You can use the AWS CLI, the SageMaker Python SDK, or the SageMaker console to create the endpoint configuration.
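An endpoint configuration with a single production variant might look like this; the names and instance type are placeholders:

```python
# Sketch of an endpoint configuration with one production variant.
# Names and the instance type are placeholders.
ENDPOINT_CONFIG_REQUEST = {
    "EndpointConfigName": "sklearn-logreg-config",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "sklearn-logreg",  # must match the model entity's name
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }
    ],
}


def create_endpoint_config():
    """Create the endpoint configuration (requires AWS credentials)."""
    import boto3  # imported here so the sketch is importable without boto3

    boto3.client("sagemaker").create_endpoint_config(**ENDPOINT_CONFIG_REQUEST)
```

Adding more entries to ProductionVariants (with weights) is how multiple model versions share traffic behind one endpoint.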

Create an endpoint that uses the endpoint configuration to deploy the model. The endpoint is a web service that exposes an HTTP API for inference requests. You can use the AWS CLI, the SageMaker Python SDK, or the SageMaker console to create the endpoint.
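The final step, and a sample invocation once the endpoint is InService, can be sketched as below; the endpoint name and CSV payload are placeholders:

```python
# Sketch of creating the endpoint and sending an inference request.
# Endpoint/config names and the CSV payload are placeholders.
CREATE_ENDPOINT_REQUEST = {
    "EndpointName": "sklearn-logreg-endpoint",
    "EndpointConfigName": "sklearn-logreg-config",
}


def deploy_and_invoke() -> bytes:
    """Create the endpoint, wait for InService, then invoke it (requires AWS credentials)."""
    import boto3  # imported here so the sketch is importable without boto3

    sm = boto3.client("sagemaker")
    sm.create_endpoint(**CREATE_ENDPOINT_REQUEST)
    sm.get_waiter("endpoint_in_service").wait(
        EndpointName=CREATE_ENDPOINT_REQUEST["EndpointName"]
    )

    # Inference goes through the separate SageMaker runtime client.
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=CREATE_ENDPOINT_REQUEST["EndpointName"],
        ContentType="text/csv",
        Body="1.0,2.0,3.0",  # placeholder feature row
    )
    return resp["Body"].read()
```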

References:

AWS Machine Learning Specialty Exam Guide

AWS Machine Learning Training - Deploy a Model on Amazon SageMaker

AWS Machine Learning Training - Use Your Own Inference Code with Amazon SageMaker Hosting Services

asked 16/09/2024
Ajayi Johnson