
Amazon Exam MLS-C01 Topic 2 Question 114 Discussion

Actual exam question for Amazon's MLS-C01 exam
Question #: 114
Topic #: 2
[All MLS-C01 Questions]

An online delivery company wants to choose the fastest courier for each delivery at the moment an order is placed. The company wants to implement this feature for existing users and new users of its application. Data scientists have trained separate XGBoost models for this purpose, and the models are stored in Amazon S3. There is one model for each city where the company operates.

The engineers are hosting these models on Amazon EC2 to respond to the web client requests, with one instance for each model, but the instances have only 5% CPU and memory utilization, and the operations engineers want to avoid managing unnecessary resources.

Which solution will enable the company to achieve its goal with the LEAST operational overhead?

Suggested Answer: B

The best solution for this scenario is to use a multi-model endpoint in Amazon SageMaker, which hosts multiple models on the same endpoint and loads them dynamically at runtime. This way, the company can reduce the operational overhead of managing multiple EC2 instances and model servers, and leverage the scalability, security, and performance of SageMaker hosting services. Because a single set of instances serves all the models, endpoint utilization improves and hosting costs drop compared with one under-utilized instance per model.

To use a multi-model endpoint, the company prepares a Docker container based on the open-source multi-model server, a framework-agnostic library that supports loading and serving multiple models from Amazon S3. The company then creates a multi-model endpoint in SageMaker pointing to the S3 prefix that contains all the models, and the web client invokes the endpoint at runtime, setting the TargetModel parameter according to the city of each request (a minimal invocation sketch follows the references below). This approach also lets the company add or remove models in the S3 bucket without redeploying the endpoint, and use different versions of the same model for different cities if needed.

References:

Use Docker containers to build models

Host multiple models in one container behind one endpoint

Multi-model endpoints using Scikit Learn

Multi-model endpoints using XGBoost
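
The following is a minimal sketch of the client-side invocation described above, assuming a hypothetical multi-model endpoint named "courier-ranking" whose artifacts sit under one S3 prefix as <city>.tar.gz; the real endpoint name, artifact naming, and feature encoding would come from the company's own setup. It shows how a single SageMaker endpoint can serve every city's XGBoost model by passing the TargetModel parameter at request time.

```python
import boto3

# SageMaker runtime client used by the web backend to call the endpoint.
runtime = boto3.client("sagemaker-runtime")

def order_to_csv(order):
    # Placeholder feature encoding: XGBoost models on SageMaker commonly
    # accept CSV rows of numeric features.
    return ",".join(str(v) for v in order["features"])

def predict_fastest_courier(order):
    # TargetModel picks which artifact under the endpoint's S3 prefix to load
    # and invoke, so one endpoint serves all per-city models.
    response = runtime.invoke_endpoint(
        EndpointName="courier-ranking",          # assumed endpoint name
        TargetModel=f"{order['city']}.tar.gz",   # e.g. "seattle.tar.gz" (assumed naming)
        ContentType="text/csv",
        Body=order_to_csv(order),
    )
    # The XGBoost container typically returns the prediction as plain text/CSV.
    return response["Body"].read().decode("utf-8")

print(predict_fastest_courier({"city": "seattle", "features": [3.2, 0.7, 12]}))
```

On the hosting side, the endpoint is created from a SageMaker model whose container sets Mode="MultiModel" and whose ModelDataUrl points to the S3 prefix holding all the city artifacts, which is what allows models to be added or removed in S3 without redeploying the endpoint.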


Contribute your Thoughts:

Tegan
13 days ago
I'm not a fan of the 'single instance for all models' approach in option C. That's just asking for trouble when demand increases. Give me the SageMaker goodness any day!
upvoted 0 times
...
Leontine
17 days ago
Now this is more like it! Separate SageMaker endpoints for each city, that's a clean and scalable solution. The client can just invoke the right endpoint based on the request.
upvoted 0 times
...
Emerson
19 days ago
Hmm, using a single EC2 instance to host all the models? That could become a bottleneck. Plus, the API Gateway integration adds unnecessary complexity.
upvoted 0 times
...
Evangelina
21 days ago
Option B with the multi-model server in SageMaker seems like a good fit. Centralized model management and real-time inference capabilities - sounds like the right balance of features.
upvoted 0 times
...
Alpha
21 days ago
That's a good point, Mollie. Option D could also be a great solution for the company.
upvoted 0 times
...
Mollie
27 days ago
I prefer option D. Having separate SageMaker endpoints for each city will ensure faster delivery times.
upvoted 0 times
...
Hailey
1 month ago
The SageMaker batch transform solution in option A sounds interesting, but it may not be suitable for real-time inference. We need a more responsive approach.
upvoted 0 times
Catalina
2 days ago
C) Keep only a single EC2 instance for hosting all the models. Install a model server in the instance and load each model by pulling it from Amazon S3. Integrate the instance with the web client using Amazon API Gateway for responding to the requests in real time, specifying the target resource according to the city of each request.
upvoted 0 times
...
Tracie
3 days ago
B) Prepare an Amazon SageMaker Docker container based on the open-source multi-model server. Remove the existing instances and create a multi-model endpoint in SageMaker instead, pointing to the S3 bucket containing all the models. Invoke the endpoint from the web client at runtime, specifying the TargetModel parameter according to the city of each request.
upvoted 0 times
...
Fidelia
18 days ago
A) Create an Amazon SageMaker notebook instance for pulling all the models from Amazon S3 using the boto3 library. Remove the existing instances and use the notebook to perform a SageMaker batch transform for performing inferences offline for all the possible users in all the cities. Store the results in different files in Amazon S3. Point the web client to the files.
upvoted 0 times
...
...
Graham
1 month ago
I agree with Alpha. Option B seems efficient and will help avoid managing unnecessary resources.
upvoted 0 times
...
Alpha
1 month ago
I think option B is the best choice. Using a multi-model endpoint in SageMaker will reduce operational overhead.
upvoted 0 times
...
