
Amazon Exam AIF-C01 Topic 2 Question 23 Discussion

Actual exam question for Amazon's AIF-C01 exam
Question #: 23
Topic #: 2

A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.

The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.

Which solution will meet these requirements?

A. Use Amazon SageMaker Serverless Inference to deploy the model and serve predictions.
B. Use Amazon CloudFront to deploy the model.
C. Use Amazon API Gateway to host the model and serve predictions.
D. Use AWS Batch to host the model and serve predictions.

Suggested Answer: A

Amazon SageMaker Serverless Inference is the correct solution: it lets the company deploy the ML model to production so the web application can call it for predictions, without the company managing any of the underlying infrastructure.

SageMaker Serverless Inference provides a fully managed environment for deploying machine learning models. It automatically provisions, scales, and manages the compute needed to host the model, so there are no servers or capacity for the company to manage.
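As a concrete illustration, here is a minimal deployment sketch using the SageMaker Python SDK. The container image URI, S3 model artifact path, IAM role ARN, and endpoint name are placeholders, not values given in the question.

```python
# Minimal sketch: deploying a trained image-classification model with
# SageMaker Serverless Inference (SageMaker Python SDK v2).
# The image URI, S3 path, role ARN, and endpoint name are hypothetical.
import sagemaker
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

session = sagemaker.Session()

model = Model(
    image_uri="<inference-container-image-uri>",          # serving container
    model_data="s3://example-bucket/model/model.tar.gz",  # trained artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    sagemaker_session=session,
)

# The serverless config is the only capacity decision the company makes:
# a memory size and a cap on concurrent invocations. No instance types,
# no instance counts, no Auto Scaling policies.
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,  # 1024-6144 MB, in 1 GB increments
    max_concurrency=5,       # maximum concurrent invocations
)

model.deploy(
    serverless_inference_config=serverless_config,
    endpoint_name="image-classifier-serverless",  # placeholder name
)
```

Note that nothing in this sketch names an instance type or instance count; SageMaker decides how to provision and scale the compute behind the endpoint.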

Why Option A is Correct:

No Infrastructure Management: SageMaker Serverless Inference handles all infrastructure management for deploying and serving ML models. The company simply provides the model and a serverless configuration (a memory size and a concurrency limit, as in the sketch above), and SageMaker handles the rest.

Cost-Effectiveness: The serverless inference option is ideal for applications with intermittent or unpredictable traffic, as the company only pays for the compute time consumed while handling requests.

Integration with Web Applications: The deployed model is exposed as an HTTPS endpoint that web applications can invoke to get predictions (see the sketch below), making it an ideal choice for hosting the model and serving predictions.
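To make the integration concrete, here is a sketch of a web backend calling the endpoint with the boto3 SageMaker Runtime client. The endpoint name, image file, and content type are assumptions; the content type in particular must match whatever the model's inference handler expects.

```python
# Minimal sketch: a web backend invoking the serverless endpoint via boto3.
# The endpoint name, file path, and content type are hypothetical.
import boto3

runtime = boto3.client("sagemaker-runtime")

# Read the image the user uploaded to the web application.
with open("example.jpg", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="image-classifier-serverless",  # endpoint from the deploy step
    ContentType="application/x-image",           # must match the model's handler
    Body=payload,
)

# The response body is a stream; for an image classifier this is typically
# a class label or a list of class probabilities.
print(response["Body"].read().decode("utf-8"))
```

In practice this call usually sits behind the application's own backend (or behind API Gateway and Lambda), but the hosting and scaling of the model itself remain entirely inside SageMaker.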

Why Other Options are Incorrect:

B. Use Amazon CloudFront to deploy the model: CloudFront is a content delivery network (CDN) service for distributing content, not for deploying ML models or serving predictions.

C. Use Amazon API Gateway to host the model and serve predictions: API Gateway is used for creating, deploying, and managing APIs, but it does not provide the infrastructure or the required environment to host and run ML models.

D. Use AWS Batch to host the model and serve predictions: AWS Batch is designed for running batch computing workloads and is not optimized for real-time inference or hosting machine learning models.

Thus, A is the correct answer, as it aligns with the requirement of deploying an ML model without managing any underlying infrastructure.

