
Google Exam Professional Machine Learning Engineer Topic 3 Question 99 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 99
Topic #: 3

You are developing an image recognition model using PyTorch based on the ResNet50 architecture. Your code works fine on your local laptop on a small subsample. Your full dataset has 200k labeled images. You want to quickly scale your training workload while minimizing cost. You plan to use 4 V100 GPUs. What should you do? (Choose the correct answer and give a reference and explanation.)

Suggested Answer: C

A Vertex AI Workbench instance is a managed JupyterLab environment that can be provisioned with accelerators such as NVIDIA V100 GPUs. Because your PyTorch ResNet50 code already runs on a small subsample on your laptop, you can move it to a Workbench instance with 4 V100 GPUs with minimal changes and train on the full dataset of 200k labeled images. Vertex AI provisions and manages the underlying infrastructure for you, so you can scale the workload quickly, and you pay only while the instance is running, which helps minimize cost. The related approach discussed in the comments, packaging your code and submitting it as a Vertex AI custom training job on a custom machine configuration ("custom tier") that contains the required GPUs, scales the same workload as a fully managed batch job; a hedged sketch of that path follows the references below. Reference:

Introduction to Vertex AI Workbench | Google Cloud

Configure compute resources for custom training | Vertex AI
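
For concreteness, here is a minimal sketch of the job-based path using the google-cloud-aiplatform Python SDK. It submits a training script to Vertex AI on a single machine with 4 V100 GPUs; the project ID, region, bucket, script name, and container tag are illustrative assumptions, not values from the question.

# Minimal sketch: launch a Vertex AI custom training job on 4 V100 GPUs.
# Assumes `pip install google-cloud-aiplatform`, a project with Vertex AI
# enabled, and a local task.py that trains the PyTorch ResNet50 model.
# All names below (project, bucket, script, container tag) are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",             # hypothetical project ID
    location="us-central1",           # pick a region with V100 availability
    staging_bucket="gs://my-bucket",  # hypothetical staging bucket
)

job = aiplatform.CustomTrainingJob(
    display_name="resnet50-training",
    script_path="task.py",            # your existing training script
    # Pre-built PyTorch GPU training container; the exact tag varies by release.
    container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-13:latest",
)

job.run(
    machine_type="n1-standard-16",
    accelerator_type="NVIDIA_TESLA_V100",
    accelerator_count=4,              # the 4 V100s from the question
    replica_count=1,                  # single node, multi-GPU
)

Because the job tears down its resources when training finishes, you are billed only for the training run itself, which is what makes this path cost-effective compared with keeping a GPU VM running.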


Contribute your Thoughts:

Norah
2 months ago
Wait, I can use my refrigerator as a GPU? *scratches head* Nah, that's probably not a good idea. Time to get serious and go with Option B. Gotta package that code up and let Vertex AI handle the heavy lifting!
upvoted 0 times
Hyun
23 days ago
That's right, Option B is the best choice for scaling your training workload efficiently.
upvoted 0 times
...
Nicholle
1 month ago
Option B sounds like the way to go. Package your code and let Vertex AI do the heavy lifting.
upvoted 0 times
...
Ronny
1 month ago
Yeah, using your refrigerator as a GPU is definitely not a good idea.
upvoted 0 times
...
...
Rhea
2 months ago
Hmm, I'm not sure I trust any of these options. I think I'll just train my model on my local laptop and hope it scales eventually. Who needs fancy cloud infrastructure anyway? *laughs nervously*
upvoted 0 times
Carmelina
20 days ago
Training on your local laptop may work for small datasets, but for 200k labeled images, utilizing cloud resources like V100 GPUs will definitely improve your model's performance and efficiency.
upvoted 0 times
...
Eva
29 days ago
Using cloud infrastructure like Vertex AI can greatly speed up your training process and save you time in the long run. It's worth considering!
upvoted 0 times
...
Jessenia
2 months ago
Option A seems like the best choice for scaling your training workload with minimal cost. You should configure a Compute Engine VM with the necessary dependencies and use Vertex AI with a custom tier containing the required GPUs.
upvoted 0 times
...
...
Lashandra
2 months ago
Option D is intriguing, but it sounds a bit more complex than I'd like to deal with. Setting up a GKE cluster and submitting a TFJob operator seems like a lot of work. I'm more interested in a simpler, managed solution like Vertex AI.
upvoted 0 times
...
Ernestine
2 months ago
I'm leaning towards Option C. Creating a Vertex AI Workbench instance with the required GPUs sounds like a great way to get my model training up and running quickly. Plus, I don't have to worry about managing the infrastructure myself.
upvoted 0 times
Theodora
9 hours ago
Absolutely, Vertex AI Workbench provides a hassle-free way to scale your training workload efficiently.
upvoted 0 times
...
Melvin
13 days ago
It's definitely a convenient option. Plus, you can focus more on optimizing your model rather than setting up the environment.
upvoted 0 times
...
Bev
16 days ago
I agree, managing the infrastructure myself can be time-consuming. Vertex AI Workbench takes care of that.
upvoted 0 times
...
Asha
1 month ago
Option C seems like a good choice. Using Vertex AI Workbench with 4 V100 GPUs can speed up training.
upvoted 0 times
...
...
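
Whichever environment hosts the run (a Workbench notebook, a custom training job, or GKE), the PyTorch side of using all 4 V100s on one machine is typically torch.nn.parallel.DistributedDataParallel launched with torchrun. The following is a minimal, hedged sketch; the data path, transforms, and hyperparameters are illustrative placeholders, not part of the original question.

# Minimal single-node, 4-GPU DistributedDataParallel sketch.
# Launch with: torchrun --nproc_per_node=4 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, models, transforms

def main():
    # torchrun sets LOCAL_RANK for each of the 4 per-GPU processes.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = models.resnet50(num_classes=1000).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Hypothetical ImageFolder layout for the 200k labeled images.
    dataset = datasets.ImageFolder("/data/train", transform=transform)
    sampler = DistributedSampler(dataset)  # shards data across processes
    loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=4)

    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    for epoch in range(10):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for images, labels in loader:
            images = images.cuda(local_rank, non_blocking=True)
            labels = labels.cuda(local_rank, non_blocking=True)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()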
Lauran
2 months ago
Option A seems like the easiest way to scale my training workload. I can simply configure a Compute Engine VM with the necessary dependencies and use Vertex AI to train my model. Definitely the fastest and most cost-effective solution.
upvoted 0 times
Arlette
1 month ago
A) Configure a Compute Engine VM with all the dependencies that launches the training. Train your model with Vertex AI using a custom tier that contains the required GPUs.
upvoted 0 times
...
...
Jesusita
2 months ago
I'm not sure, but I think option D could also be a valid choice. Creating a GKE cluster with a node pool that has 4 V100 GPUs might be a good solution too.
upvoted 0 times
...
Paul
3 months ago
I agree with Robt. Option A seems like the most efficient way to scale the training workload while minimizing cost.
upvoted 0 times
...
Robt
3 months ago
I think the correct answer is A. It involves configuring a Compute Engine VM with the necessary dependencies and using Vertex AI with the required GPUs.
upvoted 0 times
...
