Google Exam Professional Machine Learning Engineer Topic 3 Question 88 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 88
Topic #: 3

You need to train a computer vision model that predicts the type of government ID present in a given image using a GPU-powered virtual machine on Compute Engine. You use the following parameters:

* Optimizer: SGD

* Image shape: 224x224

* Batch size: 64

* Epochs: 10

* Verbose: 2

During training you encounter the following error: ResourceExhaustedError: Out Of Memory (OOM) when allocating tensor. What should you do?

A) Change the optimizer
B) Reduce the batch size
C) Change the learning rate
D) Reduce the image shape
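
For orientation, here is a minimal sketch of how the parameters above might map onto a training call, assuming a tf.keras workflow; the toy architecture and the placeholder arrays are illustrative assumptions, not part of the question.

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the real ID images (assumption).
x_train = np.zeros((256, 224, 224, 3), dtype="float32")
y_train = np.zeros((256,), dtype="int32")

# Toy classifier; the real architecture is not specified in the question.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),        # Image shape: 224x224
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),     # e.g. four ID types
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(),                # Optimizer: SGD
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.fit(
    x_train, y_train,
    batch_size=64,                                      # Batch size: 64
    epochs=10,                                          # Epochs: 10
    verbose=2,                                          # Verbose: 2
)
```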

Suggested Answer: D

The ResourceExhaustedError (OOM) means the GPU ran out of memory while allocating the tensors needed for a training step. Per-step memory is dominated by the activation tensors, whose size scales with both the input image shape and the batch size. Reducing the image shape (option D) shrinks every activation in the network and therefore directly lowers the memory required for each forward and backward pass, at the cost of some input detail; reducing the batch size (option B) lowers memory in the same way, at the cost of noisier gradient estimates. Changing the optimizer (option A) does not help here, since SGD already keeps the smallest optimizer state of the common choices, and changing the learning rate (option C) only affects the size of the weight updates, not how much memory a training step allocates. Neither of those resolves an OOM error.
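
To make the memory-reducing options concrete, here is a minimal sketch that resizes the images and lowers the batch size before training; the 160x160 resize target and the batch size of 32 are assumed example values, not figures from the question.

```python
import numpy as np
import tensorflow as tf

# Placeholder images and labels standing in for the real dataset (assumption).
images = np.zeros((256, 224, 224, 3), dtype="float32")
labels = np.zeros((256,), dtype="int32")

IMG_SIZE = 160    # option D: smaller input shape -> smaller activations
BATCH_SIZE = 32   # option B: fewer examples held in GPU memory per step

def shrink(image, label):
    # Resize before the model so every downstream activation tensor shrinks.
    return tf.image.resize(image, (IMG_SIZE, IMG_SIZE)), label

train_ds = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .map(shrink)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)

# A model whose Input layer matches the new shape, e.g. (160, 160, 3), can then
# be trained with model.fit(train_ds, epochs=10, verbose=2) using far less GPU
# memory per step than the original 224x224 / batch-64 configuration.
```

Either change alone is often enough to clear the OOM error; they can also be combined, as shown.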


Contribute your Thoughts:

Sylvie
2 months ago
Alright, who's the genius that chose a 224x224 image shape for a GPU-powered VM? That's like trying to fit a monster truck in a Smart car!
upvoted 0 times
Xenia
1 month ago
C) Change the learning rate
upvoted 0 times
Annamae
1 month ago
B) Reduce the batch size
upvoted 0 times
Thaddeus
1 month ago
A) Change the optimizer
upvoted 0 times
Lindsey
2 months ago
Hmm, 'out of Memory' error? Looks like someone's been skipping their GPU diet. Time to go on a batch size reduction binge!
upvoted 0 times
Gaston
6 days ago
D) Reduce the image shape
upvoted 0 times
Shawnta
17 days ago
C) Change the learning rate
upvoted 0 times
Fletcher
26 days ago
B) Reduce the batch size
upvoted 0 times
Gayla
2 months ago
A) Change the optimizer
upvoted 0 times
Mammie
2 months ago
Changing the learning rate? I don't think that's going to help with the OOM error. Gotta free up that GPU memory, my friend.
upvoted 0 times
Reita
2 months ago
I would try reducing the image shape first. Smaller input size means less memory required for the model, and you can always resize the images later.
upvoted 0 times
Thad
2 months ago
Reducing the batch size seems like the obvious choice here. Too large a batch can easily exhaust GPU memory, especially with high-resolution images.
upvoted 0 times
Garry
14 days ago
Changing the learning rate could also potentially help with memory management.
upvoted 0 times
Laurel
15 days ago
C) Change the learning rate
upvoted 0 times
Norah
16 days ago
Yes, reducing the batch size is a common solution to memory errors during training.
upvoted 0 times
Val
17 days ago
B) Reduce the batch size
upvoted 0 times
Bulah
18 days ago
I think changing the optimizer might also be worth trying to optimize memory usage.
upvoted 0 times
Lisbeth
21 days ago
A) Change the optimizer
upvoted 0 times
Tijuana
1 month ago
That's a good point, reducing the batch size should help with the memory issue.
upvoted 0 times
Evelynn
2 months ago
B) Reduce the batch size
upvoted 0 times
Paris
2 months ago
I think changing the optimizer might also help in resolving the memory error.
upvoted 0 times
Margurite
3 months ago
I agree with Casey, reducing the batch size should help with the memory issue.
upvoted 0 times
Casey
3 months ago
I think we should reduce the batch size.
upvoted 0 times
