Welcome to Pass4Success


Google Exam Professional Machine Learning Engineer Topic 5 Question 80 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 80
Topic #: 5
[All Professional Machine Learning Engineer Questions]

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
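A common reason "no other changes" yields no speedup is that tf.distribute.MirroredStrategy splits the global batch across replicas: if the batch size stays at the single-GPU value, each of the 4 GPUs processes a quarter of the work per step while per-step overhead (gradient all-reduce, input pipeline) stays constant. A hedged sketch of the usual fix, scaling the global batch by the replica count (the batch sizes below are illustrative, not from the question):

```python
# Sketch: scale the global batch size with the number of replicas so each
# GPU stays as busy as the single-GPU baseline. Assumes synchronous data
# parallelism as in tf.distribute.MirroredStrategy.

def scale_batch_size(per_replica_batch: int, num_replicas: int) -> int:
    """Global batch size that keeps per-replica work equal to the
    single-GPU baseline."""
    return per_replica_batch * num_replicas

baseline_batch = 64   # hypothetical batch size used on the single GPU
num_replicas = 4      # e.g. MirroredStrategy over 4 GPUs

global_batch = scale_batch_size(baseline_batch, num_replicas)
print(global_batch)   # 256: each replica still sees 64 samples per step
```

In real Keras code this global batch size would be passed to the tf.data pipeline built under the strategy scope; larger global batches are often paired with a proportionally scaled learning rate.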

Suggested Answer: D

Contribute your Thoughts:

Micheal
3 days ago
Interesting question. I think option C looks promising - using a TPU with tf.distribute.TPUStrategy could really speed up the training process.
upvoted 0 times
Omega
7 days ago
Hmm, I would say option B. Creating a custom training loop can help you fine-tune the distribution of the training process and potentially improve the performance.
upvoted 0 times
Gail
14 days ago
I think we should also consider using a TPU with tf.distribute.TPUStrategy for faster training.
upvoted 0 times
Pok
18 days ago
I agree with Ellsworth. That might help improve the training time.
upvoted 0 times
Ellsworth
21 days ago
I think we should try distributing the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
upvoted 0 times
