You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
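A frequent reason MirroredStrategy alone yields no speedup is that the global batch size and input pipeline were left unchanged, so each GPU simply processes a smaller slice of the same batch while data loading becomes the bottleneck. The sketch below is a minimal, hedged illustration of the usual fix: scale the batch size by `strategy.num_replicas_in_sync`, build the model inside the strategy scope, and prefetch the dataset. The base batch size of 64 and the tiny synthetic model are hypothetical stand-ins, not the question's actual setup.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
num_replicas = strategy.num_replicas_in_sync

# Scale the global batch size with the replica count so each GPU keeps
# the same per-device workload (64 is a hypothetical base size).
per_replica_batch_size = 64
global_batch_size = per_replica_batch_size * num_replicas

# Prefetching keeps the GPUs fed so the input pipeline is not the bottleneck.
dataset = (
    tf.data.Dataset.from_tensor_slices(
        (tf.random.normal([1024, 10]), tf.random.normal([1024, 1]))
    )
    .shuffle(1024)
    .batch(global_batch_size)
    .prefetch(tf.data.AUTOTUNE)
)

# Model and optimizer must be created inside the strategy scope so their
# variables are mirrored across all replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

history = model.fit(dataset, epochs=1, verbose=0)
```

With only these changes, per-step time stays roughly constant while each step consumes `num_replicas` times more data, which is what actually cuts wall-clock training time.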