
Google Exam Professional Data Engineer Topic 1 Question 78 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 78
Topic #: 1
[All Professional Data Engineer Questions]

You have a data processing application that runs on Google Kubernetes Engine (GKE). Containers need to be launched with their latest available configurations from a container registry. Your GKE nodes need to have GPUs, local SSDs, and 8 Gbps bandwidth. You want to efficiently provision the data processing infrastructure and manage the deployment process. What should you do?
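For reference, the node requirements in this question (GPUs, local SSDs, high egress bandwidth) translate directly into a GKE node pool definition. The snippet below is a minimal, unofficial sketch using the Terraform Google provider; the cluster name, pool name, zone, GPU model, and machine type are placeholder assumptions, not values taken from the exam.

# Minimal sketch only -- not the suggested answer. All names (cluster, pool,
# zone, GPU model, machine type) are placeholder assumptions.
resource "google_container_cluster" "primary" {
  name                     = "data-processing"   # hypothetical cluster name
  location                 = "us-central1-a"     # assumed zone with T4 GPUs
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "gpu_ssd_pool" {
  name       = "gpu-ssd-pool"                    # hypothetical pool name
  cluster    = google_container_cluster.primary.name
  location   = google_container_cluster.primary.location
  node_count = 1

  node_config {
    # Assumed shape; per-VM egress bandwidth scales with vCPU count, and this
    # size is chosen only to illustrate a shape that exceeds 8 Gbps.
    machine_type    = "n1-standard-8"
    local_ssd_count = 2                          # attaches local SSDs to each node

    guest_accelerator {
      type  = "nvidia-tesla-t4"                  # assumed GPU model
      count = 1
    }
    # NVIDIA driver installation on the nodes is handled separately by GKE.

    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
  }
}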


Contribute your Thoughts:

Adelina
11 months ago
I see the benefit of using Cloud Scheduler to run the job as well.
upvoted 0 times
...
Tammara
11 months ago
I believe using Dataflow for the data pipeline is the best option.
upvoted 0 times
...
Leila
12 months ago
But wouldn't using GKE to autoscale containers be more efficient?
upvoted 0 times
...
Adelina
12 months ago
I prefer using Cloud Build with Terraform to provision the infrastructure.
upvoted 0 times
...
Leila
12 months ago
I think we should use Compute Engine startup scripts and gcloud commands.
upvoted 0 times
...
Amos
1 year ago
Absolutely, and Dataflow and Cloud Scheduler seem more focused on data processing pipelines, which isn't the core requirement here. Cloud Build and Terraform is the way to go.
upvoted 0 times
...
Portia
1 year ago
Yeah, that's a good point. The other options like using Compute Engine startup scripts or GKE autoscaling don't seem to address the need for specialized hardware like GPUs and local SSDs.
upvoted 0 times
...
Ariel
1 year ago
I agree, the Cloud Build and Terraform option seems like the most comprehensive and efficient solution. It allows us to manage the entire provisioning and deployment process programmatically.
upvoted 0 times
...
Theresia
1 year ago
Hmm, this question seems to be testing our knowledge of infrastructure provisioning and deployment strategies for GKE. I'm thinking the best approach would be to use Cloud Build and Terraform to provision the infrastructure and deploy the latest container images.
upvoted 0 times
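To make the Cloud Build plus Terraform approach concrete, here is a rough sketch of a Cloud Build trigger (itself defined in Terraform) that applies the infrastructure configuration and then points the GKE workload at the newest image. The repository, deployment, image path, and builder-image tags are hypothetical, not taken from the question.

# Rough sketch, assumptions throughout: repo, deployment, image, and builder
# tags are placeholders. The trigger runs Terraform, then updates the workload.
resource "google_cloudbuild_trigger" "provision_and_deploy" {
  name = "provision-and-deploy"                  # hypothetical trigger name

  trigger_template {
    repo_name   = "data-processing-infra"        # hypothetical Cloud Source repo
    branch_name = "main"
  }

  build {
    step {
      name = "hashicorp/terraform:1.5"           # assumed Terraform builder image
      args = ["init"]
    }
    step {
      name = "hashicorp/terraform:1.5"
      args = ["apply", "-auto-approve"]
    }
    step {
      # Roll the workload to the latest image tag after provisioning.
      name = "gcr.io/cloud-builders/kubectl"
      args = [
        "set", "image", "deployment/processor",
        "processor=us-docker.pkg.dev/PROJECT_ID/images/processor:latest"
      ]
      env = [
        "CLOUDSDK_COMPUTE_ZONE=us-central1-a",
        "CLOUDSDK_CONTAINER_CLUSTER=data-processing"
      ]
    }
  }
}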
Dallas
1 year ago
C) Use Cloud Build to schedule a job using Terraform build to provision the infrastructure and launch with the most current container images.
upvoted 0 times
...
Evette
1 year ago
I don't think that option covers the requirement to launch containers with their latest configurations.
upvoted 0 times
...
Winifred
1 year ago
B) Use GKE to autoscale containers, and use gcloud commands to provision the infrastructure.
upvoted 0 times
...
Malcolm
1 year ago
That seems like a solid plan to efficiently deploy the latest container images.
upvoted 0 times
...
German
1 year ago
C) Use Cloud Build to schedule a job using Terraform build to provision the infrastructure and launch with the most current container images.
upvoted 0 times
...
Alease
1 year ago
Hmm, that sounds like a good option for provisioning the infrastructure.
upvoted 0 times
...
Lenora
1 year ago
A) Use Compute Engine startup scripts to pull container images, and use gcloud commands to provision the infrastructure.
upvoted 0 times
...

