
Google Exam Professional Cloud Architect Topic 1 Question 78 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 78
Topic #: 1

Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team. Which three actions should you take? Choose 3 answers

Suggested Answer: B

The recommended practice for managing logs generated on Compute Engine is to install the Cloud Logging agent on the instances and send the logs to Cloud Logging.

The collected logs can then be routed through a Cloud Logging sink and exported to Cloud Storage.

Cloud Storage is the right export destination here because the scenario calls for lifecycle management based on the storage period.

In this case, the logs are needed for active queries for 30 days after they are saved; after that, they must be retained for a longer period for auditing purposes.

For active queries, we can use BigQuery's ability to query data in Cloud Storage directly (external tables), and move data older than 30 days to Coldline to build a cost-optimal solution.
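The query-in-place approach described above can be sketched with the `bq` CLI; this is a hedged illustration, not part of the question — the dataset, table, and bucket names are hypothetical placeholders, and it assumes the exported log files are newline-delimited JSON:

```shell
# Hypothetical sketch: define an external table over the exported log
# files so they can be queried in place, without loading into BigQuery.
# "logs_dataset", "batch_logs", and "gs://batch-logs" are placeholders.
bq query --use_legacy_sql=false '
CREATE EXTERNAL TABLE logs_dataset.batch_logs
OPTIONS (
  format = "NEWLINE_DELIMITED_JSON",
  uris = ["gs://batch-logs/logs/*.json"]
)'
```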

Therefore, the correct answer is as follows:

1. Install the Cloud Logging agent on all instances.

2. Create a sink that exports the logs to a regional Cloud Storage bucket.

3. Create an Object Lifecycle rule to move the files to a Coldline Cloud Storage bucket after one month.

4. Set up a bucket-level retention policy using Bucket Lock.
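Assuming the four steps above are implemented with the standard `gcloud` and `gsutil` tooling, a minimal sketch might look like the following. The sink name, bucket name, log filter, and retention period are hypothetical placeholders, not values from the question:

```shell
# 1. On each instance: install the legacy Cloud Logging agent
#    (script URL is Google's published installer).
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install

# 2. Create a log sink that exports matching entries to a Cloud Storage
#    bucket ("batch-logs-sink" and "batch-logs" are placeholders).
gcloud logging sinks create batch-logs-sink \
    storage.googleapis.com/batch-logs \
    --log-filter='resource.type="gce_instance"'

# 3. Lifecycle rule: move objects to Coldline after 30 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://batch-logs

# 4. Retention policy plus Bucket Lock (locking is irreversible;
#    the 365-day period is an illustrative assumption).
gsutil retention set 365d gs://batch-logs
gsutil retention lock gs://batch-logs
```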


Contribute your Thoughts:

Gail
1 month ago
I bet the developers used the 'Kernel Panic' function to make their code more exciting. It's the new 'Hello, World!'
upvoted 0 times
Janessa
1 month ago
I wonder if the development team tested the kernel module thoroughly before deploying it to production. Seems like they might have some explaining to do.
upvoted 0 times
Lenora
18 days ago
C) Use gcloud or Cloud Console to connect to the serial console and observe the logs.
upvoted 0 times
Francisca
20 days ago
A) Use Stackdriver Logging to search for the module log entries.
upvoted 0 times
Alesia
1 month ago
B) Read the debug GCE Activity log using the API or Cloud Console.
upvoted 0 times
Alayna
1 month ago
A) Use Stackdriver Logging to search for the module log entries.
upvoted 0 times
Elli
2 months ago
F seems like a bit of an overkill. Why go through the trouble of exporting and running a debug VM when you can just check the logs on the actual servers?
upvoted 0 times
Ashlyn
8 days ago
C) Use gcloud or Cloud Console to connect to the serial console and observe the logs.
upvoted 0 times
Cassi
10 days ago
B) Read the debug GCE Activity log using the API or Cloud Console.
upvoted 0 times
Tracey
16 days ago
A) Use Stackdriver Logging to search for the module log entries.
upvoted 0 times
Precious
2 months ago
D and E could also be helpful, but I'm not sure if they're as important as the first three. Identifying any live migration events and checking the Stackdriver metrics might provide additional context.
upvoted 0 times
Marica
14 days ago
D) Identify whether a live migration event of the failed server occurred, using the activity log.
upvoted 0 times
Delfina
17 days ago
C) Use gcloud or Cloud Console to connect to the serial console and observe the logs.
upvoted 0 times
Helga
1 month ago
A) Use Stackdriver Logging to search for the module log entries.
upvoted 0 times
Yuette
2 months ago
In addition to that, we should adjust the Google Stackdriver timeline to match the failure time and observe the batch server metrics.
upvoted 0 times
Brock
2 months ago
I think A, B, and C are the best options here. Looking at the logs is crucial to understand what went wrong with the kernel module.
upvoted 0 times
Pearline
25 days ago
Exporting a debug VM into an image and running it locally could also help in troubleshooting.
upvoted 0 times
Rickie
1 month ago
Connecting to the serial console using gcloud or Cloud Console to observe the logs is definitely necessary.
upvoted 0 times
Minna
2 months ago
Reading the debug GCE Activity log can provide more insights into the failure.
upvoted 0 times
Jeannine
2 months ago
I agree, checking the module log entries with Stackdriver Logging is a good start.
upvoted 0 times
Kallie
2 months ago
I agree with Samira. We also need to read the debug GCE Activity log using the API or Cloud Console.
upvoted 0 times
Samira
3 months ago
I think we should use Stackdriver Logging to search for the module log entries.
upvoted 0 times
