
Google Exam Professional Cloud Architect Topic 1 Question 78 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 78
Topic #: 1
[All Professional Cloud Architect Questions]

Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team. Which three actions should you take? (Choose 3 answers.)

Suggested Answer: B

The best practice for managing logs generated on Compute Engine instances is to install the Cloud Logging agent and send the logs to Cloud Logging.
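As a rough sketch, the agent can be installed on each batch VM with Google's published repo script (the Ops Agent is the current bundle that includes logging); the exact commands below assume a Debian/Ubuntu image:

```shell
# Download Google's repository setup script and install the Ops Agent,
# which collects system and application logs and ships them to Cloud Logging.
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install

# Confirm the agent service is running on the VM.
sudo systemctl status google-cloud-ops-agent
```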

The collected logs can then be routed through a Cloud Logging sink and exported to Cloud Storage.
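A minimal sketch of such a sink, using placeholder names (`batch-logs-sink`, `example-batch-logs`) for illustration:

```shell
# Create a sink that routes GCE instance logs to a Cloud Storage bucket.
gcloud logging sinks create batch-logs-sink \
  storage.googleapis.com/example-batch-logs \
  --log-filter='resource.type="gce_instance"'

# The create command prints a writer service account; grant it permission
# to write objects into the destination bucket (substitute the real account).
gsutil iam ch \
  serviceAccount:WRITER_SERVICE_ACCOUNT:roles/storage.objectCreator \
  gs://example-batch-logs
```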

Cloud Storage is the right destination for the logs because the scenario calls for a lifecycle policy based on how long the data has been stored.

In this case, the logs will be queried actively for 30 days after they are saved, but after that they need to be retained for a longer period for auditing purposes.

If the data still needs to be queried, BigQuery can read the files directly in Cloud Storage through an external (federated) table, and moving objects older than 30 days to Coldline yields a cost-optimal solution.

Therefore, the correct answer is as follows:

1. Install the Cloud Logging agent on all instances.

2. Create a sink that exports the logs to a regional Cloud Storage bucket.

3. Create an Object Lifecycle rule to move the files to a Coldline Cloud Storage bucket after one month.

4. Set up a bucket-level retention policy using bucket lock.
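Steps 3 and 4 above can be sketched with `gsutil`, again using a placeholder bucket name; note that locking a retention policy is irreversible, and the one-year retention period here is only an assumed example:

```shell
# Step 3: lifecycle rule -- transition objects to Coldline after 30 days.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://example-batch-logs

# Step 4: bucket-level retention policy, then lock it (cannot be undone).
gsutil retention set 1y gs://example-batch-logs
gsutil retention lock gs://example-batch-logs
```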


Contribute your Thoughts:

Precious
2 days ago
D and E could also be helpful, but I'm not sure if they're as important as the first three. Identifying any live migration events and checking the Stackdriver metrics might provide additional context.
upvoted 0 times
...
Yuette
15 days ago
In addition to that, we should adjust the Google Stackdriver timeline to match the failure time and observe the batch server metrics.
upvoted 0 times
...
Brock
16 days ago
I think A, B, and C are the best options here. Looking at the logs is crucial to understand what went wrong with the kernel module.
upvoted 0 times
...
Kallie
18 days ago
I agree with Samira. We also need to read the debug GCE Activity log using the API or Cloud Console.
upvoted 0 times
...
Samira
25 days ago
I think we should use Stackdriver Logging to search for the module log entries.
upvoted 0 times
...
