Welcome to Pass4Success


Google Exam Professional Machine Learning Engineer Topic 3 Question 92 Discussion

Actual exam question for Google's Professional Machine Learning Engineer exam
Question #: 92
Topic #: 3

Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on the message board. You deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do?
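The disparity described in the question is found by slicing evaluation metrics per group. Below is a minimal sketch, in plain Python with hypothetical illustrative data, of computing a classifier's false positive rate separately for each user group; the group names, labels, and predictions are assumptions for illustration, not part of the exam question.

```python
# Slice-based evaluation sketch: compute the false positive rate (FPR)
# per group to surface a fairness disparity. All data is hypothetical.

def false_positive_rate(labels, preds):
    """FPR = FP / (FP + TN), computed over the truly-benign examples."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(records):
    """records: iterable of (group, true_label, predicted_label);
    0 = benign, 1 = toxic."""
    groups = {}
    for group, y, p in records:
        ys, ps = groups.setdefault(group, ([], []))
        ys.append(y)
        ps.append(p)
    return {g: false_positive_rate(ys, ps) for g, (ys, ps) in groups.items()}

# Hypothetical evaluation slice: group_b's benign comments are
# misclassified as toxic twice as often as group_a's.
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]
print(fpr_by_group(records))
```

A per-group breakdown like this is how the higher false positive rate for comments referencing underrepresented groups would be detected in the first place.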

Suggested Answer: C

The best way to operationalize your training process is to use Vertex AI Pipelines, which lets you create and run scalable, portable, and reproducible workflows for your ML models. Vertex AI Pipelines also integrates with Vertex ML Metadata, which tracks the provenance, lineage, and artifacts of your models. With a CustomTrainingJobOp component, you can train your model using the same code as in your Jupyter notebook. With a ModelUploadOp component, you can upload the trained model to the Vertex AI Model Registry, which manages your models' versions and endpoints. Cloud Scheduler and Cloud Functions can then trigger the pipeline to run weekly, according to your plan.

Reference:

Vertex AI Pipelines documentation

Vertex ML Metadata documentation

Vertex AI CustomTrainingJobOp documentation

ModelUploadOp documentation

Cloud Scheduler documentation

Cloud Functions documentation


Contribute your Thoughts:

Mollie
8 days ago
Replacing the model? Sounds like a lot of work. Why not just teach the AI to recognize sarcasm and irony? Problem solved!
upvoted 0 times
...
Larue
9 days ago
Synthetic data, huh? Sounds like a job for the AI squad. Though I'd keep an eye on the 'robot overlord' situation, just in case.
upvoted 0 times
...
Tandra
17 days ago
Hmm, option D sounds like the most practical approach given the budget constraints. Why make life harder for the mods, right?
upvoted 0 times
Becky
3 days ago
I agree, option D seems like the best choice to reduce false positives.
upvoted 0 times
...
...
Galen
24 days ago
I agree with Tracey, raising the threshold could be a simpler solution.
upvoted 0 times
...
Leigha
25 days ago
Oh boy, the old 'toxic language' vs. 'religious sensitivity' conundrum. I bet this one's a real headache for the dev team!
upvoted 0 times
...
Tracey
26 days ago
But wouldn't it be better to raise the threshold for comments instead?
upvoted 0 times
...
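The threshold suggestion raised in the thread above can be illustrated with a small sketch in plain Python: flag a comment as toxic only when the model's score exceeds a decision threshold, and observe the trade-off as the threshold rises. The scores and labels below are hypothetical, not from the exam question.

```python
# Threshold sketch: raising the decision threshold reduces false
# positives (benign comments flagged as toxic) at the cost of more
# false negatives (toxic comments missed). All data is hypothetical.

def confusion_counts(scores, labels, threshold):
    """Return (false_positives, false_negatives) at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30]  # model toxicity scores
labels = [1,    1,    0,    0,    1,    0]     # 1 = truly toxic

for t in (0.5, 0.7, 0.9):
    fp, fn = confusion_counts(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Note that a single global threshold does not by itself fix a disparity that affects only some groups: it lowers false positives everywhere, including for comments that were being classified correctly.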
Adrianna
28 days ago
I think we should add synthetic training data for those phrases.
upvoted 0 times
...
