
Oracle Exam 1Z0-1127-24 Topic 2 Question 20 Discussion

Actual exam question for Oracle's 1Z0-1127-24 exam
Question #: 20
Topic #: 2

Which is a distinguishing feature of "Parameter-Efficient Fine-tuning (PEFT)" as opposed to classic "fine-tuning" in Large Language Model training?

Suggested Answer: A
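For readers who want to see the distinction in practice, here is a minimal sketch (not taken from the exam material) that contrasts classic fine-tuning with a PEFT approach (LoRA), assuming the Hugging Face transformers and peft libraries are installed; the model name and LoRA hyperparameters are illustrative only.

# Sketch assuming Hugging Face `transformers` and `peft` are available;
# "gpt2" and the LoRA settings below are illustrative, not from the exam.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

# Classic fine-tuning: every weight in the base model stays trainable.
full_trainable = sum(p.numel() for p in base.parameters() if p.requires_grad)
print(f"classic fine-tuning trains {full_trainable:,} parameters")

# PEFT (LoRA): the base model is frozen and only small adapter matrices
# injected into the attention projections are trained, typically on
# labeled, task-specific data.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
peft_model = get_peft_model(base, lora_cfg)
peft_model.print_trainable_parameters()  # reports a small fraction of the total

This matches suggested answer A: only a small set of new (adapter) parameters is updated, using labeled, task-specific data.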

Contribute your Thoughts:

Junita
4 months ago
I bet the exam writers were laughing their heads off when they came up with that last answer choice. 'No training data? Really?'
upvoted 0 times
Loreen
4 months ago
D) Wait, did they really just throw in an answer about no training data? That's just gotta be a trick question.
upvoted 0 times
Shawna
4 months ago
D) Yeah, that does seem like a tricky option. It's probably not the right choice.
upvoted 0 times
Shawna
4 months ago
A) PEFT involves only a few or new parameters and uses labeled, task-specific data.
upvoted 0 times
Nathalie
5 months ago
B) This one seems to be the opposite of what PEFT is supposed to do. Modifying all parameters and using unlabeled data doesn't sound very parameter-efficient to me.
upvoted 0 times
Glen
3 months ago
A) That makes sense, PEFT is supposed to be parameter-efficient after all.
upvoted 0 times
In
4 months ago
C) PEFT does not modify any parameters but uses soft prompting with unlabeled data.
upvoted 0 times
Rochell
4 months ago
B) I agree, modifying all parameters and using unlabeled data doesn't sound efficient at all.
upvoted 0 times
Jess
4 months ago
A) PEFT involves only a few or new parameters and uses labeled, task-specific data.
upvoted 0 times
Xochitl
5 months ago
Actually, I think PEFT does not modify any parameters but uses soft prompting with unlabeled data.
upvoted 0 times
Myong
5 months ago
C) Hmm, I'm not sure that's correct. Doesn't PEFT actually modify the model parameters, not just use soft prompting?
upvoted 0 times
Claribel
4 months ago
A) PEFT involves only a few or new parameters and uses labeled, task-specific data.
upvoted 0 times
Charlene
4 months ago
C) Hmm, I'm not sure that's correct. Doesn't PEFT actually modify the model parameters, not just use soft prompting?
upvoted 0 times
Launa
5 months ago
B) PEFT modifies all parameters and uses unlabeled, task-agnostic data.
upvoted 0 times
Annabelle
5 months ago
A) PEFT involves only a few or new parameters and uses labeled, task-specific data.
upvoted 0 times
Bernadine
5 months ago
I disagree, I believe PEFT modifies all parameters and uses unlabeled data.
upvoted 0 times
Jaime
5 months ago
A) Looks like the right answer to me. Modifying only a few parameters and using labeled data sounds like a smart way to fine-tune large language models.
upvoted 0 times
Vi
4 months ago
C) Definitely, using soft prompting with unlabeled data while not modifying any parameters doesn't sound very effective.
upvoted 0 times
Makeda
5 months ago
B) I agree, it seems like a more efficient approach compared to modifying all parameters and using unlabeled data.
upvoted 0 times
Tanesha
5 months ago
A) Looks like the right answer to me. Modifying only a few parameters and using labeled data sounds like a smart way to fine-tune large language models.
upvoted 0 times
Janna
6 months ago
I think the distinguishing feature of PEFT is that it involves only a few new parameters and uses labeled data.
upvoted 0 times
