What differentiates semantic search from traditional keyword search?
Comprehensive and Detailed In-Depth Explanation:
Semantic search uses embeddings and NLP to understand the meaning, intent, and context behind a query, rather than just matching exact keywords (as in traditional search). This enables more relevant results, even if exact terms aren't present, making Option C correct. Options A and B describe traditional keyword search mechanics. Option D is unrelated, as metadata like date or author isn't the primary focus of semantic search. Semantic search leverages vector representations for deeper understanding.
Reference: OCI 2025 Generative AI documentation likely contrasts semantic and keyword search under search or retrieval sections.
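To illustrate the distinction, here is a minimal sketch that ranks documents by embedding similarity rather than keyword overlap. The embed() function is a placeholder for whatever embedding model you have access to (for example, an OCI Generative AI embedding endpoint); cosine similarity is one common way to compare the resulting vectors.

import numpy as np

def embed(text):
    """Placeholder: return a dense vector for `text` from a real embedding model.
    Swap this in before running a search."""
    raise NotImplementedError("replace with an actual embedding call")

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query, documents, top_k=3):
    query_vec = embed(query)
    scored = [(doc, cosine_similarity(query_vec, embed(doc))) for doc in documents]
    # Highest similarity first: with a real embedding model, "car maintenance"
    # can match "automobile servicing" even though the two share no keywords.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]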
How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?
Comprehensive and Detailed In-Depth Explanation:
Temperature adjusts the softmax distribution in decoding. Increasing it (e.g., to 2.0) flattens the curve, giving lower-probability words a better chance and thus increasing diversity, so Option C is correct. Option A exaggerates: top words still have impact, just less dominance. Option B is backwards: decreasing temperature sharpens the distribution rather than broadening it. Option D is false: temperature directly alters the distribution, not generation speed. This parameter controls output creativity.
Reference: OCI 2025 Generative AI documentation likely reiterates temperature effects under decoding parameters.
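A small, self-contained sketch of temperature scaling (the logits below are made up for illustration): dividing the raw scores by the temperature before the softmax flattens the distribution when the temperature is above 1 and sharpens it when below 1.

import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = logits / temperature      # divide raw scores by T before softmax
    scaled -= scaled.max()             # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = np.array([4.0, 2.0, 1.0, 0.5])   # hypothetical scores for four tokens
for t in (0.5, 1.0, 2.0):
    print(t, np.round(softmax_with_temperature(logits, t), 3))
# T=0.5 concentrates probability on the top token; T=2.0 spreads it toward
# lower-ranked tokens, which increases output diversity.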
Why is it challenging to apply diffusion models to text generation?
Comprehensive and Detailed In-Depth Explanation:
Diffusion models, widely used for image generation, iteratively denoise data from pure noise into a structured output. Images are continuous (pixel values), while text is categorical (discrete tokens), so applying diffusion directly to text is challenging because the denoising process struggles in discrete spaces. This makes Option C correct. Option A is false: text generation can benefit from complex models. Option B is incorrect: text is categorical, not continuous. Option D is wrong: diffusion models aren't inherently image-only, but they are better suited to continuous data. Research adapts diffusion for text, but it's less straightforward.
Reference: OCI 2025 Generative AI documentation likely discusses diffusion models under generative techniques, noting their image focus.
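The toy comparison below (not a real diffusion model) shows why Gaussian noising, the core operation of diffusion, fits continuous pixel values but not discrete token IDs; the arrays and values are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Continuous image data: adding noise still yields valid (if noisier) pixel values.
pixels = rng.uniform(0.0, 1.0, size=(4, 4))
noisy_pixels = pixels + 0.1 * rng.standard_normal(pixels.shape)

# Discrete text data: token IDs are categorical indices into a vocabulary.
token_ids = np.array([101, 2023, 2003, 1037, 7099, 102])
noisy_ids = token_ids + 0.1 * rng.standard_normal(token_ids.shape)
# A value like 2023.07 is not a valid token ID; the model cannot move "partway"
# between categories, which is why text diffusion needs workarounds such as
# noising in embedding space or discrete corruption schemes.
print(noisy_pixels.round(2))
print(noisy_ids.round(2))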
Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
Comprehensive and Detailed In-Depth Explanation:
In LangChain, PromptTemplate supports any number of input_variables (zero, one, or more), allowing flexible prompt design, so Option C is correct. The example shows two variables, but that is not a requirement. Option A (a minimum of two) is false; no such limit exists. Option B (a single variable only) is too restrictive. Option D (no variables) contradicts the class's purpose; variables are optional but supported. This adaptability aids prompt engineering.
Reference: OCI 2025 Generative AI documentation likely covers PromptTemplate under LangChain prompt design.
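A short sketch assuming LangChain's PromptTemplate (the import path varies slightly across LangChain versions), showing that two, one, or zero input variables are all accepted:

from langchain_core.prompts import PromptTemplate  # older releases: from langchain.prompts import PromptTemplate

# Two variables, as in the question's snippet.
template = "You are a travel guide for {city}. Answer this question: {human_input}"
two_vars = PromptTemplate(input_variables=["human_input", "city"], template=template)
print(two_vars.format(human_input="Best museums?", city="Lisbon"))

# One variable.
one_var = PromptTemplate(input_variables=["topic"], template="Summarize {topic} in one line.")
print(one_var.format(topic="vector databases"))

# Zero variables: a static prompt is also valid.
no_vars = PromptTemplate(input_variables=[], template="Tell me a fun fact.")
print(no_vars.format())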
What does a higher number assigned to a token signify in the "Show Likelihoods" feature of language model token generation?
Comprehensive and Detailed In-Depth Explanation:
In "Show Likelihoods," a higher number (probability score) indicates a token's greater likelihood of following the current token, reflecting the model's prediction confidence, so Option B is correct. Option A (less likely) is the opposite. Option C (unrelated to preceding tokens) misinterprets the feature; likelihood ties tokens together contextually. Option D (the only possible token) assumes greedy decoding, which is not the feature's purpose. This helps users understand the model's preferences.
Reference: OCI 2025 Generative AI documentation likely explains "Show Likelihoods" under token generation insights.
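A minimal illustration of the idea behind per-token likelihoods (the candidate tokens and scores below are invented): the softmax turns raw scores into probabilities, and the token with the highest number is the one the model considers most likely to come next.

import numpy as np

candidates = ["blue", "cloudy", "falling", "banana"]
logits = np.array([3.2, 2.5, 1.1, -2.0])   # hypothetical scores after "The sky is"

probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in sorted(zip(candidates, probs), key=lambda x: x[1], reverse=True):
    print(f"{token:>8}: {p:.3f}")
# "blue" receives the highest number, i.e., it is the most likely continuation.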