Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
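For readers unfamiliar with the pipe syntax, here is a minimal, self-contained sketch of the composition idea behind LCEL's `|` operator. This is illustrative only; it is not LangChain's actual `Runnable` implementation, and the class and lambdas below are made up for the example.

```python
# Illustrative sketch of LCEL-style pipe composition (NOT LangChain's
# real Runnable class; names here are hypothetical).
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` builds a new runnable: output of a feeds into b.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Stand-ins for a prompt template and an LLM call.
prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
llm = Runnable(lambda text: f"[LLM response to: {text}]")

chain = prompt | llm
result = chain.invoke("cats")
```

The key point the question tests is that the pipe builds a declarative pipeline whose components share a common invocation interface.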
This exam has 40 community-verified practice questions.
Which component of Retrieval-Augmented Generation (RAG) evaluates and prioritizes the information retrieved by the retrieval system?
Analyze the user prompts provided to a language model. Which scenario exemplifies prompt injection (jailbreaking)?
Which is a key characteristic of the annotation process used in T-Few fine-tuning?
What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?
Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?
Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?
Why is normalization of vectors important before indexing in a hybrid search system?
Which technique involves prompting the Large Language Model (LLM) to emit intermediate reasoning steps as part of its response?
How does the architecture of dedicated AI clusters contribute to minimizing GPU memory overhead for T-Few fine-tuned model inference?
Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-most, or Step-Back prompting technique.
What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?
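The effect being tested is how temperature rescales the token distribution before sampling. The sketch below is a generic softmax-with-temperature calculation, not OCI's internal implementation; the logit values are arbitrary examples.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing logits by the temperature before softmax:
    # low temperature -> sharper distribution (more deterministic output),
    # high temperature -> flatter distribution (more random output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]               # hypothetical token logits
low = softmax_with_temperature(logits, 0.2)
high = softmax_with_temperature(logits, 2.0)
# At temperature 0.2 the top token dominates; at 2.0 probabilities flatten.
```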
Which is NOT a typical use case for LangSmith Evaluators?
Given a block of code:
qa = ConversationalRetrievalChain.from_llm(llm, retriever=retv, memory=memory)
When does a chain typically interact with memory during execution?
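The pattern behind this question can be sketched in plain Python: a chain typically reads memory before the core call and writes to it afterward. The classes below are hypothetical stand-ins, not LangChain's actual chain or memory code.

```python
# Hypothetical sketch of the read-before / write-after memory pattern
# (not LangChain's real implementation).
class SimpleMemory:
    def __init__(self):
        self.history = []

    def load(self):
        return list(self.history)

    def save(self, user_input, output):
        self.history.append((user_input, output))

class ChainWithMemory:
    def __init__(self, memory, respond):
        self.memory = memory
        self.respond = respond  # stand-in for the LLM call

    def run(self, user_input):
        context = self.memory.load()                 # 1. read memory before the call
        output = self.respond(user_input, context)   # 2. core call uses the context
        self.memory.save(user_input, output)         # 3. write memory after the call
        return output

memory = SimpleMemory()
chain = ChainWithMemory(
    memory,
    lambda user_input, context: f"turn {len(context) + 1}",
)
first = chain.run("Hello")      # memory empty at read time
second = chain.run("Hi again")  # one prior turn visible at read time
```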
What does a dedicated RDMA cluster network do during model fine-tuning and inference?
Which is the main characteristic of greedy decoding in the context of language model word prediction?
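Greedy decoding can be shown in a few lines: at each step the single highest-probability token is selected, with no sampling. The vocabulary and probabilities below are made-up example values.

```python
def greedy_pick(probs):
    # Greedy decoding: always take the argmax of the next-token
    # distribution -- deterministic, never samples.
    return max(range(len(probs)), key=lambda i: probs[i])

vocab = ["cat", "dog", "fish"]          # hypothetical vocabulary
probs = [0.2, 0.7, 0.1]                 # hypothetical next-token probabilities
token = vocab[greedy_pick(probs)]       # always the same token for these probs
```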