Certified Generative AI Engineer Associate

By Databricks
Aug 2025

Verified


Question 1

A Generative AI Engineer has created a RAG application to look up answers to questions about a series of fantasy novels that are being asked on the author’s web forum. The fantasy novel texts are chunked and embedded into a vector store with metadata (page number, chapter number, book title), retrieved with the user’s query, and provided to an LLM for response generation. The Generative AI Engineer used their intuition to pick the chunking strategy and associated configurations but now wants to choose the best values more methodically.
Which TWO strategies should the Generative AI Engineer take to optimize their chunking strategy and parameters? (Choose two.)

  • A: Change embedding models and compare performance.
  • B: Add a classifier for user queries that predicts which book will best contain the answer. Use this to filter retrieval.
  • C: Choose an appropriate evaluation metric (such as recall or NDCG) and experiment with changes in the chunking strategy, such as splitting chunks by paragraphs or chapters. Choose the strategy that gives the best performance metric.
  • D: Pass known questions and best answers to an LLM and instruct the LLM to provide the best token count. Use a summary statistic (mean, median, etc.) of the best token counts to choose chunk size.
  • E: Create an LLM-as-a-judge metric to evaluate how well previous questions are answered by the most appropriate chunk. Optimize the chunking parameters based upon the values of the metric.
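As a concrete illustration of the metric-driven approach in option C, the sketch below scores two candidate chunking strategies with recall@1 on a tiny hand-made evaluation set. The novel text, the questions, and the word-overlap "retriever" (a stand-in for embedding similarity) are all invented for illustration:

```python
# Score two chunking strategies with recall@1. The toy retriever ranks
# chunks by word overlap with the query -- a stand-in for a real
# embedding-based vector search.

def chunk_by_paragraph(text):
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def chunk_fixed(text, size=8):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, query, k=1):
    q = set(query.lower().split())
    ranked = sorted(chunks, key=lambda c: -len(set(c.lower().split()) & q))
    return ranked[:k]

def recall_at_k(chunks, eval_set, k=1):
    hits = sum(1 for q, answer in eval_set
               if any(answer in c for c in retrieve(chunks, q, k)))
    return hits / len(eval_set)

text = ("The dragon guards the northern pass.\n\n"
        "The heir hides in the port city of Vel.")
eval_set = [("Where does the dragon stand guard?", "northern pass"),
            ("Where is the heir hiding?", "Vel")]

for name, chunks in [("paragraph", chunk_by_paragraph(text)),
                     ("fixed-8", chunk_fixed(text))]:
    print(name, recall_at_k(chunks, eval_set))
```

In practice the evaluation set would be real forum questions with labeled source passages, and retrieval would run against the actual vector store; the strategy with the best metric wins.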

Question 2

A Generative AI Engineer is responsible for developing a chatbot to enable their company’s internal HelpDesk Call Center team to more quickly find related tickets and provide resolutions. While creating the work breakdown tasks for this GenAI application, they realize they need to start planning which data sources (either Unity Catalog Volumes or Delta tables) to choose for this application. They have collected several candidate data sources for consideration:

  • call_rep_history: a Delta table with primary keys representative_id, call_id. This table is maintained to calculate representatives’ call resolution rates from the fields call_duration and call_start_time.
  • transcript Volume: a Unity Catalog Volume of all recordings as *.wav files, along with text transcripts as *.txt files.
  • call_cust_history: a Delta table with primary keys customer_id, call_id. This table is maintained to calculate how much internal customers use the HelpDesk, to make sure that the chargeback model is consistent with actual service use.
  • call_detail: a Delta table that includes a snapshot of all call details, updated hourly. It includes root_cause and resolution fields, but those fields may be empty for calls that are still active.
  • maintenance_schedule: a Delta table that lists both HelpDesk application outages and planned upcoming maintenance downtimes.
They need sources that could add context to best identify ticket root cause and resolution.
Which TWO sources do that? (Choose two.)

  • A: call_cust_history
  • B: maintenance_schedule
  • C: call_rep_history
  • D: call_detail
  • E: transcript Volume

Question 3

What is the most suitable library for building a multi-step LLM-based workflow?

  • A: Pandas
  • B: TensorFlow
  • C: PySpark
  • D: LangChain

Question 4

When developing an LLM application, it’s crucial to ensure that the data used for training the model complies with licensing requirements to avoid legal risks.
Which action is NOT appropriate to avoid legal risks?

  • A: Reach out to the data curators directly before you have started using the trained model to let them know.
  • B: Use any available data you personally created which is completely original and you can decide what license to use.
  • C: Only use data explicitly labeled with an open license and ensure the license terms are followed.
  • D: Reach out to the data curators directly after you have started using the trained model to let them know.

Question 5

A Generative AI Engineer is testing a simple prompt template in LangChain using the code below, but is getting an error.

Image 1

Assuming the API key was properly defined, what change does the Generative AI Engineer need to make to fix their chain?

  • A:
  • B:
  • C:
  • D:

Question 6

A Generative AI Engineer is creating an LLM system that will retrieve news articles from the year 1918 related to a user's query and summarize them. The engineer has noticed that the summaries are generated well but often also include an explanation of how the summary was generated, which is undesirable.
Which change could the Generative AI Engineer perform to mitigate this issue?

  • A: Split the LLM output by newline characters to truncate away the summarization explanation.
  • B: Tune the chunk size of news articles or experiment with different embedding models.
  • C: Revisit their document ingestion logic, ensuring that the news articles are being ingested properly.
  • D: Provide few-shot examples of the desired output format in the system and/or user prompt.
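The few-shot approach in option D can be sketched as a prompt builder that prepends example article/summary pairs so the model imitates the bare-summary format. The instruction wording, articles, and summaries below are invented for illustration:

```python
# Build a prompt with few-shot examples so the model copies the
# bare-summary format instead of appending an explanation of its method.

FEW_SHOT = [
    ("Article: Armistice talks continue in Paris...",
     "Summary: Negotiators in Paris moved closer to an armistice agreement."),
    ("Article: Influenza cases rise in coastal cities...",
     "Summary: Coastal cities reported a sharp rise in influenza cases."),
]

def build_prompt(article):
    parts = ["Summarize the 1918 news article. Output only the summary text,",
             "with no explanation of how the summary was produced.", ""]
    for src, out in FEW_SHOT:
        parts += [src, out, ""]
    parts += [f"Article: {article}", "Summary:"]
    return "\n".join(parts)

print(build_prompt("Rail strike halts deliveries in the Midwest..."))
```

Ending the prompt with `Summary:` nudges the model to continue directly with the summary rather than with commentary.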

Question 7

A Generative AI Engineer has developed an LLM application to answer questions about internal company policies. The Generative AI Engineer must ensure that the application doesn’t hallucinate or leak confidential data.
Which approach should NOT be used to mitigate hallucination or confidential data leakage?

  • A: Add guardrails to filter outputs from the LLM before they are shown to the user.
  • B: Fine-tune the model on your data, hoping it will learn what is appropriate and not
  • C: Limit the data available based on the user’s access level
  • D: Use a strong system prompt to ensure the model aligns with your needs.

Question 8

A Generative AI Engineer interfaces with an LLM with prompt/response behavior that has been trained on customer calls inquiring about product availability. The LLM is designed to output only the term “In Stock” if the product is available or only the term “Out of Stock” if not.
Which prompt will allow the engineer to produce the call classification labels correctly?

  • A: Respond with “In Stock” if the customer asks for a product.
  • B: You will be given a customer call transcript where the customer asks about product availability. The outputs are either “In Stock” or “Out of Stock”. Format the output in JSON, for example: {“call_id”: “123”, “label”: “In Stock”}.
  • C: Respond with “Out of Stock” if the customer asks for a product.
  • D: You will be given a customer call transcript where the customer inquires about product availability. Respond with “In Stock” if the product is available or “Out of Stock” if not.
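Whichever prompt is used, constrained-label outputs like these are usually validated after the call. A minimal sketch, assuming the two labels from the question and a hypothetical `NEEDS_REVIEW` fallback for anything else:

```python
# Validate that the model returned exactly one of the two allowed labels;
# anything else is flagged for retry or human review.

ALLOWED = {"In Stock", "Out of Stock"}

def validate_label(llm_output):
    label = llm_output.strip()
    return label if label in ALLOWED else "NEEDS_REVIEW"

print(validate_label("In Stock"))
print(validate_label("The product is available."))
```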

Question 9

A Generative AI Engineer is tasked with developing a RAG application that will help a small internal group of experts at their company answer specific questions, augmented by an internal knowledge base. They want the best possible quality in the answers, and neither latency nor throughput is a huge concern given that the user group is small and they’re willing to wait for the best answer. The topics are sensitive in nature and the data is highly confidential and so, due to regulatory requirements, none of the information is allowed to be transmitted to third parties.
Which model meets all the Generative AI Engineer’s needs in this situation?

  • A: Dolly 1.5B
  • B: OpenAI GPT-4
  • C: BGE-large
  • D: Llama2-70B

Question 10

A Generative AI Engineer would like an LLM to generate formatted JSON from emails. This will require parsing and extracting the following information: order ID, date, and sender email. Here’s a sample email:

Image 1

They will need to write a prompt that will extract the relevant information in JSON format with the highest level of output accuracy.
Which prompt will do that?

  • A: You will receive customer emails and need to extract date, sender email, and order ID. You should return the date, sender email, and order ID information in JSON format.
  • B: You will receive customer emails and need to extract date, sender email, and order ID. Return the extracted information in JSON format. Here’s an example: {“date”: “April 16, 2024”, “sender_email”: “sarah.lee925@gmail.com”, “order_id”: “RE987D”}
  • C: You will receive customer emails and need to extract date, sender email, and order ID. Return the extracted information in a human-readable format.
  • D: You will receive customer emails and need to extract date, sender email, and order ID. Return the extracted information in JSON format.
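For spot-checking the LLM's output shape, a deterministic baseline that produces the same JSON fields as the few-shot example in option B can be useful. The email text below is invented, and the regex field patterns are assumptions about the email layout:

```python
# Regex baseline producing the {"date", "sender_email", "order_id"} JSON
# shape from a structured email, for comparing against LLM extractions.
import json
import re

EMAIL = """From: sarah.lee925@gmail.com
Date: April 16, 2024
Subject: Order RE987D status"""

def extract(email_text):
    return {
        "date": re.search(r"Date:\s*(.+)", email_text).group(1),
        "sender_email": re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", email_text).group(0),
        "order_id": re.search(r"Order\s+([A-Z0-9]{6})", email_text).group(1),
    }

print(json.dumps(extract(EMAIL)))
```

A rigid regex only works when emails follow a fixed template; the LLM prompt with an explicit format example (option B) is what handles free-form emails.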

Question 11

A Generative AI Engineer has been asked to build an LLM-based question-answering application. The application should take into account new documents that are frequently published. The engineer wants to build this application with the least development effort and have it operate at the lowest cost possible.
Which combination of chaining components and configuration meets these requirements?

  • A: For the application a prompt, a retriever, and an LLM are required. The retriever output is inserted into the prompt which is given to the LLM to generate answers.
  • B: The LLM needs to be retrained frequently with the new documents in order to provide the most up-to-date answers.
  • C: For the question-answering application, prompt engineering and an LLM are required to generate answers.
  • D: For the application a prompt, an agent and a fine-tuned LLM are required. The agent is used by the LLM to retrieve relevant content that is inserted into the prompt which is given to the LLM to generate answers.

Question 12

A Generative AI Engineer is designing a RAG application for answering user questions on technical regulations as they learn a new sport.
What are the steps needed to build this RAG application and deploy it?

  • A: Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> Evaluate model –> LLM generates a response –> Deploy it using Model Serving
  • B: Ingest documents from a source –> Index the documents and save to Vector Search –> User submits queries against an LLM –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving
  • C: Ingest documents from a source –> Index the documents and save to Vector Search –> Evaluate model –> Deploy it using Model Serving
  • D: User submits queries against an LLM –> Ingest documents from a source –> Index the documents and save to Vector Search –> LLM retrieves relevant documents –> LLM generates a response –> Evaluate model –> Deploy it using Model Serving
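The ingest-index-retrieve-generate ordering can be walked through with mocked components. Everything below (the documents, the word-overlap "index", the templated "LLM") is a toy stand-in; a real build would use Vector Search for the index and Model Serving for deployment:

```python
# Toy walk-through of a RAG pipeline: ingest -> index -> retrieve -> generate.

def ingest():                      # 1. ingest documents from a source
    return ["Offside applies only in the attacking half.",
            "A match lasts two 45-minute halves."]

def index(docs):                   # 2. "embed" (word sets) and store in memory
    return [(d, set(d.lower().split())) for d in docs]

def retrieve(idx, query):          # 3-4. user query -> most relevant document
    q = set(query.lower().split())
    return max(idx, key=lambda pair: len(pair[1] & q))[0]

def generate(context, query):      # 5. LLM generates a grounded response (mocked)
    return f"Based on the rules: {context}"

# 6. evaluate the model against a labeled question set, then
# 7. deploy it using Model Serving -- both omitted in this toy sketch.
query = "how long is a match"
answer = generate(retrieve(index(ingest()), query), query)
print(answer)
```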

Question 13

A Generative AI Engineer is creating an agent-based LLM system for their favorite monster truck team. The system can answer text-based questions about the monster truck team, look up event dates via an API call, or query tables on the team’s latest standings.
How could the Generative AI Engineer best design these capabilities into their system?

  • A: Ingest PDF documents about the monster truck team into a vector store and query it in a RAG architecture.
  • B: Write a system prompt for the agent listing available tools and bundle it into an agent system that runs a number of calls to solve a query.
  • C: Instruct the LLM to respond with “RAG”, “API”, or “TABLE” depending on the query, then use text parsing and conditional statements to resolve the query.
  • D: Build a system prompt with all possible event dates and table information in the system prompt. Use a RAG architecture to look up generic text questions and otherwise leverage the information in the system prompt.
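The agent design in option B boils down to registering tools, describing them in a system prompt, and letting a loop dispatch calls. In the sketch below the tool implementations are mocked and the `plan` keyword router is a stub standing in for the LLM's tool-selection step:

```python
# Minimal agent sketch: tools described in a system prompt, dispatched by a
# planner. A real agent would let the LLM choose the tool and iterate.

TOOLS = {
    "lookup_event_date": lambda q: "Next event: July 12",           # API call
    "query_standings":   lambda q: "Team rank: 2nd",                # table query
    "search_docs":       lambda q: "The team was founded in 1999.", # RAG lookup
}

SYSTEM_PROMPT = "You can call these tools: " + ", ".join(TOOLS)

def plan(query):                  # stand-in for the LLM choosing a tool
    if "when" in query.lower():
        return "lookup_event_date"
    if "standing" in query.lower() or "rank" in query.lower():
        return "query_standings"
    return "search_docs"

def agent(query):
    return TOOLS[plan(query)](query)

print(agent("When is the next event?"))
print(agent("What are the standings?"))
```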

Question 14

A Generative AI Engineer has been asked to design an LLM-based application that accomplishes the following business objective: answer employee HR questions using HR PDF documentation.
Which set of high level tasks should the Generative AI Engineer's system perform?

  • A: Calculate averaged embeddings for each HR document, compare embeddings to user query to find the best document. Pass the best document with the user query into an LLM with a large context window to generate a response to the employee.
  • B: Use an LLM to summarize HR documentation. Provide summaries of documentation and user query into an LLM with a large context window to generate a response to the user.
  • C: Create an interaction matrix of historical employee questions and HR documentation. Use ALS to factorize the matrix and create embeddings. Calculate the embeddings of new queries and use them to find the best HR documentation. Use an LLM to generate a response to the employee question based upon the documentation retrieved.
  • D: Split HR documentation into chunks and embed into a vector store. Use the employee question to retrieve best matched chunks of documentation, and use the LLM to generate a response to the employee based upon the documentation retrieved.

Question 15

A Generative AI Engineer at an electronics company just deployed a RAG application for customers to ask questions about products that the company carries. However, they received feedback that the RAG responses often return information about an irrelevant product.
What can the engineer do to improve the relevance of the RAG’s response?

  • A: Assess the quality of the retrieved context
  • B: Implement caching for frequently asked questions
  • C: Use a different LLM to improve the generated response
  • D: Use a different semantic similarity search algorithm

Question 16

A Generative AI Engineer is developing a chatbot designed to assist users with insurance-related queries. The chatbot is built on a large language model (LLM) and is conversational. However, to maintain the chatbot’s focus and to comply with company policy, it must not provide responses to questions about politics. Instead, when presented with political inquiries, the chatbot should respond with a standard message:
“Sorry, I cannot answer that. I am a chatbot that can only answer questions around insurance.”
Which framework type should be implemented to solve this?

  • A: Safety Guardrail
  • B: Security Guardrail
  • C: Contextual Guardrail
  • D: Compliance Guardrail
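This kind of guardrail is typically an input screen placed before the LLM that returns the canned refusal for off-limits topics. In the sketch below the keyword list is a placeholder (production systems usually use a topic-classifier model), and the inline `llm` default is a mock:

```python
# Guardrail sketch: screen the user message before it reaches the LLM and
# return the standard refusal for political topics.

REFUSAL = ("Sorry, I cannot answer that. I am a chatbot that can only "
           "answer questions around insurance.")
BLOCKED_TOPICS = {"election", "politics", "political", "senator", "parliament"}

def guarded_reply(user_message, llm=lambda m: f"[insurance answer to: {m}]"):
    if any(word in user_message.lower() for word in BLOCKED_TOPICS):
        return REFUSAL
    return llm(user_message)

print(guarded_reply("Who should win the election?"))
print(guarded_reply("Does my policy cover hail damage?"))
```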
