AWS Certified AI Practitioner AIF-C01
Question 26
A manufacturing company uses AI to inspect products and find any damage or defects.
Which type of AI application is the company using?
- A: Recommendation system
- B: Natural language processing (NLP)
- C: Computer vision
- D: Image processing
Question 27
A company wants to create an ML model to predict customer satisfaction. The company needs fully automated model tuning.
Which AWS service meets these requirements?
- A: Amazon Personalize
- B: Amazon SageMaker
- C: Amazon Athena
- D: Amazon Comprehend
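For context on option B, here is a minimal sketch of the fully automated model tuning that the SageMaker Python SDK exposes through `HyperparameterTuner`. The role ARN, S3 paths, and the choice of the built-in XGBoost algorithm are placeholders, not part of the question.

```python
# Minimal sketch: SageMaker automatic model tuning (hyperparameter optimization).
# The role ARN, S3 paths, and algorithm choice are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Built-in XGBoost container as an example algorithm for a satisfaction score model.
xgb_image = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")
estimator = Estimator(
    image_uri=xgb_image,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/output/",  # placeholder
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)

# SageMaker explores this search space automatically (Bayesian search by default).
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:rmse",
    objective_type="Minimize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({
    "train": "s3://example-bucket/train/",            # placeholder
    "validation": "s3://example-bucket/validation/",  # placeholder
})
```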
Question 28
Which technique can a company use to lower bias and toxicity in generative AI applications during the post-processing stage of the ML lifecycle?
- A: Human-in-the-loop
- B: Data augmentation
- C: Feature engineering
- D: Adversarial training
Question 29
A bank has fine-tuned a large language model (LLM) to expedite the loan approval process. During an external audit of the model, the company discovered that the model was approving loans at a faster pace for a specific demographic than for other demographics.
How should the bank fix this issue MOST cost-effectively?
- A: Include more diverse training data. Fine-tune the model again by using the new data.
- B: Use Retrieval Augmented Generation (RAG) with the fine-tuned model.
- C: Use AWS Trusted Advisor checks to eliminate bias.
- D: Pre-train a new LLM with more diverse training data.
Question 30
HOTSPOT
A company has developed a large language model (LLM) and wants to make the LLM available to multiple internal teams. The company needs to select the appropriate inference mode for each team.
Select the correct inference mode from the following list for each use case. Each inference mode should be selected one or more times.
Question 31
A company needs to log all requests made to its Amazon Bedrock API. The company must retain the logs securely for 5 years at the lowest possible cost.
Which combination of AWS service and storage class meets these requirements? (Choose two.)
- A: AWS CloudTrail
- B: Amazon CloudWatch
- C: AWS Audit Manager
- D: Amazon S3 Intelligent-Tiering
- E: Amazon S3 Standard
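For context on option A, a minimal sketch of delivering Amazon Bedrock API activity to Amazon S3 through an AWS CloudTrail trail. The bucket name, archive storage class, and retention periods are illustrative assumptions, and the bucket must already grant CloudTrail write access.

```python
# Minimal sketch: capture Bedrock API activity with CloudTrail and retain logs in S3.
# Bucket names, the archive storage class, and retention periods are illustrative.
import boto3

cloudtrail = boto3.client("cloudtrail")
s3 = boto3.client("s3")

cloudtrail.create_trail(
    Name="bedrock-audit-trail",
    S3BucketName="example-audit-log-bucket",  # placeholder
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="bedrock-audit-trail")

# Transition older log objects to a low-cost archive tier and expire them after 5 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-audit-log-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "retain-5-years",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            "Expiration": {"Days": 1825},
        }]
    },
)
```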
Question 32
An ecommerce company wants to improve search engine recommendations by customizing the results for each user of the company’s ecommerce platform.
Which AWS service meets these requirements?
- A: Amazon Personalize
- B: Amazon Kendra
- C: Amazon Rekognition
- D: Amazon Transcribe
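For context on option A, a minimal sketch of requesting user-specific results from a trained Amazon Personalize campaign; the campaign ARN and user ID are placeholders.

```python
# Minimal sketch: fetch personalized item recommendations for a user
# from a trained Amazon Personalize campaign. The ARN and IDs are placeholders.
import boto3

personalize_runtime = boto3.client("personalize-runtime")

response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/example",  # placeholder
    userId="user-42",  # placeholder
    numResults=10,
)
for item in response["itemList"]:
    print(item["itemId"], item.get("score"))
```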
Question 33
A hospital is developing an AI system to assist doctors in diagnosing diseases based on patient records and medical images. To comply with regulations, the sensitive patient data must not leave the country in which it is stored.
Which data governance strategy will ensure compliance and protect patient privacy?
- A: Data residency
- B: Data quality
- C: Data discoverability
- D: Data enrichment
Question 34
A company needs to monitor the performance of its ML systems by using a highly scalable AWS service.
Which AWS service meets these requirements?
- A: Amazon CloudWatch
- B: AWS CloudTrail
- C: AWS Trusted Advisor
- D: AWS Config
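For context on option A, a minimal sketch of publishing a custom ML performance metric to Amazon CloudWatch; the namespace, metric name, and dimension values are illustrative.

```python
# Minimal sketch: publish a custom ML performance metric to Amazon CloudWatch.
# The namespace, metric name, and dimension values are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MLSystems/Inference",  # placeholder namespace
    MetricData=[{
        "MetricName": "ModelLatencyMs",
        "Dimensions": [{"Name": "ModelName", "Value": "customer-churn-v2"}],
        "Value": 87.0,
        "Unit": "Milliseconds",
    }],
)
```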
Question 35
A financial institution is using Amazon Bedrock to develop an AI application. The application is hosted in a VPC. To meet regulatory compliance standards, the VPC must not have access to any internet traffic.
Which AWS service or feature will meet these requirements?
- A: AWS PrivateLink
- B: Amazon Macie
- C: Amazon CloudFront
- D: Internet gateway
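For context on option A, a minimal sketch of creating an interface VPC endpoint (AWS PrivateLink) for the Bedrock runtime so that model calls stay off the public internet. The VPC, subnet, and security group IDs are placeholders, and the service name is assumed to follow the usual com.amazonaws.<region>.bedrock-runtime pattern.

```python
# Minimal sketch: create an interface VPC endpoint (AWS PrivateLink) so the
# application can call the Bedrock runtime without traversing the internet.
# The VPC, subnet, and security group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                      # placeholder
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],             # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],          # placeholder
    PrivateDnsEnabled=True,
)
```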
Question 36
An AI practitioner is developing a prompt for an Amazon Titan model. The model is hosted on Amazon Bedrock. The AI practitioner is using the model to solve numerical reasoning challenges. The AI practitioner adds an instruction to the end of the prompt that asks the model to show its work by explaining its reasoning step by step.
Which prompt engineering technique is the AI practitioner using?
- A: Chain-of-thought prompting
- B: Prompt injection
- C: Few-shot prompting
- D: Prompt templating
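For context on option A, a minimal sketch of a chain-of-thought style prompt sent to an Amazon Titan text model on Amazon Bedrock; the model ID and prompt wording are illustrative.

```python
# Minimal sketch: chain-of-thought style prompt sent to an Amazon Titan text model
# on Amazon Bedrock. The model ID and prompt wording are illustrative.
import boto3, json

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = (
    "A train travels 120 km in 1.5 hours. What is its average speed in km/h?\n"
    "Show your work by explaining your reasoning step by step."
)

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # example Titan model ID
    body=json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.2},
    }),
)
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```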
Question 37
Which AWS service makes foundation models (FMs) available to help users build and scale generative AI applications?
- A: Amazon Q Developer
- B: Amazon Bedrock
- C: Amazon Kendra
- D: Amazon Comprehend
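For context on option B, a minimal sketch that lists the foundation models Amazon Bedrock makes available in an account and Region.

```python
# Minimal sketch: list the foundation models that Amazon Bedrock makes available
# in the current account and Region.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"], model["providerName"])
```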
Question 38
A company is building a mobile app for users who have a visual impairment. The app must be able to hear what users say and provide voice responses.
Which solution will meet these requirements?
- A: Use a deep learning neural network to perform speech recognition.
- B: Build ML models to search for patterns in numeric data.
- C: Use generative AI summarization to generate human-like text.
- D: Build custom models for image classification and recognition.
Question 39
A company wants to enhance response quality for a large language model (LLM) for complex problem-solving tasks. The tasks require detailed reasoning and a step-by-step explanation process.
Which prompt engineering technique meets these requirements?
- A: Few-shot prompting
- B: Zero-shot prompting
- C: Directional stimulus prompting
- D: Chain-of-thought prompting
Question 40
A company wants to keep its foundation model (FM) relevant by using the most recent data. The company wants to implement a model training strategy that includes regular updates to the FM.
Which solution meets these requirements?
- A: Batch learning
- B: Continuous pre-training
- C: Static training
- D: Latent training
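For context on option B, a minimal sketch of starting a continued pre-training job on Amazon Bedrock so the foundation model is periodically refreshed with recent data. The job name, ARNs, S3 URIs, and hyperparameters are placeholders, and valid hyperparameter keys depend on the base model.

```python
# Minimal sketch: start a continued pre-training job on Amazon Bedrock.
# Names, ARNs, and S3 URIs are placeholders; hyperparameter keys vary by base model.
import boto3

bedrock = boto3.client("bedrock")

bedrock.create_model_customization_job(
    jobName="monthly-continued-pretraining",   # placeholder
    customModelName="titan-text-refreshed",    # placeholder
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="CONTINUED_PRE_TRAINING",
    trainingDataConfig={"s3Uri": "s3://example-bucket/recent-data/"},    # placeholder
    outputDataConfig={"s3Uri": "s3://example-bucket/custom-models/"},    # placeholder
    hyperParameters={"epochCount": "1"},
)
```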
Question 41
HOTSPOT
A company wants to develop ML applications to improve business operations and efficiency.
Select the correct ML paradigm from the following list for each use case. Each ML paradigm should be selected one or more times.
Question 42
Which option is a characteristic of AI governance frameworks for building trust and deploying human-centered AI technologies?
- A: Expanding initiatives across business units to create long-term business value
- B: Ensuring alignment with business standards, revenue goals, and stakeholder expectations
- C: Overcoming challenges to drive business transformation and growth
- D: Developing policies and guidelines for data, transparency, responsible AI, and compliance
Question 43
An ecommerce company is using a generative AI chatbot to respond to customer inquiries. The company wants to measure the financial effect of the chatbot on the company’s operations.
Which metric should the company use?
- A: Number of customer inquiries handled
- B: Cost of training AI models
- C: Cost for each customer conversation
- D: Average handle time (AHT)
Question 44
A company wants to group its customers based on the customers’ demographics and buying patterns.
Which algorithm should the company use to meet this requirement?
- A: K-nearest neighbors (k-NN)
- B: K-means
- C: Decision tree
- D: Support vector machine
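For context on option B, a minimal sketch of K-means customer segmentation with scikit-learn; the feature values are made up for illustration.

```python
# Minimal sketch: K-means clustering to segment customers by demographics and
# buying patterns. The feature values are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Columns: age, annual spend, purchase frequency (illustrative data).
customers = np.array([
    [25, 1200, 14],
    [62,  300,  2],
    [34, 4500, 30],
    [58,  450,  3],
    [29, 3900, 27],
    [41,  800,  7],
])

features = StandardScaler().fit_transform(customers)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(features)
print(kmeans.labels_)  # cluster assignment for each customer
```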
Question 45
A company’s large language model (LLM) is experiencing hallucinations.
How can the company decrease hallucinations?
- A: Set up Agents for Amazon Bedrock to supervise the model training.
- B: Use data pre-processing and remove any data that causes hallucinations.
- C: Decrease the temperature inference parameter for the model.
- D: Use a foundation model (FM) that is trained to not hallucinate.
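For context on option C, a minimal sketch of lowering the temperature inference parameter to make responses more deterministic, shown here with the Bedrock Converse API; the model ID and prompt are illustrative.

```python
# Minimal sketch: lowering the temperature inference parameter to make responses
# more deterministic. Uses the Bedrock Converse API; the model ID is an example.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="amazon.titan-text-express-v1",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our refund policy."}]}],
    inferenceConfig={"temperature": 0.1, "topP": 0.9, "maxTokens": 256},
)
print(response["output"]["message"]["content"][0]["text"])
```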
Question 46
A company wants to develop an educational game where users answer questions such as the following: "A jar contains six red, four green, and three yellow marbles. What is the probability of choosing a green marble from the jar?"
Which solution meets these requirements with the LEAST operational overhead?
- A: Use supervised learning to create a regression model that will predict probability.
- B: Use reinforcement learning to train a model to return the probability.
- C: Use code that will calculate probability by using simple rules and computations.
- D: Use unsupervised learning to create a model that will estimate probability density.
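For context on option C, the marble question reduces to simple arithmetic that plain code can answer without any ML model:

```python
# Minimal sketch: the probability question is deterministic, so simple rules and
# arithmetic answer it without training or hosting a model.
red, green, yellow = 6, 4, 3
probability = green / (red + green + yellow)
print(f"P(green) = {green}/{red + green + yellow} = {probability:.4f}")  # 4/13 ≈ 0.3077
```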
Question 47
A company is using a large language model (LLM) on Amazon Bedrock to build a chatbot. The chatbot processes customer support requests. To resolve a request, the customer and the chatbot must interact a few times.
Which solution gives the LLM the ability to use content from previous customer messages?
- A: Turn on model invocation logging to collect messages.
- B: Add messages to the model prompt.
- C: Use Amazon Personalize to save conversation history.
- D: Use Provisioned Throughput for the LLM.
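For context on option B, a minimal sketch of carrying earlier customer messages into the model prompt so the LLM can use the conversation context; the history store and prompt format are illustrative and would be kept per session by the chat application.

```python
# Minimal sketch: prepend prior conversation turns to the prompt before sending it
# to the LLM. The history store and prompt format are illustrative.
history = []  # list of (speaker, text) tuples kept by the chat application

def build_prompt(history, new_message):
    """Build a prompt that includes all previous turns plus the new customer message."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"Customer: {new_message}")
    lines.append("Assistant:")
    return "\n".join(lines)

history.append(("Customer", "My order #1234 arrived damaged."))
history.append(("Assistant", "Sorry to hear that. Would you like a refund or a replacement?"))
print(build_prompt(history, "A replacement, please."))
```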