AWS Certified Solutions Architect - Professional


Question 1

Your company's policies require encryption of sensitive data at rest. You are considering the options for protecting data at rest on an EBS data volume attached to an EC2 instance.
Which of these options would allow you to encrypt your data at rest? (Choose three.)

  • A: Implement third party volume encryption tools
  • B: Implement SSL/TLS for all services running on the server
  • C: Encrypt data inside your applications before storing it on EBS
  • D: Encrypt data using native data encryption drivers at the file system level
  • E: Do nothing as EBS volumes are encrypted by default
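
For context alongside these options, EBS also supports native encryption requested at volume creation; a minimal boto3 sketch (the region, size, and customer managed key alias are assumptions):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Create an encrypted EBS volume; omitting KmsKeyId falls back to the
    # account's default aws/ebs KMS key.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,                        # GiB
        VolumeType="gp3",
        Encrypted=True,
        KmsKeyId="alias/my-ebs-key",     # hypothetical customer managed key
    )
    print(volume["VolumeId"])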

Question 2

Your team has a Tomcat-based Java application that you need to deploy into development, test, and production environments. After some research, you opt to use Elastic Beanstalk because of its tight integration with your developer tools, and RDS because of its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other software teams in your organization want access to that same restored data via their EC2 instances in your VPC.
Which of the following is the optimal setup for persistence and security that meets the above requirements?

  • A: Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
  • B: Create your RDS instance separately and add its IP address to your application's DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC's IP address block.
  • C: Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
  • D: Create your RDS instance separately and pass its DNS name to your app's DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.
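
The security-group pattern in option C, granting database access by referencing a client security group instead of IP ranges, could be sketched with boto3 as follows (all IDs are hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow MySQL traffic to the RDS security group only from members of
    # the client security group (hypothetical IDs).
    ec2.authorize_security_group_ingress(
        GroupId="sg-0rdsdb0000000000",            # RDS instance's group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0clients000000000"}],
        }],
    )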

Question 3

Which of the following correctly describes the initial settings of the default security group?

  • A: Allow no inbound traffic, Allow all outbound traffic and Allow instances associated with this security group to talk to each other
  • B: Allow all inbound traffic, Allow no outbound traffic and Allow instances associated with this security group to talk to each other
  • C: Allow no inbound traffic, Allow all outbound traffic and Does NOT allow instances associated with this security group to talk to each other
  • D: Allow all inbound traffic, Allow all outbound traffic and Does NOT allow instances associated with this security group to talk to each other
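
For reference, the initial rules can be verified directly; a minimal boto3 sketch (the default group's only ingress rule references the group itself, which is what allows members to talk to each other):

    import boto3

    ec2 = boto3.client("ec2")

    # The group named "default" exists in every VPC and cannot be deleted.
    sg = ec2.describe_security_groups(
        Filters=[{"Name": "group-name", "Values": ["default"]}]
    )["SecurityGroups"][0]

    print(sg["IpPermissions"])        # ingress: source is the group itself
    print(sg["IpPermissionsEgress"])  # egress: all traffic to 0.0.0.0/0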

Question 4

A company wants to establish a dedicated connection between its on-premises infrastructure and AWS. The company is setting up a 1 Gbps AWS Direct Connect connection to its account VPC. The architecture includes a transit gateway and a Direct Connect gateway to connect multiple VPCs and the on-premises infrastructure.

The company must connect to VPC resources over a transit VIF by using the Direct Connect connection.

Which combination of steps will meet these requirements? (Choose two.)

  • A: Update the 1 Gbps Direct Connect connection to 10 Gbps.
  • B: Advertise the on-premises network prefixes over the transit VIF.
  • C: Advertise the VPC prefixes from the Direct Connect gateway to the on-premises network over the transit VIF.
  • D: Update the Direct Connect connection's MACsec encryption mode attribute to must_encrypt.
  • E: Associate a MACsec Connection Key Name/Connectivity Association Key (CKN/CAK) pair with the Direct Connect connection.
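
Option C maps to the allowed-prefixes list on the Direct Connect gateway association, which controls the VPC CIDRs advertised to the on-premises network over the transit VIF; a hedged boto3 sketch (IDs and CIDRs are hypothetical):

    import boto3

    dx = boto3.client("directconnect")

    # Associate the transit gateway with the Direct Connect gateway and
    # advertise the VPC prefixes to on premises (hypothetical IDs).
    dx.create_direct_connect_gateway_association(
        directConnectGatewayId="0f1a2b3c-hypothetical-dxgw-id",
        gatewayId="tgw-0abc1234567890def",
        addAllowedPrefixesToDirectConnectGateway=[
            {"cidr": "10.10.0.0/16"},
            {"cidr": "10.20.0.0/16"},
        ],
    )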

Question 5

A company's solutions architect needs to provide secure Remote Desktop connectivity to users for Amazon EC2 Windows instances that are hosted in a VPC. The solution must integrate centralized user management with the company's on-premises Active Directory. Connectivity to the VPC is through the internet. The company has hardware that can be used to establish an AWS Site-to-Site VPN connection.

Which solution will meet these requirements MOST cost-effectively?

  • A: Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active Directory. Establish a trust with the on-premises Active Directory. Deploy an EC2 instance as a bastion host in the VPC. Ensure that the EC2 instance is joined to the domain. Use the bastion host to access the target instances through RDP.
  • B: Configure AWS Single Sign-On to integrate with the on-premises Active Directory by using the AWS Directory Service for Microsoft Active Directory AD Connector. Configure permission sets against user groups for access to AWS Systems Manager. Use Systems Manager Fleet Manager to access the target instances through RDP.
  • C: Implement a VPN between the on-premises environment and the target VPC. Ensure that the target instances are joined to the on-premises Active Directory domain over the VPN connection. Configure RDP access through the VPN. Connect from the company's network to the target instances.
  • D: Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active Directory. Establish a trust with the on-premises Active Directory. Deploy a Remote Desktop Gateway on AWS by using an AWS Quick Start. Ensure that the Remote Desktop Gateway is joined to the domain. Use the Remote Desktop Gateway to access the target instances through RDP.

Question 6

A company uses AWS Organizations to manage its AWS accounts. The company needs a list of all its Amazon EC2 instances that have underutilized CPU or memory usage. The company also needs recommendations for how to downsize these underutilized instances.

Which solution will meet these requirements with the LEAST effort?

  • A: Install a CPU and memory monitoring tool from AWS Marketplace on all the EC2 instances. Store the findings in Amazon S3. Implement a Python script to identify underutilized instances. Reference EC2 instance pricing information for recommendations about downsizing options.
  • B: Install the Amazon CloudWatch agent on all the EC2 instances by using AWS Systems Manager. Retrieve the resource optimization recommendations from AWS Cost Explorer in the organization's management account. Use the recommendations to downsize underutilized instances in all accounts of the organization.
  • C: Install the Amazon CloudWatch agent on all the EC2 instances by using AWS Systems Manager. Retrieve the resource optimization recommendations from AWS Cost Explorer in each account of the organization. Use the recommendations to downsize underutilized instances in all accounts of the organization.
  • D: Install the Amazon CloudWatch agent on all the EC2 instances by using AWS Systems Manager. Create an AWS Lambda function to extract CPU and memory usage from all the EC2 instances. Store the findings as files in Amazon S3. Use Amazon Athena to find underutilized instances. Reference EC2 instance pricing information for recommendations about downsizing options.
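
The recommendations referenced in options B and C can also be retrieved programmatically from the management account; a minimal boto3 sketch (assumes rightsizing recommendations are enabled in Cost Explorer):

    import boto3

    ce = boto3.client("ce")  # Cost Explorer is a global endpoint

    resp = ce.get_rightsizing_recommendation(Service="AmazonEC2")
    for rec in resp["RightsizingRecommendations"]:
        print(rec["CurrentInstance"]["ResourceId"],
              rec["RightsizingType"])   # e.g. MODIFY or TERMINATE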

Question 7

A company uses an on-premises data analytics platform. The system is highly available in a fully redundant configuration across 12 servers in the company's data center.

The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users. Scheduled jobs can take between 20 minutes and 2 hours to finish running and have tight SLAs. The scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5 minutes and have no SLA. The user jobs account for 35% of system usage. During system failures, scheduled jobs must continue to meet SLAs. However, user jobs can be delayed.

A solutions architect needs to move the system to Amazon EC2 instances and adopt a consumption-based model to reduce costs with no long-term commitments. The solution must maintain high availability and must not affect the SLAs.

Which solution will meet these requirements MOST cost-effectively?

  • A: Split the 12 instances across two Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run four instances in each Availability Zone as Spot Instances.
  • B: Split the 12 instances across three Availability Zones in the chosen AWS Region. In one of the Availability Zones, run all four instances as On-Demand Instances with Capacity Reservations. Run the remaining instances as Spot Instances.
  • C: Split the 12 instances across three Availability Zones in the chosen AWS Region. Run two instances in each Availability Zone as On-Demand Instances with a Savings Plan. Run two instances in each Availability Zone as Spot Instances.
  • D: Split the 12 instances across three Availability Zones in the chosen AWS Region. Run three instances in each Availability Zone as On-Demand Instances with Capacity Reservations. Run one instance in each Availability Zone as a Spot Instance.

Question 8

A company operates a fleet of servers on premises and operates a fleet of Amazon EC2 instances in its organization in AWS Organizations. The company's AWS accounts contain hundreds of VPCs. The company wants to connect its AWS accounts to its on-premises network. AWS Site-to-Site VPN connections are already established to a single AWS account. The company wants to control which VPCs can communicate with other VPCs.

Which combination of steps will achieve this level of control with the LEAST operational effort? (Choose three.)

  • A: Create a transit gateway in an AWS account. Share the transit gateway across accounts by using AWS Resource Access Manager (AWS RAM).
  • B: Configure attachments to all VPCs and VPNs.
  • C: Set up transit gateway route tables. Associate the VPCs and VPNs with the route tables.
  • D: Configure VPC peering between the VPCs.
  • E: Configure attachments between the VPCs and VPNs.
  • F: Set up route tables on the VPCs and VPNs.
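
Options A and C together: create the transit gateway, share it across the organization with AWS RAM, and segment traffic with transit gateway route tables. A hedged boto3 sketch (the organization ARN is hypothetical):

    import boto3

    ec2 = boto3.client("ec2")
    ram = boto3.client("ram")

    tgw = ec2.create_transit_gateway(
        Description="org-wide hub")["TransitGateway"]

    # Share the transit gateway with the whole organization.
    ram.create_resource_share(
        name="tgw-share",
        resourceArns=[tgw["TransitGatewayArn"]],
        principals=[
            "arn:aws:organizations::111122223333:organization/o-example"],
    )

    # Segment traffic with a dedicated TGW route table.
    ec2.create_transit_gateway_route_table(
        TransitGatewayId=tgw["TransitGatewayId"])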

Question 9

A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The company measured the application load and configured the RCUs and WCUs on the DynamoDB table to match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The application load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table.

A solutions architect needs to implement a solution to minimize the cost of the table.

Which solution will meet these requirements?

  • A: Use AWS Application Auto Scaling to increase capacity during the peak period. Purchase reserved RCUs and WCUs to match the average load.
  • B: Configure on-demand capacity mode for the table.
  • C: Configure DynamoDB Accelerator (DAX) in front of the table. Reduce the provisioned read capacity to match the new peak load on the table.
  • D: Configure DynamoDB Accelerator (DAX) in front of the table. Configure on-demand capacity mode for the table.
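
Option A pairs reserved capacity for the steady average load with Application Auto Scaling for the weekly peak; the scaling half could look roughly like this (table name, capacity limits, and target utilization are assumptions):

    import boto3

    aas = boto3.client("application-autoscaling")

    # Scale write capacity between the average load and the weekly peak.
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/orders",               # hypothetical table
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=500,
        MaxCapacity=1000,
    )
    aas.put_scaling_policy(
        PolicyName="orders-wcu-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/orders",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )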

Question 10

A solutions architect is auditing the security setup of an AWS Lambda function for a company. The Lambda function retrieves the latest changes from an Amazon Aurora database. The Lambda function and the database run in the same VPC. Lambda environment variables provide the database credentials to the Lambda function.

The Lambda function aggregates data and makes the data available in an Amazon S3 bucket that is configured for server-side encryption with AWS KMS managed encryption keys (SSE-KMS). The data must not travel across the internet. If any database credentials become compromised, the company needs a solution that minimizes the impact of the compromise.

What should the solutions architect recommend to meet these requirements?

  • A: Enable IAM database authentication on the Aurora DB cluster. Change the IAM role for the Lambda function to allow the function to access the database by using IAM database authentication. Deploy a gateway VPC endpoint for Amazon S3 in the VPC.
  • B: Enable IAM database authentication on the Aurora DB cluster. Change the IAM role for the Lambda function to allow the function to access the database by using IAM database authentication. Enforce HTTPS on the connection to Amazon S3 during data transfers.
  • C: Save the database credentials in AWS Systems Manager Parameter Store. Set up password rotation on the credentials in Parameter Store. Change the IAM role for the Lambda function to allow the function to access Parameter Store. Modify the Lambda function to retrieve the credentials from Parameter Store. Deploy a gateway VPC endpoint for Amazon S3 in the VPC.
  • D: Save the database credentials in AWS Secrets Manager. Set up password rotation on the credentials in Secrets Manager. Change the IAM role for the Lambda function to allow the function to access Secrets Manager. Modify the Lambda function to retrieve the credentials from Secrets Manager. Enforce HTTPS on the connection to Amazon S3 during data transfers.
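
Option A combines IAM database authentication (a short-lived token instead of a stored password) with a gateway VPC endpoint so S3 traffic stays on the AWS network. A hedged boto3 sketch (hostname, user, and IDs are hypothetical):

    import boto3

    rds = boto3.client("rds")
    ec2 = boto3.client("ec2")

    # A 15-minute authentication token replaces a static database password.
    token = rds.generate_db_auth_token(
        DBHostname="mydb.cluster-abc.us-east-1.rds.amazonaws.com",
        Port=3306,
        DBUsername="lambda_app",
    )

    # A gateway endpoint keeps S3 traffic off the internet.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234567890def",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0abc1234567890def"],
    )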

Question 11

A company is developing and hosting several projects in the AWS Cloud. The projects are developed across multiple AWS accounts under the same organization in AWS Organizations. The company requires the cost for cloud infrastructure to be allocated to the owning project. The team responsible for all of the AWS accounts has discovered that several Amazon EC2 instances are lacking the Project tag used for cost allocation.

Which actions should a solutions architect take to resolve the problem and prevent it from happening in the future? (Choose three.)

  • A: Create an AWS Config rule in each account to find resources with missing tags.
  • B: Create an SCP in the organization with a deny action for ec2:RunInstances if the Project tag is missing.
  • C: Use Amazon Inspector in the organization to find resources with missing tags.
  • D: Create an IAM policy in each account with a deny action for ec2:RunInstances if the Project tag is missing.
  • E: Create an AWS Config aggregator for the organization to collect a list of EC2 instances with the missing Project tag.
  • F: Use AWS Security Hub to aggregate a list of EC2 instances with the missing Project tag.
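
The SCP in option B hinges on the aws:RequestTag condition key; a minimal sketch of creating such a policy (the policy name and exact resource scoping are assumptions):

    import json
    import boto3

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            # Null = true means the Project tag is absent from the request.
            "Condition": {"Null": {"aws:RequestTag/Project": "true"}},
        }],
    }

    orgs = boto3.client("organizations")
    orgs.create_policy(
        Name="require-project-tag",
        Description="Deny EC2 launches without a Project tag",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )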

Question 12

A company that uses AWS Organizations is creating several new AWS accounts. The company is setting up controls to properly allocate AWS costs to business units. The company must implement a solution to ensure that all resources include a tag that has a key of costcenter and a value from a predefined list of business units. The solution must send a notification each time a resource tag does not meet these criteria. The solution must not prevent the creation of resources.

Which solution will meet these requirements with the LEAST operational overhead?

  • A: Create an IAM policy for all actions that create AWS resources. Add a condition to the policy that aws:RequestTag/costcenter must exist and must contain a valid business unit value. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors IAM service events and Amazon EC2 service events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
  • B: Create an IAM policy for all actions that create AWS resources. Add a condition to the policy that aws:ResourceTag/costcenter must exist and must contain a valid business unit value. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors IAM service events and Amazon EC2 service events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
  • C: Create an organization tag policy that ensures that all resources have the costcenter tag with a valid business unit value. Do not select the option to prevent operations when tags are noncompliant. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors all events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
  • D: Create an organization tag policy that ensures that all resources have the costcenter tag with a valid business unit value. Select the option to prevent operations when tags are noncompliant. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that monitors all events for noncompliant tag policies. Configure the rule to send notifications through Amazon Simple Notification Service (Amazon SNS).
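
The non-enforcing tag policy in option C could be created similarly; a sketch assuming the valid business units are BU1 and BU2:

    import json
    import boto3

    tag_policy = {
        "tags": {
            "costcenter": {
                "tag_key": {"@@assign": "costcenter"},
                "tag_value": {"@@assign": ["BU1", "BU2"]},  # assumed values
                # No "enforced_for" section, so noncompliant tags are
                # reported but resource creation is never blocked.
            }
        }
    }

    orgs = boto3.client("organizations")
    orgs.create_policy(
        Name="costcenter-tagging",
        Description="Report resources missing a valid costcenter tag",
        Type="TAG_POLICY",
        Content=json.dumps(tag_policy),
    )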

Question 13

An international delivery company hosts a delivery management system on AWS. Drivers use the system to upload confirmation of delivery. Confirmation includes the recipient's signature or a photo of the package with the recipient. The driver's handheld device uploads signatures and photos through FTP to a single Amazon EC2 instance. Each handheld device saves a file in a directory based on the signed-in user, and the file name matches the delivery number. The EC2 instance then adds metadata to the file after querying a central database to pull delivery information. The file is then placed in Amazon S3 for archiving.

As the company expands, drivers report that the system is rejecting connections. The FTP server is having problems because of dropped connections and memory issues. In response to these problems, a system engineer schedules a cron task to reboot the EC2 instance every 30 minutes. The billing team reports that files are not always in the archive and that the central system is not always updated.

A solutions architect needs to design a solution that maximizes scalability to ensure that the archive always receives the files and that systems are always updated. The handheld devices cannot be modified, so the company cannot deploy a new application.

Which solution will meet these requirements?

  • A: Create an AMI of the existing EC2 instance. Create an Auto Scaling group of EC2 instances behind an Application Load Balancer. Configure the Auto Scaling group to have a minimum of three instances.
  • B: Use AWS Transfer Family to create an FTP server that places the files in Amazon Elastic File System (Amazon EFS). Mount the EFS volume to the existing EC2 instance. Point the EC2 instance to the new path for file processing.
  • C: Use AWS Transfer Family to create an FTP server that places the files in Amazon S3. Use an S3 event notification through Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.
  • D: Update the handheld devices to place the files directly in Amazon S3. Use an S3 event notification through Amazon Simple Queue Service (Amazon SQS) to invoke an AWS Lambda function. Configure the Lambda function to add the metadata and update the delivery system.
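
A minimal sketch of the Lambda function described in option C, assuming the SNS-wrapped S3 event, a central DynamoDB table named deliveries, and delivery-number file names; metadata is attached by copying the object onto itself:

    import json
    from urllib.parse import unquote_plus

    import boto3

    s3 = boto3.client("s3")
    deliveries = boto3.resource("dynamodb").Table("deliveries")  # assumed

    def handler(event, context):
        # SNS wraps the S3 event notification in its own envelope.
        s3_event = json.loads(event["Records"][0]["Sns"]["Message"])
        record = s3_event["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = unquote_plus(record["object"]["key"])

        # File name matches the delivery number (per the scenario).
        delivery_id = key.rsplit("/", 1)[-1].split(".")[0]
        item = deliveries.get_item(
            Key={"delivery_id": delivery_id}).get("Item", {})

        # Re-copy the object onto itself to attach the metadata.
        s3.copy_object(
            Bucket=bucket, Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            Metadata={"recipient": item.get("recipient", ""),
                      "delivery-id": delivery_id},
            MetadataDirective="REPLACE",
        )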

Question 14

How can an EBS volume that is currently attached to an EC2 instance be migrated from one Availability Zone to another?

  • A: Detach the volume and attach it to another EC2 instance in the other AZ.
  • B: Simply create a new volume in the other AZ and specify the original volume as the source.
  • C: Create a snapshot of the volume, and create a new volume from the snapshot in the other AZ.
  • D: Detach the volume, then use the ec2-migrate-volume command to move it to another AZ.
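
Option C's snapshot-based move, as a boto3 sketch (volume ID and AZs are hypothetical); snapshots are regional, so the restored volume can be placed in any Availability Zone:

    import boto3

    ec2 = boto3.client("ec2")

    snap = ec2.create_snapshot(
        VolumeId="vol-0abc1234567890def",       # hypothetical source volume
        Description="migrate to us-east-1b",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    new_vol = ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-east-1b",          # target AZ
    )
    print(new_vol["VolumeId"])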

Question 15

A company needs to deploy its document storage application across two AWS Regions. The company is storing PDF documents that have an average file size of 512 KiB and a minimum file size of 200 KiB. The company needs protection for accidental document overwrites in the primary Region. The secondary Region must have cost-optimized storage. The company needs a solution that provides an SLA of 99.99% that files will be replicated to the secondary Region within 15 minutes.

Which solution will meet these requirements?

  • A: Deploy an Amazon FSx cluster for multiple application hosts to mount in the primary Region. Configure a second Amazon FSx deployment in the secondary Region. Configure replication from the Amazon FSx cluster in the primary Region to the Amazon FSx deployment in the secondary Region.
  • B: Deploy two Amazon S3 buckets, one in each Region. Enable S3 Versioning for each bucket. Enable S3 Replication Time Control (S3 RTC) to replicate objects to the secondary Region. Specify S3 Glacier Deep Archive as the storage class in the secondary Region.
  • C: Deploy two Amazon S3 buckets, one in each Region. Enable S3 Versioning for the bucket in the primary Region. Set up S3 Cross-Region Replication (CRR) from the primary Region to the secondary Region. Create an S3 event notification on the secondary bucket to invoke an AWS Lambda function that reviews each replicated object and specifies S3 Glacier Deep Archive as the storage class in the secondary Region.
  • D: Deploy an Amazon FSx multi-Region cluster. Configure the multi-Region cluster with object versioning. Mount the file system as ZFS with versioning support. Activate S3 archiving from Amazon FSx.
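
The replication rule described in option B, as a hedged boto3 sketch (bucket names and role ARN are hypothetical); note that S3 RTC requires replication metrics to be enabled:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="docs-primary",                           # hypothetical
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/replication-role",
            "Rules": [{
                "ID": "rtc-to-secondary",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::docs-secondary",
                    "StorageClass": "DEEP_ARCHIVE",
                    # RTC: 99.99% of objects replicated within 15 minutes.
                    "ReplicationTime": {"Status": "Enabled",
                                        "Time": {"Minutes": 15}},
                    "Metrics": {"Status": "Enabled",
                                "EventThreshold": {"Minutes": 15}},
                },
            }],
        },
    )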

Question 16

A company has an application that is deployed on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are part of an Auto Scaling group. The application has unpredictable workloads and frequently scales out and in. The company's development team wants to analyze application logs to find ways to improve the application's performance. However, the logs are no longer available after instances scale in.

Which solution will give the development team the ability to view the application logs after a scale-in event?

  • A: Enable access logs for the ALB. Store the logs in an Amazon S3 bucket.
  • B: Configure the EC2 instances to publish logs to Amazon CloudWatch Logs by using the unified CloudWatch agent.
  • C: Modify the Auto Scaling group to use a step scaling policy.
  • D: Instrument the application with AWS X-Ray tracing.
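
For option B, the unified CloudWatch agent ships log files off the instance before scale-in can destroy them; a sketch of the relevant logs section of the agent configuration, expressed as the Python dict that would be serialized to the agent's JSON config file (the path and names are assumptions):

    import json

    # Ship the application log file to a log group that outlives the instance.
    agent_config = {
        "logs": {
            "logs_collected": {
                "files": {
                    "collect_list": [{
                        "file_path": "/var/log/app/application.log",
                        "log_group_name": "/myapp/application",
                        "log_stream_name": "{instance_id}",
                    }]
                }
            }
        }
    }

    print(json.dumps(agent_config, indent=2))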

Question 17

A company is deploying a new cluster for big data analytics on AWS. The cluster will run across many Linux Amazon EC2 instances that are spread across multiple Availability Zones.

All of the nodes in the cluster must have read and write access to common underlying file storage. The file storage must be highly available, must be resilient, must be compatible with the Portable Operating System Interface (POSIX), and must accommodate high levels of throughput.

Which storage solution will meet these requirements?

  • A: Provision an AWS Storage Gateway file gateway NFS file share that is attached to an Amazon S3 bucket. Mount the NFS file share on each EC2 instance in the cluster.
  • B: Provision a new Amazon Elastic File System (Amazon EFS) file system that uses General Purpose performance mode. Mount the EFS file system on each EC2 instance in the cluster.
  • C: Provision a new Amazon Elastic Block Store (Amazon EBS) volume that uses the io2 volume type. Attach the EBS volume to all of the EC2 instances in the cluster.
  • D: Provision a new Amazon Elastic File System (Amazon EFS) file system that uses Max I/O performance mode. Mount the EFS file system on each EC2 instance in the cluster.
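
The file system in option D, as a boto3 sketch; Max I/O mode trades slightly higher per-operation latency for higher aggregate throughput across many concurrently connected instances (the throughput mode shown is an assumption):

    import boto3

    efs = boto3.client("efs")

    fs = efs.create_file_system(
        CreationToken="analytics-cluster-fs",   # idempotency token
        PerformanceMode="maxIO",                # vs. generalPurpose
        ThroughputMode="bursting",              # assumed; provisioned works too
        Encrypted=True,
    )
    print(fs["FileSystemId"])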

Question 18

A company had a third-party audit of its AWS environment. The auditor identified secrets in developer documentation and found secrets that were hardcoded into AWS CloudFormation templates throughout the environment. The auditor also identified security groups that allowed inbound traffic from the internet and outbound traffic to all destinations on the internet.

A solutions architect must design a solution that will encrypt all secrets and rotate the secrets every 90 days. Additionally, the solutions architect must configure the security groups to prevent resources from being accessible from the internet.

Which solution will meet these requirements?

  • A: Use AWS Secrets Manager to create, store, and access secrets. Create new secrets in AWS CloudFormation by using the AWS::SecretsManager::Secret resource type. Reference the secrets in other templates by using Secrets Manager dynamic references. Configure automatic rotation in Secrets Manager to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that identifies all security groups that allow inbound or outbound communications for any protocols to 0.0.0.0/0. Whenever the policy flags a security group in violation, remove the noncompliant rule from security groups.
  • B: Use AWS Systems Manager Parameter Store to create, store, and access secrets. Create new Parameter Store items in AWS CloudFormation by using the AWS::SSM::Parameter resource type. Access these items by using the AWS CLI or AWS APIs. Configure automatic rotation in Parameter Store to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that identifies all security groups that allow inbound or outbound communications for any protocols to 0.0.0.0/0. Whenever the policy flags a security group in violation, remove the noncompliant rule from security groups.
  • C: Use AWS Secrets Manager to create, store, and access secrets. Create new secrets in AWS CloudFormation by using the AWS::SecretsManager::Secret resource type. Reference the secrets in other templates by using Secrets Manager dynamic references. Configure automatic rotation in Secrets Manager to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that enforces a requirement for all security groups to explicitly deny inbound and outbound communications for all protocols to 0.0.0.0/0.
  • D: Use AWS Systems Manager Parameter Store to create, store, and access secrets. Create new Parameter Store items in AWS CloudFormation by using the AWS::SSM::Parameter resource type. Reference the items in other templates by using Systems Manager dynamic references. Configure automatic rotation in Parameter Store to rotate the secrets every 90 days. Use AWS Firewall Manager to create a policy that enforces a requirement for all security groups to explicitly deny inbound and outbound communications for all protocols to 0.0.0.0/0.
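
The 90-day rotation in options A and C maps to a rotation rule on each secret; a minimal boto3 sketch (the secret name and rotation Lambda ARN are hypothetical). Other CloudFormation templates would then read the secret with a dynamic reference such as {{resolve:secretsmanager:db-credentials:SecretString:password}}.

    import boto3

    sm = boto3.client("secretsmanager")

    sm.rotate_secret(
        SecretId="db-credentials",               # hypothetical secret
        RotationLambdaARN=("arn:aws:lambda:us-east-1:"
                           "111122223333:function:rotate-db-secret"),
        RotationRules={"AutomaticallyAfterDays": 90},
    )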

Question 19

A company is running an application on Amazon EC2 instances in the AWS Cloud. The application is using a MongoDB database with a replica set as its data tier. The MongoDB database is installed on systems in the company's on-premises data center and is accessible through an AWS Direct Connect connection to the data center environment.

A solutions architect must migrate the on-premises MongoDB database to Amazon DocumentDB (with MongoDB compatibility).

Which strategy should the solutions architect choose to perform this migration?

  • A: Create a fleet of EC2 instances. Install MongoDB Community Edition on the EC2 instances, and create a database. Configure continuous synchronous replication with the database that is running in the on-premises data center.
  • B: Create an AWS Database Migration Service (AWS DMS) replication instance. Create a source endpoint for the on-premises MongoDB database by using change data capture (CDC). Create a target endpoint for the Amazon DocumentDB database. Create and run a DMS migration task.
  • C: Create a data migration pipeline by using AWS Data Pipeline. Define data nodes for the on-premises MongoDB database and the Amazon DocumentDB database. Create a scheduled task to run the data pipeline.
  • D: Create a source endpoint for the on-premises MongoDB database by using AWS Glue crawlers. Configure continuous asynchronous replication between the MongoDB database and the Amazon DocumentDB database.
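
The migration task in option B, as a hedged boto3 sketch; the full-load-and-cdc migration type performs the initial copy and then streams ongoing changes until cutover (all ARNs are hypothetical):

    import json
    import boto3

    dms = boto3.client("dms")

    dms.create_replication_task(
        ReplicationTaskIdentifier="mongo-to-docdb",
        SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:src",
        TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:tgt",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:inst",
        MigrationType="full-load-and-cdc",   # initial copy + ongoing changes
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )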

Question 20

A company is deploying a new API to AWS. The API uses Amazon API Gateway with a Regional API endpoint and an AWS Lambda function for hosting. The API retrieves data from an external vendor API, stores data in an Amazon DynamoDB global table, and retrieves data from the DynamoDB global table. The API key for the vendor's API is stored in AWS Secrets Manager and is encrypted with a customer managed key in AWS Key Management Service (AWS KMS). The company has deployed its own API into a single AWS Region.

A solutions architect needs to change the API components of the company's API to ensure that the components can run across multiple Regions in an active-active configuration.

Which combination of changes will meet this requirement with the LEAST operational overhead? (Choose three.)

  • A: Deploy the API to multiple Regions. Configure Amazon Route 53 with custom domain names that route traffic to each Regional API endpoint. Implement a Route 53 multivalue answer routing policy.
  • B: Create a new KMS multi-Region customer managed key. Create a new KMS customer managed replica key in each in-scope Region.
  • C: Replicate the existing Secrets Manager secret to other Regions. For each in-scope Region's replicated secret, select the appropriate KMS key.
  • D: Create a new AWS managed KMS key in each in-scope Region. Convert an existing key to a multi-Region key. Use the multi-Region key in other Regions.
  • E: Create a new Secrets Manager secret in each in-scope Region. Copy the secret value from the existing Region to the new secret in each in-scope Region.
  • F: Modify the deployment process for the Lambda function to repeat the deployment across in-scope Regions. Turn on the multi-Region option for the existing API. Select the Lambda function that is deployed in each Region as the backend for the multi-Region API.
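
Options B and C in sequence: create a multi-Region primary key, replicate it to each in-scope Region, then replicate the secret using the replica key. A hedged boto3 sketch (the Regions and secret name are assumptions):

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")
    sm = boto3.client("secretsmanager", region_name="us-east-1")

    key = kms.create_key(MultiRegion=True,
                         Description="vendor API key encryption")["KeyMetadata"]
    replica = kms.replicate_key(KeyId=key["KeyId"],
                                ReplicaRegion="eu-west-1")["ReplicaKeyMetadata"]

    # Replicate the secret and encrypt the replica with the replica key.
    sm.replicate_secret_to_regions(
        SecretId="vendor-api-key",              # hypothetical secret
        AddReplicaRegions=[{"Region": "eu-west-1",
                            "KmsKeyId": replica["Arn"]}],
    )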

Question 21

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely.

A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs is growing. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.

Which solution will meet these requirements with the LARGEST performance improvement?

  • A: Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
  • B: Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
  • C: Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
  • D: Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.
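
Option C's settings are specified when the flow log is created; a hedged boto3 sketch (the VPC ID and bucket are hypothetical). An existing flow log cannot be converted in place, so a new one would replace the text-format log:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_flow_logs(
        ResourceType="VPC",
        ResourceIds=["vpc-0abc1234567890def"],   # hypothetical VPC
        TrafficType="ALL",
        LogDestinationType="s3",
        LogDestination="arn:aws:s3:::central-flow-logs/parquet/",
        DestinationOptions={
            "FileFormat": "parquet",        # columnar: faster Athena scans
            "PerHourPartition": True,       # hourly partitions
            "HiveCompatiblePartitions": True,
        },
    )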

Question 22

A company's solutions architect is managing a learning platform that supports more than 1 million students. The company's business reporting team is experiencing slow performance while extracting large datasets from the database. The learning application is based on PHP and runs on Amazon EC2 instances that are in an Amazon EC2 Auto Scaling group behind an Application Load Balancer (ALB). Application data is stored in an Amazon S3 bucket and in an Amazon RDS for MySQL database. The ALB is the origin of an Amazon CloudFront distribution.

The solutions architect observes that slow read operations for SELECT queries are affecting the RDS for MySQL DB instance's CPU utilization. The solutions architect must find a scalable solution to improve the slow website performance with near-zero downtime. The solution also must provide automatic failover with no data loss.

Which solution will meet these requirements?

  • A: Create an incremental database backup by using Percona XtraBackup. Compress the backup files. Synchronize the backup files to Amazon S3. Restore the backup files from Amazon S3 to Amazon Aurora MySQL. Direct the application endpoint to the new Aurora DB instance.
  • B: Convert the DB instance to a Multi-AZ deployment. Set the query_cache_type parameter on the database to zero. Increase the CloudFront caching TTL to reduce application server CPU utilization.
  • C: Create an Amazon Aurora read replica from the DB instance. Wait until the read replica is synchronized with the source DB instance. Promote the read replica to a standalone DB cluster. Direct the application endpoint to the new Aurora DB instance.
  • D: Create a read replica cluster on the DB instance. Use a Multi-AZ deployment. Synchronize the read replica with the primary DB instance. Promote the read replica as the primary DB instance.
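
Option C relies on the Aurora read replica feature for RDS for MySQL; a rough sketch of the two calls involved, under the assumption that promotion happens only after replica lag reaches zero (identifiers are hypothetical):

    import boto3

    rds = boto3.client("rds")

    # Create an Aurora MySQL cluster that replicates from the RDS for MySQL
    # instance (hypothetical identifiers).
    rds.create_db_cluster(
        DBClusterIdentifier="learning-aurora",
        Engine="aurora-mysql",
        ReplicationSourceIdentifier=(
            "arn:aws:rds:us-east-1:111122223333:db:learning-mysql"),
    )

    # Once replica lag is zero, detach it as a standalone cluster.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier="learning-aurora")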

Question 23

A company is using IoT devices on its manufacturing equipment. Data from the devices travels to the AWS Cloud through a connection to AWS IoT Core. An Amazon Kinesis data stream sends the data from AWS IoT Core to the company's processing application. The processing application stores data in Amazon S3.

A new requirement states that the company also must send the raw data to a third-party system by using an HTTP API.

Which solution will meet these requirements with the LEAST amount of development work?

  • A: Create a custom AWS Lambda function to consume records from the Kinesis data stream. Configure the Lambda function to call the third-party HTTP API.
  • B: Create an S3 event notification with Amazon EventBridge (Amazon CloudWatch Events) as the event destination. Create an EventBridge (CloudWatch Events) API destination for the third-party HTTP API.
  • C: Create an Amazon Kinesis Data Firehose delivery stream. Configure an HTTP endpoint destination that targets the third-party HTTP API. Configure the Kinesis data stream to send data to the Kinesis Data Firehose delivery stream.
  • D: Create an S3 event notification with an Amazon Simple Queue Service (Amazon SQS) queue as the event destination. Configure the SQS queue to invoke a custom AWS Lambda function. Configure the Lambda function to call the third-party HTTP API.
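
Option C as a hedged boto3 sketch: a Kinesis Data Firehose delivery stream that consumes the existing data stream and delivers to the third-party HTTP endpoint, with failed records backed up to S3 (the ARNs, URL, and bucket are hypothetical):

    import boto3

    firehose = boto3.client("firehose")

    firehose.create_delivery_stream(
        DeliveryStreamName="iot-to-partner",
        DeliveryStreamType="KinesisStreamAsSource",
        KinesisStreamSourceConfiguration={
            "KinesisStreamARN": ("arn:aws:kinesis:us-east-1:"
                                 "111122223333:stream/iot-raw"),
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-read",
        },
        HttpEndpointDestinationConfiguration={
            "EndpointConfiguration": {
                "Url": "https://partner.example.com/ingest"},
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-deliver",
            "S3Configuration": {     # failed records land here
                "RoleARN": "arn:aws:iam::111122223333:role/firehose-deliver",
                "BucketARN": "arn:aws:s3:::iot-delivery-backup",
            },
        },
    )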

Question 24

A solutions architect is deploying a web application that consists of a web tier, an application tier, and a database tier. The infrastructure must be highly available across two Availability Zones. The solution must minimize single points of failure and must be resilient.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

  • A: Deploy an Application Load Balancer (ALB) that is mapped to a public subnet in each Availability Zone for the web tier. Deploy Amazon EC2 instances as web servers in each of the private subnets. Configure the web server instances as the target group for the ALB. Use Amazon EC2 Auto Scaling for the web server instances.
  • B: Deploy an Application Load Balancer (ALB) that is mapped to a public subnet in each Availability Zone for the web tier. Deploy Amazon EC2 instances as web servers in each of the public subnets. Configure the web server instances as the target group for the ALB. Use Amazon EC2 Auto Scaling for the web server instances.
  • C: Deploy a new Application Load Balancer (ALB) to a private subnet in each Availability Zone for the application tier. Deploy Amazon EC2 instances as application servers in each of the private subnets. Configure the application server instances as targets for the new ALB. Configure the web server instances to forward traffic to the new ALB. Use Amazon EC2 Auto Scaling for the application server instances.
  • D: Deploy a new Application Load Balancer (ALB) to a private subnet in each Availability Zone for the application tier. Deploy Amazon EC2 instances as application servers in each of the private subnets. Configure the web server instances to forward traffic to the application server instances. Use Amazon EC2 Auto Scaling for the application server instances.
  • E: Deploy an Amazon RDS Multi-AZ DB instance. Configure the application to target the DB instance.
  • F: Deploy an Amazon RDS Single-AZ DB instance with a read replica in another Availability Zone. Configure the application to target the primary DB instance.

Question 25

After launching an instance that you intend to serve as a NAT (Network Address Translation) device in a public subnet, you modify your route tables so that the NAT device is the target of internet-bound traffic from your private subnet. When you try to make an outbound connection to the internet from an instance in the private subnet, you are not successful.
Which of the following steps could resolve the issue?

  • A: Disabling the Source/Destination Check attribute on the NAT instance
  • B: Attaching an Elastic IP address to the instance in the private subnet
  • C: Attaching a second Elastic Network Interface (ENI) to the NAT instance, and placing it in the private subnet
  • D: Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet
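
Option A is a single attribute change; a NAT instance forwards packets that it neither originated nor is the destination for, which the source/destination check would otherwise drop (the instance ID is hypothetical):

    import boto3

    ec2 = boto3.client("ec2")

    # NAT instances forward traffic on behalf of other hosts, so the
    # source/destination check must be disabled.
    ec2.modify_instance_attribute(
        InstanceId="i-0abc1234567890def",       # hypothetical NAT instance
        SourceDestCheck={"Value": False},
    )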