AWS Certified Database - Specialty
Question 1
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a "could not connect to server: Connection times out" error message to Amazon CloudWatch Logs.
What is the cause of this error?
- A: The user name and password the application is using are incorrect.
- B: The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
- C: The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
- D: The user name and password are correct, but the user is not authorized to use the DB instance.
Question 2
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle
DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?
- A: In the same Region and VPC of the source DB instance
- B: In the same Region and VPC as the target DB instance
- C: In the same VPC and Availability Zone as the target DB instance
- D: In the same VPC and Availability Zone as the source DB instance
Question 3
A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. In the Amazon RDS Performance Insights dashboard, the database load chart shows average active sessions frequently above the line that denotes maximum CPU capacity, and the wait states show that most wait events are IO:XactSync.
What should the company do to resolve these performance issues?
- A: Add an Aurora Replica to scale the read traffic.
- B: Scale up the DB instance class.
- C: Modify applications to commit transactions in batches.
- D: Modify applications to avoid conflicts by taking locks.
Question 4
A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2, which belongs to a different development team in the same department. The networking team confirmed that the routing between the VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.
What is likely causing the timeouts?
- A: The database is deployed in a VPC that is in a different Region.
- B: The database is deployed in a VPC that is in a different Availability Zone.
- C: The database is deployed with misconfigured security groups.
- D: The database is deployed with the wrong client connect timeout configuration.
Question 5
A company has a production environment running on Amazon RDS for SQL Server with an in-house web application as the front end. During the last application maintenance window, new functionality was added to the web application to enhance the reporting capabilities for management. Since the update, the application is slow to respond to some reporting queries.
How should the company identify the source of the problem?
- A: Install and configure Amazon CloudWatch Application Insights for Microsoft .NET and Microsoft SQL Server. Use a CloudWatch dashboard to identify the root cause.
- B: Enable RDS Performance Insights and determine which query is creating the problem. Request changes to the query to address the problem.
- C: Use AWS X-Ray deployed with Amazon RDS to track query system traces.
- D: Create a support request and work with AWS Support to identify the source of the issue.
Question 6
An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.
Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?
- A: Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
- B: Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.
- C: Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
- D: Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.
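The composite-key-plus-GSI schema in option C can be sketched as a DynamoDB CreateTable request. This is an illustrative sketch: the table, index, and attribute names (PlantSensorReadings, FaultyByPlant, PlantSensorId, MeasurementTime, PlantId, Fault) are assumptions, not given in the question; in practice the dict would be passed to `boto3.client("dynamodb").create_table(**create_table_params)`.

```python
# Sketch of the CreateTable request for option C (names are hypothetical).
create_table_params = {
    "TableName": "PlantSensorReadings",
    "AttributeDefinitions": [
        {"AttributeName": "PlantSensorId", "AttributeType": "S"},    # plant id + sensor id composite
        {"AttributeName": "MeasurementTime", "AttributeType": "S"},  # timestamp with millisecond precision
        {"AttributeName": "PlantId", "AttributeType": "S"},
        {"AttributeName": "Fault", "AttributeType": "S"},            # present only on faulty readings
    ],
    "KeySchema": [
        {"AttributeName": "PlantSensorId", "KeyType": "HASH"},
        {"AttributeName": "MeasurementTime", "KeyType": "RANGE"},
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "FaultyByPlant",
            "KeySchema": [
                {"AttributeName": "PlantId", "KeyType": "HASH"},
                {"AttributeName": "Fault", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

Because the fault attribute exists only on malfunctioning readings, the GSI is sparse: only faulty items appear in it, so a Query on `FaultyByPlant` by plant identifier returns exactly the faulty sensors for that plant with minimal read cost.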
Question 7
A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table. Periodically, the other users' devices read the latest statuses of their teammates from the table using the BatchGetItem operation.
Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation.
Which recommendation would resolve this issue?
- A: Ensure the DynamoDB table is configured to be always consistent.
- B: Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.
- C: Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.
- D: Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.
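The fix in option D amounts to one extra parameter on the read call. A minimal sketch of the request, assuming a hypothetical TeamStatus table keyed by PlayerId (the real table and key names are not given in the question):

```python
# BatchGetItem request with strongly consistent reads.
# ConsistentRead is set per table inside RequestItems; it defaults to
# false (eventually consistent), which is what caused the stale statuses.
batch_get_params = {
    "RequestItems": {
        "TeamStatus": {
            "Keys": [
                {"PlayerId": {"S": "player-1"}},
                {"PlayerId": {"S": "player-2"}},
            ],
            "ConsistentRead": True,
        }
    }
}
# In practice: boto3.client("dynamodb").batch_get_item(**batch_get_params)
```

Note the trade-off: strongly consistent reads consume twice the read capacity of eventually consistent reads and are not served from global secondary indexes.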
Question 8
A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.
Which process should the database specialist recommend?
- A: Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.
- B: Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.
- C: Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.
- D: Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.
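Option C is a three-call flow, since RDS cannot encrypt an existing instance or snapshot in place. Sketched as ordered boto3 RDS API calls, with illustrative identifiers (database-1, the snapshot names, and the alias/aws/rds key are assumptions):

```python
# Ordered (api_method, parameters) pairs for encrypting an existing
# unencrypted RDS instance via a snapshot copy. Each dict would be
# passed to boto3.client("rds").<api_method>(**params).
encryption_steps = [
    # 1. Snapshot the running, unencrypted instance.
    ("create_db_snapshot", {
        "DBInstanceIdentifier": "database-1",
        "DBSnapshotIdentifier": "database-1-unencrypted-snap",
    }),
    # 2. Copy the snapshot with a KMS key; encryption is applied on the copy.
    ("copy_db_snapshot", {
        "SourceDBSnapshotIdentifier": "database-1-unencrypted-snap",
        "TargetDBSnapshotIdentifier": "database-1-encrypted-snap",
        "KmsKeyId": "alias/aws/rds",
    }),
    # 3. Restore a new, encrypted instance from the encrypted copy,
    #    then repoint the application and retire the old instance.
    ("restore_db_instance_from_db_snapshot", {
        "DBInstanceIdentifier": "database-1-encrypted",
        "DBSnapshotIdentifier": "database-1-encrypted-snap",
    }),
]
```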
Question 9
A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an
Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.
Which actions would improve the data migration speed? (Choose three.)
- A: Create multiple AWS DMS tasks to migrate the large table.
- B: Configure the AWS DMS replication instance with Multi-AZ.
- C: Increase the capacity of the AWS DMS replication server.
- D: Establish an AWS Direct Connect connection between the on-premises data center and AWS.
- E: Enable an Amazon RDS Multi-AZ configuration.
- F: Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.
Question 10
A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.
Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)
- A: Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.
- B: Use Oracle's Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.
- C: Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.
- D: Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
- E: Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
Question 11
A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company's development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.
How should this be accomplished?
- A: Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.
- B: Create a dump file from the DB cluster. Load the dump file into a new DB cluster.
- C: Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.
- D: Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.
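Aurora fast cloning (option C) is exposed through the point-in-time restore API with a copy-on-write restore type, which is why the 20 TB of data does not have to be physically copied up front. A sketch with hypothetical cluster identifiers:

```python
# Parameters for boto3.client("rds").restore_db_cluster_to_point_in_time().
# RestoreType="copy-on-write" creates a clone that shares storage pages
# with the source and diverges only as either cluster writes, so the
# clone is available in minutes regardless of cluster size.
clone_params = {
    "DBClusterIdentifier": "prod-cluster-dev-clone",   # hypothetical clone name
    "SourceDBClusterIdentifier": "prod-cluster",       # hypothetical source name
    "RestoreType": "copy-on-write",
    "UseLatestRestorableTime": True,                   # clone the cluster as of "now"
}
```

Scheduling this call at the end of the overnight batch job (for example, from the job's final step) gives the development team a current copy in the shortest possible time.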
Question 12
A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with
Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.
Which action will allow AWS DMS to perform the replication?
- A: Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.
- B: Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.
- C: Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.
- D: Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.
Question 13
The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality.
This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect
WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?
- A: Quickly rewind the DB cluster to a point in time before the release using Backtrack.
- B: Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
- C: Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
- D: Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
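Backtrack (option A) rewinds the cluster in place, in minutes, without provisioning a new cluster, which is what makes it the fastest path here. A sketch of the API call, assuming a hypothetical cluster name and that the release start time is known:

```python
from datetime import datetime, timedelta, timezone

# The bad release ran roughly 4 hours ago, well within the 8-hour
# backtrack window configured on the cluster.
release_started_at = datetime.now(timezone.utc) - timedelta(hours=4)

# Parameters for boto3.client("rds").backtrack_db_cluster():
# rewind to a few minutes before the release began.
backtrack_params = {
    "DBClusterIdentifier": "aurora-mysql-cluster",  # hypothetical cluster name
    "BacktrackTo": release_started_at - timedelta(minutes=5),
}
```

Note that Backtrack rewinds the entire cluster, so any legitimate writes made after the chosen point are also undone; here that is acceptable because restoring correctness quickly is the priority.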
Question 14
A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora
MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.
Which approach meets these requirements with no negative performance impact?
- A: Enable synchronous replication.
- B: Enable asynchronous binlog replication.
- C: Create an Aurora Global Database.
- D: Copy Aurora incremental snapshots to the us-east-1 Region.
Question 15
A gaming company is developing a new mobile game and decides to store the data for each user in Amazon DynamoDB. To make the registration process as easy as possible, users can log in with their existing Facebook or Amazon accounts. The company expects more than 10,000 users.
How should a database specialist implement access control with the LEAST operational effort?
- A: Use web identity federation on the mobile app and AWS STS with an attached IAM role to get temporary credentials to access DynamoDB.
- B: Use web identity federation on the mobile app and create individual IAM users with credentials to access DynamoDB.
- C: Use a self-developed user management system on the mobile app that lets users access the data from DynamoDB through an API.
- D: Use a single IAM user on the mobile app to access DynamoDB.
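Option A's flow: the app exchanges the Facebook or Login with Amazon token for temporary AWS credentials via STS, so no per-user IAM users are ever created. A sketch of the STS request, with a hypothetical role ARN and a placeholder token:

```python
# Parameters for boto3.client("sts").assume_role_with_web_identity().
# No AWS credentials are needed to make this call; the web identity
# token from the external provider is the proof of identity.
assume_params = {
    "RoleArn": "arn:aws:iam::123456789012:role/MobileGameDynamoDBRole",  # hypothetical role
    "RoleSessionName": "game-user-session",
    "WebIdentityToken": "<token returned by Facebook or Login with Amazon>",  # placeholder
    "DurationSeconds": 3600,
}
```

The role's IAM policy can additionally use the `dynamodb:LeadingKeys` condition key to restrict each authenticated user to items whose partition key matches their provider user ID, keeping players out of each other's data.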
Question 16
A large retail company recently migrated its three-tier ecommerce applications to AWS. The company's backend database is hosted on Amazon Aurora
PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL attached to the wait events are all single INSERT statements.
How should this issue be resolved?
- A: Modify the application to commit transactions in batches
- B: Add a new Aurora Replica to the Aurora DB cluster.
- C: Add an Amazon ElastiCache for Redis cluster and change the application to write through.
- D: Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).
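IO:XactSync means sessions are waiting for Aurora to acknowledge each commit's durable log write; thousands of single-row commits mean thousands of synchronous flushes. Option A amortizes that cost. The effect can be sketched with stdlib sqlite3 as a stand-in for PostgreSQL (the table and driver are illustrative; the batching principle is the same):

```python
import sqlite3  # stand-in for the PostgreSQL driver in this sketch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(i, i * 1.5) for i in range(1000)]

# Anti-pattern: one commit per INSERT, so one durable log sync per row --
# the pattern behind the IO:XactSync wait events in this question.
# for row in rows:
#     conn.execute("INSERT INTO orders VALUES (?, ?)", row)
#     conn.commit()

# Batched: many INSERTs inside one transaction, a single commit,
# and therefore a single synchronous flush for the whole batch.
with conn:
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

The trade-off is that a failure mid-batch rolls back the whole batch, so batch sizes should be chosen with the application's retry logic in mind.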
Question 17
A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.
The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.
What should a database specialist do to meet these requirements?
- A: Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.
- B: Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.
- C: Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.
- D: Use on-demand capacity.
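Option C pairs provisioned capacity with Application Auto Scaling target tracking. A sketch of the two API requests involved, with an assumed table name (Orders) and illustrative capacity bounds:

```python
# 1. Register the table's write capacity as a scalable target. Passed to
#    boto3.client("application-autoscaling").register_scalable_target().
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",                       # hypothetical table
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "MinCapacity": 5,      # covers quiet overnight traffic
    "MaxCapacity": 4000,   # headroom for the daytime peak
}

# 2. Attach a target-tracking policy so provisioned capacity follows
#    consumption, aiming for ~70% utilization. Passed to put_scaling_policy().
scaling_policy = {
    "PolicyName": "orders-write-target-tracking",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
}
```

The gradual, predictable daily ramp is what makes auto scaling the cost-effective choice here: on-demand capacity (option D) would also eliminate throttling, but at a higher per-request price for a workload this steady.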
Question 18
A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.
Which action will meet these requirements?
- A: Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.
- B: Modify the DB instance and enable encryption.
- C: Restore a DB instance from the most recent automated snapshot and enable encryption.
- D: Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.
Question 19
A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance.
The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.
What will happen when the modification is submitted?
- A: The request will fail because this storage capacity is too large.
- B: The request will succeed only if the primary instance is in active status.
- C: The request will succeed only if CPU utilization is less than 10%.
- D: The request will fail as the most recent modification was too soon.
Question 20
A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.
Which combination of actions should a database specialist take to meet these requirements? (Choose two.)
- A: Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.
- B: Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.
- C: Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.
- D: Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.
- E: Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.
Question 21
A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.
Which approach should the database specialist take to resolve this issue without changing the application?
- A: Implement sharding to distribute the load to multiple RDS for MySQL databases.
- B: Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.
- C: Add an RDS for MySQL read replica.
- D: Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).
Question 22
A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.
Which of the following are possible reasons why the snapshot was not created? (Choose two.)
- A: A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.
- B: A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.
- C: The RDS maintenance window is not configured.
- D: The RDS DB instance is in the STORAGE_FULL state.
- E: RDS event notifications have not been enabled.
Question 23
An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company's Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.
What should the database specialist do to achieve this? (Choose two.)
- A: Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.
- B: Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.
- C: Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.
- D: Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.
- E: Enable email notifications for AWS Trusted Advisor.
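Option B's event subscription can be sketched as a single API request (the subscription name, SNS topic ARN, and source ID are assumptions):

```python
# Parameters for boto3.client("rds").create_event_subscription():
# push failure, availability, and configuration-change events for the
# DB instance to an SNS topic that notifies the on-call team.
event_subscription = {
    "SubscriptionName": "prod-db-events",
    "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:db-alerts",  # hypothetical topic
    "SourceType": "db-instance",
    "SourceIds": ["shopping-db-prod"],                              # hypothetical instance
    "EventCategories": ["availability", "failure", "configuration change"],
    "Enabled": True,
}
```

Option D complements this: RDS events cover state and configuration changes, while CloudWatch alarms on metrics such as FreeStorageSpace catch conditions that degrade gradually before causing downtime.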
Question 24
A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)
- A: Review the stack drift before modifying the template
- B: Create and review a change set before applying it
- C: Export the database resources as stack outputs
- D: Define the database resources in a nested stack
- E: Set a stack policy for the database resources
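A stack policy (option E) is a JSON document attached to the stack that blocks update actions on named logical resources, no matter what the Application team's template changes contain. A sketch, assuming the RDS resource's logical ID is ProductionDatabase:

```python
import json

# Allow all updates by default, but deny any update action that touches
# the RDS database resource (the logical ID is an assumption).
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "Update:*", "Resource": "*"},
        {"Effect": "Deny", "Principal": "*", "Action": "Update:*",
         "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}

# Serialized form, e.g. the StackPolicyBody argument to
# boto3.client("cloudformation").set_stack_policy().
stack_policy_body = json.dumps(stack_policy)
```

Option B (a change set) then lets the Database Specialist confirm, before execution, that no Modify or Replace action targets the database resource.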
Question 25
A large company has a variety of Amazon RDS DB clusters. Each of these clusters has a configuration that adheres to different requirements. Depending on the team and use case, these configurations can be organized into broader categories.
A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.
Which AWS service or feature will help automate and achieve this objective?
- A: AWS Systems Manager Parameter Store
- B: DB parameter group
- C: AWS Config
- D: AWS Secrets Manager