A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to use the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a "could not connect to server: Connection timed out" error message to Amazon CloudWatch Logs.
What is the cause of this error?
A. The user name and password the application is using are incorrect.
B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
D. The user name and password are correct, but the user is not authorized to use the DB instance.
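For reference, an inbound rule that lets an application tier reach a MySQL database on port 3306 is typically added to the DB instance's security group and references the application servers' security group. A minimal sketch of the request parameters follows; the group IDs are placeholders:

```python
# Hedged sketch: build parameters for EC2 authorize_security_group_ingress
# that permit MySQL (TCP 3306) from the app tier's security group.
# Group IDs are placeholders, not values from the question.

def mysql_ingress_rule(db_sg_id: str, app_sg_id: str) -> dict:
    """Parameters allowing inbound 3306 on the DB security group."""
    return {
        "GroupId": db_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Referencing the app tier's security group (rather than an IP
            # range) keeps the rule valid as instances are replaced.
            "UserIdGroupPairs": [{"GroupId": app_sg_id}],
        }],
    }

# With boto3 (not executed here):
# boto3.client("ec2").authorize_security_group_ingress(
#     **mysql_ingress_rule("sg-db", "sg-app"))
```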
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle
DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?
A. In the same Region and VPC as the source DB instance
B. In the same Region and VPC as the target DB instance
C. In the same VPC and Availability Zone as the target DB instance
D. In the same VPC and Availability Zone as the source DB instance
A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. On the Amazon RDS Performance Insights dashboard, the load in the average active sessions chart is often above the line that denotes maximum CPU usage, and the wait state shows that most wait events are IO:XactSync.
What should the company do to resolve these performance issues?
A. Add an Aurora Replica to scale the read traffic.
B. Scale up the DB instance class.
C. Modify applications to commit transactions in batches.
D. Modify applications to avoid conflicts by taking locks.
A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2 that belongs to a different development team in the same department. The networking team confirmed that the routing between VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.
What is likely causing the timeouts?
A. The database is deployed in a VPC that is in a different Region.
B. The database is deployed in a VPC that is in a different Availability Zone.
C. The database is deployed with misconfigured security groups.
D. The database is deployed with the wrong client connect timeout configuration.
A company has a production environment running on Amazon RDS for SQL Server with an in-house web application as the front end. During the last application maintenance window, new functionality was added to the web application to enhance the reporting capabilities for management. Since the update, the application is slow to respond to some reporting queries.
How should the company identify the source of the problem?
A. Install and configure Amazon CloudWatch Application Insights for Microsoft .NET and Microsoft SQL Server. Use a CloudWatch dashboard to identify the root cause.
B. Enable RDS Performance Insights and determine which query is creating the problem. Request changes to the query to address the problem.
C. Use AWS X-Ray deployed with Amazon RDS to track query system traces.
D. Create a support request and work with AWS Support to identify the source of the issue.
An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants, and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes the time with millisecond precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.
Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?
A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.
C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.
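To make the GSI-based designs above concrete: a query against a GSI keyed on the plant identifier (partition key) and the fault attribute (sort key) reads only the faulty items for one plant. The sketch below builds the Query parameters; the table name, index name, and attribute names are assumptions, not part of the question:

```python
# Hedged sketch: DynamoDB Query parameters against a GSI whose partition
# key is the plant identifier and whose sort key is the fault attribute.
# "SensorData", "PlantFaultIndex", "PlantId", and "Fault" are assumed names.

def faulty_sensor_query(plant_id: str) -> dict:
    """Build Query parameters that return only faulty sensors for a plant."""
    return {
        "TableName": "SensorData",
        "IndexName": "PlantFaultIndex",
        "KeyConditionExpression": "PlantId = :p AND Fault = :f",
        "ExpressionAttributeValues": {
            ":p": {"S": plant_id},
            ":f": {"S": "FAULT"},
        },
    }

# With boto3 (not executed here):
# boto3.client("dynamodb").query(**faulty_sensor_query("plant-42"))
```

Because the fault attribute is only present on malfunctioning sensors, the GSI is sparse: healthy readings never enter the index, which keeps the query fast.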
A company is releasing a new mobile game featuring a team play mode. As a group of mobile device users play together, an item containing their statuses is updated in an Amazon DynamoDB table. Periodically, the other users' devices read the latest statuses of their teammates from the table using the BatchGetItem operation.
Prior to launch, some testers submitted bug reports claiming that the status data they were seeing in the game was not up-to-date. The developers are unable to replicate this issue and have asked a database specialist for a recommendation.
Which recommendation would resolve this issue?
A. Ensure the DynamoDB table is configured to be always consistent.
B. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to false.
C. Enable a stream on the DynamoDB table and subscribe each device to the stream to ensure all devices receive up-to-date status information.
D. Ensure the BatchGetItem operation is called with the ConsistentRead parameter set to true.
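For context on the ConsistentRead options: in BatchGetItem the flag is set per table inside RequestItems, and setting it to true makes each read reflect all writes acknowledged before the request. A minimal sketch, assuming a "TeamStatus" table keyed on team and user IDs (names are assumptions):

```python
# Hedged sketch: BatchGetItem parameters with strongly consistent reads.
# Table name "TeamStatus" and key attributes "TeamId"/"UserId" are assumed.

def team_status_request(team_id: str, user_ids: list) -> dict:
    """Build BatchGetItem parameters; ConsistentRead is a per-table flag."""
    return {
        "RequestItems": {
            "TeamStatus": {
                "Keys": [
                    {"TeamId": {"S": team_id}, "UserId": {"S": uid}}
                    for uid in user_ids
                ],
                "ConsistentRead": True,  # read-after-write consistency
            }
        }
    }

# With boto3 (not executed here):
# boto3.client("dynamodb").batch_get_item(**team_status_request("t1", ["u1", "u2"]))
```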
A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.
Which process should the database specialist recommend?
A. Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.
B. Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.
C. Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.
D. Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.
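For reference, the snapshot-copy path works because RDS encrypts a snapshot copy when a KMS key is supplied; the encrypted copy can then be restored to a new DB instance. A hedged sketch of the copy parameters (snapshot identifiers and the KMS alias are placeholders):

```python
# Hedged sketch: parameters for RDS copy_db_snapshot. Supplying KmsKeyId
# makes the copy encrypted even when the source snapshot is not.
# The snapshot identifier and KMS key alias are placeholders.

def encrypted_copy_params(source_snapshot: str, kms_key: str) -> dict:
    """Build copy_db_snapshot parameters that produce an encrypted copy."""
    return {
        "SourceDBSnapshotIdentifier": source_snapshot,
        "TargetDBSnapshotIdentifier": source_snapshot + "-encrypted",
        "KmsKeyId": kms_key,  # presence of a key encrypts the copy
    }

# With boto3 (not executed here):
# boto3.client("rds").copy_db_snapshot(
#     **encrypted_copy_params("mydb-snap", "alias/aws/rds"))
# ...then restore_db_instance_from_db_snapshot on the encrypted copy.
```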
A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an
Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.
Which actions would improve the data migration speed? (Choose three.)
A. Create multiple AWS DMS tasks to migrate the large table.
B. Configure the AWS DMS replication instance with Multi-AZ.
C. Increase the capacity of the AWS DMS replication server.
D. Establish an AWS Direct Connect connection between the on-premises data center and AWS.
E. Enable an Amazon RDS Multi-AZ configuration.
F. Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.
A company is migrating a mission-critical 2-TB Oracle database from on premises to Amazon Aurora. The cost for the database migration must be kept to a minimum, and both the on-premises Oracle database and the Aurora DB cluster must remain open for write traffic until the company is ready to completely cut over to Aurora.
Which combination of actions should a database specialist take to accomplish this migration as quickly as possible? (Choose two.)
A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source database schema. Then restore the converted schema to the target Aurora DB cluster.
B. Use Oracle's Data Pump tool to export a copy of the source database schema and manually edit the schema in a text editor to make it compatible with Aurora.
C. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Select the migration type to replicate ongoing changes to keep the source and target databases in sync until the company is ready to move all user traffic to the Aurora DB cluster.
D. Create an AWS DMS task to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an Amazon Kinesis Data Firehose stream to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
E. Create an AWS Glue job and related resources to migrate data from the Oracle database to the Aurora DB cluster. Once the initial load is complete, create an AWS DMS task to perform change data capture (CDC) until the company is ready to move all user traffic to the Aurora DB cluster.
A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company's development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.
How should this be accomplished?
A. Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.
B. Create a dump file from the DB cluster. Load the dump file into a new DB cluster.
C. Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.
D. Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.
A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with
Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.
Which action will allow AWS DMS to perform the replication?
A. Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.
B. Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.
C. Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.
D. Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.
The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality.
This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect
WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?
A. Quickly rewind the DB cluster to a point in time before the release using Backtrack.
B. Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
C. Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
D. Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora
MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.
Which approach meets these requirements with no negative performance impact?
A. Enable synchronous replication.
B. Enable asynchronous binlog replication.
C. Create an Aurora Global Database.
D. Copy Aurora incremental snapshots to the us-east-1 Region.
A gaming company is developing a new mobile game and decides to store the data for each user in Amazon DynamoDB. To make the registration process as easy as possible, users can log in with their existing Facebook or Amazon accounts. The company expects more than 10,000 users.
How should a database specialist implement access control with the LEAST operational effort?
A. Use web identity federation on the mobile app and AWS STS with an attached IAM role to get temporary credentials to access DynamoDB.
B. Use web identity federation on the mobile app and create individual IAM users with credentials to access DynamoDB.
C. Use a self-developed user management system on the mobile app that lets users access the data from DynamoDB through an API.
D. Use a single IAM user on the mobile app to access DynamoDB.
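As background on the federation options: with web identity federation the app exchanges the token issued by Facebook or Amazon for temporary credentials via the STS AssumeRoleWithWebIdentity API, so no per-user IAM users are needed. A hedged sketch of the request parameters (the role ARN and session name are placeholders):

```python
# Hedged sketch: parameters for STS assume_role_with_web_identity, which
# trades an identity provider's token for temporary AWS credentials.
# The role ARN and session name are placeholders.

def federated_credentials_request(role_arn: str, id_token: str) -> dict:
    """Build AssumeRoleWithWebIdentity parameters for a mobile client."""
    return {
        "RoleArn": role_arn,                 # IAM role granting DynamoDB access
        "RoleSessionName": "mobile-game-user",
        "WebIdentityToken": id_token,        # token from Facebook/Amazon login
        "DurationSeconds": 3600,             # short-lived credentials
    }

# With boto3 (not executed here):
# boto3.client("sts").assume_role_with_web_identity(
#     **federated_credentials_request("arn:aws:iam::111122223333:role/GameRole", token))
```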
A large retail company recently migrated its three-tier ecommerce applications to AWS. The company's backend database is hosted on Amazon Aurora
PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL attached to the wait events are all single INSERT statements.
How should this issue be resolved?
A. Modify the application to commit transactions in batches.
B. Add a new Aurora Replica to the Aurora DB cluster.
C. Add an Amazon ElastiCache for Redis cluster and change the application to write through.
D. Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).
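For background on the batching option: IO:XactSync waits measure time spent synchronizing each commit to Aurora storage, so committing many INSERTs per transaction amortizes that cost. The pattern is sketched below with sqlite3 purely for portability; the same structure applies with a PostgreSQL driver such as psycopg2 (table and column names are illustrative):

```python
# Hedged sketch: commit once per batch of INSERTs instead of once per row.
# sqlite3 is used here only so the example is self-contained; the batching
# pattern is identical for an Aurora PostgreSQL connection.
import sqlite3

def insert_batched(conn, rows, batch_size=500):
    """Insert rows, issuing one commit per batch rather than per row."""
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        cur.executemany("INSERT INTO events (payload) VALUES (?)",
                        rows[i:i + batch_size])
        conn.commit()  # one durable-write sync per batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (payload TEXT)")
insert_batched(conn, [("event-%d" % i,) for i in range(1200)])
```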
A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.
The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.
What should a database specialist do to meet these requirements?
A. Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.
B. Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.
C. Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.
D. Use on-demand capacity.
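To illustrate the auto scaling option: Application Auto Scaling attaches a scalable target and a target-tracking policy to a table's capacity dimension. A hedged sketch of the two parameter sets (table name, min/max limits, and the 70% target are assumptions):

```python
# Hedged sketch: Application Auto Scaling parameters for a DynamoDB
# table's write capacity. Table name and capacity bounds are placeholders.

def scalable_target_params(table: str) -> dict:
    """Parameters for application-autoscaling register_scalable_target."""
    return {
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
        "MinCapacity": 50,     # covers quiet overnight traffic
        "MaxCapacity": 5000,   # headroom for the daytime peak
    }

def target_tracking_policy(table: str) -> dict:
    """Target-tracking policy keeping consumption near 70% of provisioned."""
    return {
        "PolicyName": f"{table}-write-scaling",
        "ServiceNamespace": "dynamodb",
        "ResourceId": f"table/{table}",
        "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    }

# With boto3 (not executed here):
# aas = boto3.client("application-autoscaling")
# aas.register_scalable_target(**scalable_target_params("orders"))
# aas.put_scaling_policy(**target_tracking_policy("orders"))
```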
A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.
Which action will meet these requirements?
A. Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.
B. Modify the DB instance and enable encryption.
C. Restore a DB instance from the most recent automated snapshot and enable encryption.
D. Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.
A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance.
The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.
What will happen when the modification is submitted?
A. The request will fail because this storage capacity is too large.
B. The request will succeed only if the primary instance is in active status.
C. The request will succeed only if CPU utilization is less than 10%.
D. The request will fail as the most recent modification was too soon.
A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.
Which combination of actions should a database specialist take to meet these requirements? (Choose two.)
A. Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.
B. Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.
C. Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.
D. Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.
E. Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.
A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.
Which approach should the database specialist take to resolve this issue without changing the application?
A. Implement sharding to distribute the load to multiple RDS for MySQL databases.
B. Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.
C. Add an RDS for MySQL read replica.
D. Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).
A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.
Which of the following are possible reasons why the snapshot was not created? (Choose two.)
A. A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.
B. A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.
C. The RDS maintenance window is not configured.
D. The RDS DB instance is in the STORAGE_FULL state.
E. RDS event notifications have not been enabled.
An online shopping company has a large inflow of shopping requests daily. As a result, there is a consistent load on the company's Amazon RDS database. A database specialist needs to ensure the database is up and running at all times. The database specialist wants an automatic notification system for issues that may cause database downtime or for configuration changes made to the database.
What should the database specialist do to achieve this? (Choose two.)
A. Create an Amazon CloudWatch Events event to send a notification using Amazon SNS on every API call logged in AWS CloudTrail.
B. Subscribe to an RDS event subscription and configure it to use an Amazon SNS topic to send notifications.
C. Use Amazon SES to send notifications based on configured Amazon CloudWatch Events events.
D. Configure Amazon CloudWatch alarms on various metrics, such as FreeStorageSpace for the RDS instance.
E. Enable email notifications for AWS Trusted Advisor.
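For background on the event subscription option: RDS event subscriptions publish categorized events (availability, failure, configuration change, and others) to an SNS topic. A hedged sketch of the parameters, with a placeholder subscription name and topic ARN:

```python
# Hedged sketch: parameters for RDS create_event_subscription that route
# downtime- and configuration-related events to an SNS topic.
# The subscription name and topic ARN are placeholders.

def event_subscription_params(sns_topic_arn: str) -> dict:
    """Build create_event_subscription parameters for DB instance events."""
    return {
        "SubscriptionName": "prod-db-alerts",
        "SnsTopicArn": sns_topic_arn,
        "SourceType": "db-instance",
        "EventCategories": ["availability", "failure", "configuration change"],
        "Enabled": True,
    }

# With boto3 (not executed here):
# boto3.client("rds").create_event_subscription(
#     **event_subscription_params("arn:aws:sns:us-east-1:111122223333:db-alerts"))
```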
A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)
A. Review the stack drift before modifying the template.
B. Create and review a change set before applying it.
C. Export the database resources as stack outputs.
D. Define the database resources in a nested stack.
E. Set a stack policy for the database resources.
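To illustrate the stack policy option: a CloudFormation stack policy is a JSON document that can deny update actions on specific logical resources while allowing all others. A minimal sketch follows; "ProductionDatabase" is a placeholder logical ID, not a name from the question:

```python
import json

# Hedged sketch: a CloudFormation stack policy that protects one logical
# resource from updates. "ProductionDatabase" is a placeholder logical ID.

STACK_POLICY = {
    "Statement": [
        # Allow updates to everything by default...
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        # ...but deny any update action on the database resource.
        {"Effect": "Deny", "Action": "Update:*",
         "Principal": "*",
         "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}

# With boto3 (not executed here):
# boto3.client("cloudformation").set_stack_policy(
#     StackName="prod-stack", StackPolicyBody=json.dumps(STACK_POLICY))
```

Because deny statements override allows, later template changes applied by the application team cannot modify or replace the protected database resource.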
A large company has a variety of Amazon DB clusters. Each of these clusters has various configurations that adhere to various requirements. Depending on the team and use case, these configurations can be organized into broader categories.
A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.
Which AWS service or feature will help automate and achieve this objective?