A data engineer is configuring an AWS Glue job to read data from an Amazon S3 bucket. The data engineer has set up the necessary AWS Glue connection details and an associated IAM role. However, when the data engineer attempts to run the AWS Glue job, the data engineer receives an error message that indicates that there are problems with the Amazon S3 VPC gateway endpoint.
The data engineer must resolve the error and connect the AWS Glue job to the S3 bucket.
Which solution will meet this requirement?
A. Update the AWS Glue security group to allow inbound traffic from the Amazon S3 VPC gateway endpoint.
B. Configure an S3 bucket policy to explicitly grant the AWS Glue job permissions to access the S3 bucket.
C. Review the AWS Glue job code to ensure that the AWS Glue connection details include a fully qualified domain name.
D. Verify that the VPC's route table includes inbound and outbound routes for the Amazon S3 VPC gateway endpoint.
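For context, a gateway endpoint for Amazon S3 is attached to the VPC's route tables rather than to security groups; associating the endpoint with a route table adds the S3 prefix-list route automatically. A minimal boto3 sketch, with hypothetical VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an S3 gateway endpoint and associate it with the VPC's route table.
# The association adds a route for the S3 prefix list to that route table.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # hypothetical route table ID
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```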
A data engineer needs to create an AWS Lambda function that converts the format of data from .csv to Apache Parquet. The Lambda function must run only if a user uploads a .csv file to an Amazon S3 bucket.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
B. Create an S3 event notification that has an event type of s3:ObjectTagging:* for objects that have a tag set to .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
C. Create an S3 event notification that has an event type of s3:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.
D. Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination for the event notification. Subscribe the Lambda function to the SNS topic.
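A minimal sketch of the notification described in option A, using boto3 (bucket name and function ARN are hypothetical placeholders; the Lambda function must also allow s3.amazonaws.com to invoke it):

```python
import boto3

s3 = boto3.client("s3")

# Notify the Lambda function only when an object whose key ends in .csv is created.
s3.put_bucket_notification_configuration(
    Bucket="example-upload-bucket",  # hypothetical bucket name
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:csv-to-parquet",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": ".csv"}]}
                },
            }
        ]
    },
)
```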
An insurance company stores transaction data that the company compressed with gzip.
The company needs to query the transaction data for occasional audits.
Which solution will meet this requirement in the MOST cost-effective way?
A. Store the data in Amazon S3 Glacier Flexible Retrieval. Use Amazon S3 Glacier Select to query the data.
B. Store the data in Amazon S3. Use Amazon S3 Select to query the data.
C. Store the data in Amazon S3. Use Amazon Athena to query the data.
D. Store the data in Amazon S3 Glacier Instant Retrieval. Use Amazon Athena to query the data.
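Option B can be illustrated with the S3 Select API, which scans a gzip-compressed CSV object in place without restoring or loading it anywhere. A sketch with hypothetical bucket, key, and column names:

```python
import boto3

s3 = boto3.client("s3")

# Query a gzip-compressed CSV object in place with S3 Select.
resp = s3.select_object_content(
    Bucket="example-audit-bucket",            # hypothetical
    Key="transactions/2024/01/txns.csv.gz",   # hypothetical
    ExpressionType="SQL",
    Expression="SELECT s.txn_id, s.amount FROM s3object s WHERE CAST(s.amount AS FLOAT) > 1000",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

# Stream the matching rows back from the event stream.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```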
A data engineer finished testing an Amazon Redshift stored procedure that processes and inserts data into a table that is not mission critical. The engineer wants to automatically run the stored procedure on a daily basis.
Which solution will meet this requirement in the MOST cost-effective way?
A. Create an AWS Lambda function to schedule a cron job to run the stored procedure.
B. Schedule and run the stored procedure by using the Amazon Redshift Data API in an Amazon EC2 Spot Instance.
C. Use query editor v2 to run the stored procedure on a schedule.
D. Schedule an AWS Glue Python shell job to run the stored procedure.
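Whatever scheduler is chosen, the daily work amounts to issuing a single CALL statement against the cluster. A boto3 Redshift Data API sketch, with hypothetical cluster, database, secret, and procedure names:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Run the stored procedure once; a scheduler (for example, query editor v2's
# built-in scheduling) would issue the same statement each day.
redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical
    Database="dev",                         # hypothetical
    SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-creds",  # hypothetical
    Sql="CALL staging.load_daily_transactions();",  # hypothetical procedure name
)
```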
A marketing company collects clickstream data. The company sends the clickstream data to Amazon Kinesis Data Firehose and stores the clickstream data in Amazon S3. The company wants to build a series of dashboards that hundreds of users from multiple departments will use.
The company will use Amazon QuickSight to develop the dashboards. The company wants a solution that can scale and provide daily updates about clickstream activity.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A. Use Amazon Redshift to store and query the clickstream data.
B. Use Amazon Athena to query the clickstream data.
C. Use Amazon S3 analytics to query the clickstream data.
D. Access the query data through a QuickSight direct SQL query.
E. Access the query data through QuickSight SPICE (Super-fast, Parallel, In-memory Calculation Engine). Configure a daily refresh for the dataset.
A data engineer is building a data orchestration workflow. The data engineer plans to use a hybrid model that includes some on-premises resources and some resources that are in the cloud. The data engineer wants to prioritize portability and open source resources.
Which service should the data engineer use in both the on-premises environment and the cloud-based environment?
A. AWS Data Exchange
B. Amazon Simple Workflow Service (Amazon SWF)
C. Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
D. AWS Glue
A gaming company uses a NoSQL database to store customer information. The company is planning to migrate to AWS.
The company needs a fully managed AWS solution that will handle a high online transaction processing (OLTP) workload, provide single-digit millisecond performance, and provide high availability around the world.
Which solution will meet these requirements with the LEAST operational overhead?
A. Amazon Keyspaces (for Apache Cassandra)
B. Amazon DocumentDB (with MongoDB compatibility)
C. Amazon DynamoDB
D. Amazon Timestream
A data engineer creates an AWS Lambda function that an Amazon EventBridge event will invoke. When the data engineer tries to invoke the Lambda function by using an EventBridge event, an AccessDeniedException message appears.
How should the data engineer resolve the exception?
A. Ensure that the trust policy of the Lambda function execution role allows EventBridge to assume the execution role.
B. Ensure that both the IAM role that EventBridge uses and the Lambda function's resource-based policy have the necessary permissions.
C. Ensure that the subnet where the Lambda function is deployed is configured to be a private subnet.
D. Ensure that EventBridge schemas are valid and that the event mapping configuration is correct.
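The resource-based policy mentioned in option B can be added with a single API call. A boto3 sketch with hypothetical function and rule names:

```python
import boto3

lambda_client = boto3.client("lambda")

# Allow the EventBridge rule to invoke the function (resource-based policy statement).
lambda_client.add_permission(
    FunctionName="process-events",      # hypothetical function name
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/daily-trigger",  # hypothetical rule ARN
)
```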
A company uses a data lake that is based on an Amazon S3 bucket. To comply with regulations, the company must apply two layers of server-side encryption to files that are uploaded to the S3 bucket. The company wants to use an AWS Lambda function to apply the necessary encryption.
Which solution will meet these requirements?
A. Use both server-side encryption with AWS KMS keys (SSE-KMS) and the Amazon S3 Encryption Client.
B. Use dual-layer server-side encryption with AWS KMS keys (DSSE-KMS).
C. Use server-side encryption with customer-provided keys (SSE-C) before files are uploaded.
D. Use server-side encryption with AWS KMS keys (SSE-KMS).
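Option B maps to the aws:kms:dsse server-side encryption setting, which the Lambda function could request per uploaded object. A boto3 sketch with hypothetical bucket, key, and KMS key values:

```python
import boto3

s3 = boto3.client("s3")

# Request dual-layer server-side encryption (DSSE-KMS) for the uploaded object.
s3.put_object(
    Bucket="example-data-lake",         # hypothetical
    Key="raw/claims/2024-01-01.json",   # hypothetical
    Body=b'{"claim_id": "123"}',
    ServerSideEncryption="aws:kms:dsse",
    SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",  # hypothetical
)
```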
A data engineer notices that Amazon Athena queries are held in a queue before the queries run.
How can the data engineer prevent the queries from queueing?
A. Increase the query result limit.
B. Configure provisioned capacity for an existing workgroup.
C. Use federated queries.
D. Add the users who run the Athena queries to an existing workgroup.
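Option B corresponds to Athena capacity reservations. A sketch of creating a reservation and assigning an existing workgroup to it; the names and DPU count are placeholders, and the exact parameters should be checked against the current boto3 documentation:

```python
import boto3

athena = boto3.client("athena")

# Create dedicated capacity so queries in the workgroup do not wait in the shared queue.
athena.create_capacity_reservation(
    Name="reporting-capacity",  # hypothetical
    TargetDpus=24,
)

# Route an existing workgroup's queries onto the reserved capacity.
athena.put_capacity_assignment_configuration(
    CapacityReservationName="reporting-capacity",
    CapacityAssignments=[{"WorkGroupNames": ["reporting-workgroup"]}],  # hypothetical
)
```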
A data engineer needs to debug an AWS Glue job that reads from Amazon S3 and writes to Amazon Redshift. The data engineer enabled the bookmark feature for the AWS Glue job.
The data engineer has set the maximum concurrency for the AWS Glue job to 1.
The AWS Glue job is successfully writing the output to Amazon Redshift. However, the Amazon S3 files that were loaded during previous runs of the AWS Glue job are being reprocessed by subsequent runs.
What is the likely reason the AWS Glue job is reprocessing the files?
A. The AWS Glue job does not have the s3:GetObjectAcl permission that is required for bookmarks to work correctly.
B. The maximum concurrency for the AWS Glue job is set to 1.
C. The data engineer incorrectly specified an older version of AWS Glue for the Glue job.
D. The AWS Glue job does not have a required commit statement.
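Option D refers to the job bookmark commit: bookmark state only advances when the script initializes a Job object and calls job.commit() at the end of the run. A minimal Glue ETL skeleton, with hypothetical database and table names:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read only the files that the bookmark has not seen yet.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",          # hypothetical
    table_name="raw_orders",      # hypothetical
    transformation_ctx="source",  # required so the bookmark can track this read
)

# ... transformations and the write to Amazon Redshift would go here ...

# Without this call the bookmark state is never saved, so every run reprocesses all files.
job.commit()
```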
An ecommerce company wants to use AWS to migrate data pipelines from an on-premises environment into the AWS Cloud. The company currently uses a third-party tool in the on-premises environment to orchestrate data ingestion processes.
The company wants a migration solution that does not require the company to manage servers. The solution must be able to orchestrate Python and Bash scripts. The solution must not require the company to refactor any code.
Which solution will meet these requirements with the LEAST operational overhead?
A. AWS Lambda
B. Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
C. AWS Step Functions
D. AWS Glue
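A sketch of why option B avoids refactoring: existing Python and Bash steps can be wrapped in standard Airflow operators, and the same DAG file runs on self-managed Airflow or on Amazon MWAA. The DAG ID, schedule, and script path below are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def ingest():
    # Existing Python ingestion logic would be called here unchanged.
    print("ingesting data")


with DAG(
    dag_id="nightly_ingestion",       # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_bash_step = BashOperator(
        task_id="run_bash_script",
        bash_command="bash /usr/local/scripts/extract.sh",  # hypothetical path
    )
    run_python_step = PythonOperator(task_id="run_python_step", python_callable=ingest)

    run_bash_step >> run_python_step
```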
A data engineer needs Amazon Athena queries to finish faster. The data engineer notices that all the files the Athena queries use are currently stored in uncompressed .csv format. The data engineer also notices that users perform most queries by selecting a specific column.
Which solution will MOST speed up the Athena query performance?
A. Change the data format from .csv to JSON format. Apply Snappy compression.
B. Compress the .csv files by using Snappy compression.
C. Change the data format from .csv to Apache Parquet. Apply Snappy compression.
D. Compress the .csv files by using gzip compression.
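One low-effort way to apply option C is an Athena CTAS statement that rewrites the CSV-backed table as Snappy-compressed Parquet, which lets column-selective queries read only the columns they need. A boto3 sketch with hypothetical database, table, and S3 locations:

```python
import boto3

athena = boto3.client("athena")

# Rewrite the CSV-backed table as Snappy-compressed Parquet with a CTAS query.
ctas = """
CREATE TABLE analytics.events_parquet
WITH (
    format = 'PARQUET',
    write_compression = 'SNAPPY',
    external_location = 's3://example-analytics-bucket/events_parquet/'
) AS
SELECT * FROM analytics.events_csv
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "analytics"},                          # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # hypothetical
)
```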
A retail company stores data from a product lifecycle management (PLM) application in an on-premises MySQL database. The PLM application frequently updates the database when transactions occur.
The company wants to gather insights from the PLM application in near real time. The company wants to integrate the insights with other business datasets and to analyze the combined dataset by using an Amazon Redshift data warehouse.
The company has already established an AWS Direct Connect connection between the on-premises infrastructure and AWS.
Which solution will meet these requirements with the LEAST development effort?
A. Run a scheduled AWS Glue extract, transform, and load (ETL) job to get the MySQL database updates by using a Java Database Connectivity (JDBC) connection. Set Amazon Redshift as the destination for the ETL job.
B. Run a full load plus CDC task in AWS Database Migration Service (AWS DMS) to continuously replicate the MySQL database changes. Set Amazon Redshift as the destination for the task.
C. Use the Amazon AppFlow SDK to build a custom connector for the MySQL database to continuously replicate the database changes. Set Amazon Redshift as the destination for the connector.
D. Run scheduled AWS DataSync tasks to synchronize data from the MySQL database. Set Amazon Redshift as the destination for the tasks.
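A sketch of the task definition behind option B, assuming the MySQL source endpoint, the Redshift target endpoint, and a replication instance already exist (all ARNs and the schema name are placeholders):

```python
import json

import boto3

dms = boto3.client("dms")

# Migrate the existing rows, then stream ongoing changes (CDC) to Amazon Redshift.
dms.create_replication_task(
    ReplicationTaskIdentifier="plm-mysql-to-redshift",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",    # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",    # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",   # hypothetical
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(
        {
            "rules": [
                {
                    "rule-type": "selection",
                    "rule-id": "1",
                    "rule-name": "include-plm-schema",
                    "object-locator": {"schema-name": "plm", "table-name": "%"},
                    "rule-action": "include",
                }
            ]
        }
    ),
)
```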
A marketing company uses Amazon S3 to store clickstream data. The company queries the data at the end of each day by using a SQL JOIN clause on S3 objects that are stored in separate buckets.
The company creates key performance indicators (KPIs) based on the objects. The company needs a serverless solution that will give users the ability to query data by partitioning the data. The solution must maintain the atomicity, consistency, isolation, and durability (ACID) properties of the data.
Which solution will meet these requirements MOST cost-effectively?
A. Amazon S3 Select
B. Amazon Redshift Spectrum
C. Amazon Athena
D. Amazon EMR
A company wants to migrate data from an Amazon RDS for PostgreSQL DB instance in the eu-east-1 Region of an AWS account named Account_A. The company will migrate the data to an Amazon Redshift cluster in the eu-west-1 Region of an AWS account named Account_B.
Which solution will give AWS Database Migration Service (AWS DMS) the ability to replicate data between two data stores?
A. Set up an AWS DMS replication instance in Account_B in eu-west-1.
B. Set up an AWS DMS replication instance in Account_B in eu-east-1.
C. Set up an AWS DMS replication instance in a new AWS account in eu-west-1.
D. Set up an AWS DMS replication instance in Account_A in eu-east-1.
A company uses Amazon S3 as a data lake. The company sets up a data warehouse by using a multi-node Amazon Redshift cluster. The company organizes the data files in the data lake based on the data source of each data file.
The company loads all the data files into one table in the Redshift cluster by using a separate COPY command for each data file location. This approach takes a long time to load all the data files into the table. The company must increase the speed of the data ingestion. The company does not want to increase the cost of the process.
Which solution will meet these requirements?
A. Use a provisioned Amazon EMR cluster to copy all the data files into one folder. Use a COPY command to load the data into Amazon Redshift.
B. Load all the data files in parallel into Amazon Aurora. Run an AWS Glue job to load the data into Amazon Redshift.
C. Use an AWS Glue job to copy all the data files into one folder. Use a COPY command to load the data into Amazon Redshift.
D. Create a manifest file that contains the data file locations. Use a COPY command to load the data into Amazon Redshift.
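A sketch of option D: a manifest that lists every data file location lets a single COPY command load all of the files in parallel across the cluster's slices. The bucket, table, secret, and IAM role names are hypothetical:

```python
import json

import boto3

# Manifest that enumerates data files from several source prefixes.
manifest = {
    "entries": [
        {"url": "s3://example-lake/source_a/part-0001.csv", "mandatory": True},  # hypothetical
        {"url": "s3://example-lake/source_b/part-0001.csv", "mandatory": True},  # hypothetical
    ]
}

boto3.client("s3").put_object(
    Bucket="example-lake",
    Key="manifests/load.manifest",
    Body=json.dumps(manifest).encode("utf-8"),
)

# One COPY command loads every file in the manifest in parallel.
boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical
    Database="dev",
    SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-creds",  # hypothetical
    Sql=(
        "COPY public.sales FROM 's3://example-lake/manifests/load.manifest' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "FORMAT AS CSV MANIFEST;"
    ),
)
```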
A company plans to use Amazon Kinesis Data Firehose to store data in Amazon S3. The source data consists of 2 MB .csv files. The company must convert the .csv files to JSON format. The company must store the files in Apache Parquet format.
Which solution will meet these requirements with the LEAST development effort?
A. Use Kinesis Data Firehose to convert the .csv files to JSON. Use an AWS Lambda function to store the files in Parquet format.
B. Use Kinesis Data Firehose to convert the .csv files to JSON and to store the files in Parquet format.
C. Use Kinesis Data Firehose to invoke an AWS Lambda function that transforms the .csv files to JSON and stores the files in Parquet format.
D. Use Kinesis Data Firehose to invoke an AWS Lambda function that transforms the .csv files to JSON. Use Kinesis Data Firehose to store the files in Parquet format.
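In option D the Lambda function only handles the .csv-to-JSON step, and Firehose's built-in record format conversion then writes Parquet to Amazon S3. A minimal transform handler that follows the Firehose data transformation contract (the column names are hypothetical):

```python
import base64
import json


def lambda_handler(event, context):
    """Convert each incoming CSV record to JSON for Kinesis Data Firehose."""
    output = []
    for record in event["records"]:
        csv_line = base64.b64decode(record["data"]).decode("utf-8").strip()
        order_id, amount, currency = csv_line.split(",")  # hypothetical columns
        payload = json.dumps(
            {"order_id": order_id, "amount": float(amount), "currency": currency}
        )
        output.append(
            {
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(payload.encode("utf-8")).decode("utf-8"),
            }
        )
    return {"records": output}
```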
A company is using an AWS Transfer Family server to migrate data from an on-premises environment to AWS. Company policy mandates the use of TLS 1.2 or above to encrypt the data in transit.
Which solution will meet these requirements?
A. Generate new SSH keys for the Transfer Family server. Make the old keys and the new keys available for use.
B. Update the security group rules for the on-premises network to allow only connections that use TLS 1.2 or above.
C. Update the security policy of the Transfer Family server to specify a minimum protocol version of TLS 1.2.
D. Install an SSL certificate on the Transfer Family server to encrypt data transfers by using TLS 1.2.
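Option C is a one-call change to the server's security policy. A boto3 sketch; the server ID is a placeholder, and the policy name shown is only an example that should be confirmed against the currently published Transfer Family security policies:

```python
import boto3

transfer = boto3.client("transfer")

# Attach a security policy that only negotiates TLS 1.2 or later.
transfer.update_server(
    ServerId="s-0123456789abcdef0",                        # hypothetical
    SecurityPolicyName="TransferSecurityPolicy-2020-06",   # example policy name; confirm current names
)
```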
A company wants to migrate an application and an on-premises Apache Kafka server to AWS. The application processes incremental updates that an on-premises Oracle database sends to the Kafka server. The company wants to use the replatform migration strategy instead of the refactor strategy.
Which solution will meet these requirements with the LEAST management overhead?
A. Amazon Kinesis Data Streams
B. Amazon Managed Streaming for Apache Kafka (Amazon MSK) provisioned cluster
C. Amazon Kinesis Data Firehose
D. Amazon Managed Streaming for Apache Kafka (Amazon MSK) Serverless
A data engineer is building an automated extract, transform, and load (ETL) ingestion pipeline by using AWS Glue. The pipeline ingests compressed files that are in an Amazon S3 bucket. The ingestion pipeline must support incremental data processing.
Which AWS Glue feature should the data engineer use to meet this requirement?
A. Workflows
B. Triggers
C. Job bookmarks
D. Classifiers
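Job bookmarks (option C) are enabled per job or per run through the --job-bookmark-option argument. A short boto3 sketch with a hypothetical job name:

```python
import boto3

glue = boto3.client("glue")

# Start a run with bookmarks enabled so files processed by earlier runs are skipped.
glue.start_job_run(
    JobName="ingest-compressed-files",  # hypothetical
    Arguments={"--job-bookmark-option": "job-bookmark-enable"},
)
```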
A banking company uses an application to collect large volumes of transactional data. The company uses Amazon Kinesis Data Streams for real-time analytics. The company’s application uses the PutRecord action to send data to Kinesis Data Streams.
A data engineer has observed network outages during certain times of day. The data engineer wants to configure exactly-once delivery for the entire processing pipeline.
Which solution will meet this requirement?
A. Design the application so it can remove duplicates during processing by embedding a unique ID in each record at the source.
B. Update the checkpoint configuration of the Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) data collection application to avoid duplicate processing of events.
C. Design the data source so events are not ingested into Kinesis Data Streams multiple times.
D. Stop using Kinesis Data Streams. Use Amazon EMR instead. Use Apache Flink and Apache Spark Streaming in Amazon EMR.
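Option A relies on the producer embedding a stable, unique ID so that downstream consumers can drop the duplicates that PutRecord retries can create during network outages. A producer-side sketch with hypothetical stream and field names:

```python
import json
import uuid

import boto3

kinesis = boto3.client("kinesis")

# Generate the ID once per logical record and reuse it on any retry;
# consumers deduplicate on this field during processing.
transaction = {
    "dedup_id": str(uuid.uuid4()),   # hypothetical field name
    "account_id": "1234",
    "amount": 250.00,
}

kinesis.put_record(
    StreamName="transactions-stream",  # hypothetical
    Data=json.dumps(transaction).encode("utf-8"),
    PartitionKey=transaction["account_id"],
)
```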
A company stores logs in an Amazon S3 bucket. When a data engineer attempts to access several log files, the data engineer discovers that some files have been unintentionally deleted.
The data engineer needs a solution that will prevent unintentional file deletion in the future.
Which solution will meet this requirement with the LEAST operational overhead?
A. Manually back up the S3 bucket on a regular basis.
B. Enable S3 Versioning for the S3 bucket.
C. Configure replication for the S3 bucket.
D. Use an Amazon S3 Glacier storage class to archive the data that is in the S3 bucket.
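Option B is a single configuration change. A boto3 sketch with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Keep prior versions of every object so an unintentional delete only adds a delete marker.
s3.put_bucket_versioning(
    Bucket="example-log-bucket",  # hypothetical
    VersioningConfiguration={"Status": "Enabled"},
)
```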
A manufacturing company collects sensor data from its factory floor to monitor and enhance operational efficiency. The company uses Amazon Kinesis Data Streams to publish the data that the sensors collect to a data stream. Then Amazon Kinesis Data Firehose writes the data to an Amazon S3 bucket.
The company needs to display a real-time view of operational efficiency on a large screen in the manufacturing facility.
Which solution will meet these requirements with the LOWEST latency?
A. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Use a connector for Apache Flink to write data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.
B. Configure the S3 bucket to send a notification to an AWS Lambda function when any new object is created. Use the Lambda function to publish the data to Amazon Aurora. Use Aurora as a source to create an Amazon QuickSight dashboard.
C. Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Create a new Data Firehose delivery stream to publish data directly to an Amazon Timestream database. Use the Timestream database as a source to create an Amazon QuickSight dashboard.
D. Use AWS Glue bookmarks to read sensor data from the S3 bucket in real time. Publish the data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.
A telecommunications company collects network usage data throughout each day at a rate of several thousand data points each second. The company runs an application to process the usage data in real time. The company aggregates and stores the data in an Amazon Aurora DB instance.
Sudden drops in network usage usually indicate a network outage. The company must be able to identify sudden drops in network usage so the company can take immediate remedial actions.
Which solution will meet this requirement with the LEAST latency?
A. Create an AWS Lambda function to query Aurora for drops in network usage. Use Amazon EventBridge to automatically invoke the Lambda function every minute.
B. Modify the processing application to publish the data to an Amazon Kinesis data stream. Create an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to detect drops in network usage.
C. Replace the Aurora database with an Amazon DynamoDB table. Create an AWS Lambda function to query the DynamoDB table for drops in network usage every minute. Use DynamoDB Accelerator (DAX) between the processing application and DynamoDB table.
D. Create an AWS Lambda function within the Database Activity Streams feature of Aurora to detect drops in network usage.