Professional Cloud Database Engineer
Question 1
You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database. What should you do?
- A: Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.
- B: Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.
- C: Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.
- D: Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.
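For reference, a PostgreSQL JDBC connection string of the kind these options describe can be sketched as follows. Only the IP address comes from the question; the database name, port, user, and password are hypothetical placeholders.

```shell
# Build a JDBC URL that points at the instance's IP from the question (192.168.3.48).
# Database name and credentials below are made up for illustration.
DB_PRIVATE_IP="192.168.3.48"
DB_NAME="appdb"
JDBC_URL="jdbc:postgresql://${DB_PRIVATE_IP}:5432/${DB_NAME}?user=appuser&password=secret"
echo "$JDBC_URL"
```

Because SSL is disabled on the instance, no SSL parameters appear in the URL; the Cloud SQL Auth proxy options would instead point the URL at a local proxy endpoint.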
Question 2
Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag. What should you do?
- A: Identify and optimize slow running queries, or set parallel replication flags.
- B: Stop all running queries, and re-create the replicas.
- C: Edit the primary instance to upgrade to a larger disk, and increase vCPU count.
- D: Edit the primary instance to add additional memory.
Question 3
During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?
- A: Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.
- B: Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.
- C: Use the gcloud sql instances patch command to update your existing Cloud SQL for MySQL instance.
- D: Shut down your existing Cloud SQL for MySQL instance, and enable HA.
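For context on the in-place option: HA on an existing Cloud SQL instance is enabled by patching its availability type, sketched below. The instance name is hypothetical, and the operation restarts the instance.

```shell
# Sketch: switch an existing instance from zonal to regional (HA) availability.
# "my-mysql-instance" is a hypothetical instance name.
gcloud sql instances patch my-mysql-instance \
    --availability-type=REGIONAL
```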
Question 4
You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?
- A: Configure the automated backups to use a regional Cloud Storage bucket as a custom location.
- B: Use the default configuration for the automated backups location.
- C: Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.
- D: Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
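As background for these options: the location of automated Cloud SQL backups is a region configured on the instance itself, not a Cloud Storage bucket. A sketch, with hypothetical instance name and region:

```shell
# Sketch: pin automated backups to a specific region.
gcloud sql instances patch my-instance \
    --backup-location=us-central1
```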
Question 5
Your ecommerce application connecting to your Cloud SQL for SQL Server is expected to have additional traffic due to the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so you can be notified by text message at the first sign of potential issues. What should you do?
- A: Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.
- B: Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.
- C: Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.
- D: Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.
Question 6
You finished migrating an on-premises MySQL database to Cloud SQL. You want to ensure that the daily export of a table, which was previously a cron job running on the database server, continues. You want the solution to minimize cost and operations overhead. What should you do?
- A: Use Cloud Scheduler and Cloud Functions to run the daily export.
- B: Create a streaming Dataflow job to export the table.
- C: Set up Cloud Composer, and create a task to export the table daily.
- D: Run the cron job on a Compute Engine instance to continue the export.
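As a sketch of the scheduler-plus-functions pattern: a Cloud Scheduler job publishes a daily message to Pub/Sub, and a Cloud Function subscribed to the topic calls the Cloud SQL export API. The job name, topic, schedule, and message body below are hypothetical.

```shell
# Sketch: fire a daily trigger at 02:00; a Cloud Function subscribed to the
# "cloudsql-export" topic would perform the actual table export.
gcloud scheduler jobs create pubsub daily-table-export \
    --schedule="0 2 * * *" \
    --topic=cloudsql-export \
    --message-body='{"database":"mydb","table":"orders"}'
```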
Question 7
Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?
- A: 1. Use Database Migration Service to connect to your on-premises database, and choose continuous replication. 2. After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.
- B: 1. Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL. 2. Schedule downtime to run each Cloud Data Fusion pipeline. 3. Verify that the migration was successful. 4. Re-point the applications to the Cloud SQL for MySQL instance.
- C: 1. Pause the on-premises applications. 2. Use the mysqldump utility to dump the database content in compressed format. 3. Run gsutil -m to move the dump file to Cloud Storage. 4. Use the Cloud SQL for MySQL import option. 5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
- D: 1. Pause the on-premises applications. 2. Use the mysqldump utility to dump the database content in CSV format. 3. Run gsutil -m to move the dump file to Cloud Storage. 4. Use the Cloud SQL for MySQL import option. 5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
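For reference, the dump-and-copy steps that the mysqldump options describe look roughly like this; database and bucket names are hypothetical. Note that the gsutil flag is a plain hyphen, `-m`, which enables parallel transfers.

```shell
# Sketch: consistent InnoDB dump, compressed, then parallel copy to Cloud Storage.
# --single-transaction snapshots InnoDB tables without locking them.
mysqldump --databases mydb --single-transaction \
    | gzip > mydb.sql.gz
gsutil -m cp mydb.sql.gz gs://my-migration-bucket/
```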
Question 8
Your company is developing a global ecommerce website on Google Cloud. Your development team is working on a shopping cart service that is durable and elastically scalable with live traffic. Business disruptions from unplanned downtime are expected to be less than 5 minutes per month. In addition, the application needs to have very low latency writes. You need a data storage solution that has high write throughput and provides 99.99% uptime. What should you do?
- A: Use Cloud SQL for data storage.
- B: Use Cloud Spanner for data storage.
- C: Use Memorystore for data storage.
- D: Use Bigtable for data storage.
Question 9
Your organization has hundreds of Cloud SQL for MySQL instances. You want to follow Google-recommended practices to optimize platform costs. What should you do?
- A: Use Query Insights to identify idle instances.
- B: Remove inactive user accounts.
- C: Run the Recommender API to identify overprovisioned instances.
- D: Build indexes on heavily accessed tables.
Question 10
Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?
- A: In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the disk.
- B: In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use.
- C: In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.
- D: In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.
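For background: persistent disks can be resized while attached, and ext4 supports online growth, so the resize2fs path requires no unmount or restart. A sketch with hypothetical disk name, zone, and device path:

```shell
# Sketch: grow the disk, then grow the filesystem while it stays mounted.
gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a
# On the VM (device path is hypothetical); ext4 resizes online:
sudo resize2fs /dev/sdb
```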
Question 11
You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?
- A: Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.
- B: Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.
- C: Use Database Migration Service to migrate your database.
- D: Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.
Question 12
You have an application that sends banking events to Bigtable cluster-a in us-east. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?
- A: Use the default app profile with single-cluster routing.
- B: Use the default app profile with multi-cluster routing.
- C: Create a custom app profile with multi-cluster routing.
- D: Create a custom app profile with single-cluster routing.
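For reference, a custom Bigtable app profile with multi-cluster routing can be sketched as follows; the profile and instance names are hypothetical. With `--route-any`, requests go to the nearest available cluster and fail over automatically if a cluster becomes unavailable.

```shell
# Sketch: custom app profile that routes to any available cluster.
gcloud bigtable app-profiles create banking-profile \
    --instance=events-instance \
    --route-any \
    --description="Multi-cluster routing for automatic failover"
```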
Question 13
Your organization operates in a highly regulated industry. Separation of concerns (SoC) and the security principle of least privilege (PoLP) are critical. The operations team consists of:
- Person A, a database administrator.
- Person B, an analyst who generates metric reports.
- Application C, which is responsible for automatic backups.
You need to assign roles to team members for Cloud Spanner. Which roles should you assign?
- A: roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupWriter for Application C
- B: roles/spanner.databaseAdmin for Person A; roles/spanner.databaseReader for Person B; roles/spanner.backupAdmin for Application C
- C: roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.databaseReader for Application C
- D: roles/spanner.databaseAdmin for Person A; roles/spanner.databaseUser for Person B; roles/spanner.backupWriter for Application C
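For reference, Spanner role bindings of the kind these options list are granted per principal at the instance level. A sketch of one such binding; the instance name, member, and chosen role are placeholders:

```shell
# Sketch: grant one Spanner role to one principal on one instance.
gcloud spanner instances add-iam-policy-binding prod-instance \
    --member="user:person-a@example.com" \
    --role="roles/spanner.databaseAdmin"
```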
Question 14
Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?
- A: Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.
- B: Use Cloud SQL Auth proxy.
- C: Connect to Cloud SQL using a connection that has SSL encryption.
- D: Use customer-managed encryption keys with Cloud SQL.
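For background: Cloud SQL supports customer-managed encryption keys (CMEK) by referencing a Cloud KMS key at instance creation. A sketch, with hypothetical project, key, and instance names:

```shell
# Sketch: create an instance whose storage is encrypted with a customer-managed key.
gcloud sql instances create secure-instance \
    --database-version=POSTGRES_15 \
    --region=us-central1 \
    --tier=db-custom-2-8192 \
    --disk-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```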
Question 15
Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?
- A: Use Firestore.
- B: Use Bigtable.
- C: Use BigQuery.
- D: Use Cloud Spanner.
Question 16
You are designing a new gaming application that uses a highly transactional relational database to store player authentication and inventory data in Google Cloud. You want to launch the game in multiple regions. What should you do?
- A: Use Cloud Spanner to deploy the database.
- B: Use Bigtable with clusters in multiple regions to deploy the database.
- C: Use BigQuery to deploy the database.
- D: Use Cloud SQL with a regional read replica to deploy the database.
Question 17
You are designing a database strategy for a new web application in one region. You need to minimize write latency. What should you do?
- A: Use Cloud SQL with cross-region replicas.
- B: Use high availability (HA) Cloud SQL with multiple zones.
- C: Use zonal Cloud SQL without high availability (HA).
- D: Use Cloud Spanner in a regional configuration.
Question 18
You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?
- A: Migrate to Compute Engine.
- B: Migrate to Bare Metal Solution for Oracle.
- C: Migrate to Google Kubernetes Engine (GKE).
- D: Migrate to Google Cloud VMware Engine.
Question 19
You are choosing a new database backend for an existing application. The current database is running PostgreSQL on an on-premises VM and is managed by a database administrator and operations team. The application data is relational and has light traffic. You want to minimize costs and the migration effort for this application. What should you do?
- A: Migrate the existing database to Firestore.
- B: Migrate the existing database to Cloud SQL for PostgreSQL.
- C: Migrate the existing database to Cloud Spanner.
- D: Migrate the existing database to PostgreSQL running on Compute Engine.
Question 20
Your organization is currently updating an existing corporate application that is running in another public cloud to access managed database services in Google Cloud. The application will remain in the other public cloud while the database is migrated to Google Cloud. You want to follow Google-recommended practices for authentication. You need to minimize user disruption during the migration. What should you do?
- A: Use workload identity federation to impersonate a service account.
- B: Ask existing users to set their Google password to match their corporate password.
- C: Migrate the application to Google Cloud, and use Identity and Access Management (IAM).
- D: Use Google Workspace Password Sync to replicate passwords into Google Cloud.
Question 21
You are configuring the networking of a Cloud SQL instance. The only application that connects to this database resides on a Compute Engine VM in the same project as the Cloud SQL instance. The VM and the Cloud SQL instance both use the same VPC network, and both have an external (public) IP address and an internal (private) IP address. You want to improve network security. What should you do?
- A: Disable and remove the internal IP address assignment.
- B: Disable both the external IP address and the internal IP address, and instead rely on Private Google Access.
- C: Specify an authorized network with the CIDR range of the VM.
- D: Disable and remove the external IP address assignment.
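For reference, removing a Cloud SQL instance's public IP assignment is a single patch operation; the instance name is hypothetical:

```shell
# Sketch: drop the external IP so the instance is reachable only over the VPC.
gcloud sql instances patch my-instance --no-assign-ip
```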
Question 22
You are managing two different applications: Order Management and Sales Reporting. Both applications interact with the same Cloud SQL for MySQL database. The Order Management application reads and writes to the database 24/7, but the Sales Reporting application is read-only. Both applications need the latest data. You need to ensure that the performance of the Order Management application is not affected by the Sales Reporting application. What should you do?
- A: Create a read replica for the Sales Reporting application.
- B: Create two separate databases in the instance, and perform dual writes from the Order Management application.
- C: Use a Cloud SQL federated query for the Sales Reporting application.
- D: Queue up all the requested reports in Pub/Sub, and execute the reports at night.
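For background, creating a Cloud SQL read replica that a reporting workload can target is sketched below; the instance names and region are hypothetical:

```shell
# Sketch: a read replica of the primary; reads against it do not load the
# primary beyond replication itself.
gcloud sql instances create orders-replica \
    --master-instance-name=orders-primary \
    --region=us-central1
```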
Question 23
You are the DBA of an online tutoring application that runs on a Cloud SQL for PostgreSQL database. You are testing the implementation of the cross-regional failover configuration. The database in region R1 fails over successfully to region R2, and the database becomes available for the application to process data. During testing, certain scenarios of the application work as expected in region R2, but a few scenarios fail with database errors. The application-related database queries, when executed in isolation from Cloud SQL for PostgreSQL in region R2, work as expected. The application performs completely as expected when the database fails back to region R1. You need to identify the cause of the database errors in region R2. What should you do?
- A: Determine whether the versions of Cloud SQL for PostgreSQL in regions R1 and R2 are different.
- B: Determine whether the database patches of Cloud SQL for PostgreSQL in regions R1 and R2 are different.
- C: Determine whether the failover of Cloud SQL for PostgreSQL from region R1 to region R2 is in progress or has completed successfully.
- D: Determine whether Cloud SQL for PostgreSQL in region R2 is a near-real-time copy of region R1 but not an exact copy.
Question 24
You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low. What should you do?
- A: Manually scale down the number of nodes after the peak period has passed.
- B: Use interleaving to co-locate parent and child rows.
- C: Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
- D: Use granular instance sizing in Cloud Spanner and Autoscaler.
Question 25
Your company wants to migrate its MySQL, PostgreSQL, and Microsoft SQL Server on-premises databases to Google Cloud. You need a solution that provides near-zero downtime, requires no application changes, and supports change data capture (CDC). What should you do?
- A: Use the native export and import functionality of the source database.
- B: Create a database on Google Cloud, and use database links to perform the migration.
- C: Create a database on Google Cloud, and use Dataflow for database migration.
- D: Use Database Migration Service.