You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database. What should you do?
A. Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.
B. Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.
C. Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.
D. Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.
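For reference, a private-IP connection string like the one in option B is just a standard PostgreSQL JDBC URL. A minimal sketch, assuming a hypothetical database "appdb" and user "appuser":

    # Hypothetical names; the IP is the instance's private address from the question.
    export JDBC_URL="jdbc:postgresql://192.168.3.48:5432/appdb?user=appuser&password=${DB_PASSWORD}"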
Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag. What should you do?
A. Identify and optimize slow-running queries, or set parallel replication flags.
B. Stop all running queries, and re-create the replicas.
C. Edit the primary instance to upgrade to a larger disk, and increase vCPU count.
D. Edit the primary instance to add additional memory.
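As a sketch of what "set parallel replication flags" (option A) could look like, assuming a replica named my-replica; the flag names below are the MySQL parallel-replication flags documented for Cloud SQL, but verify them against your MySQL version:

    # Patching database flags restarts the replica.
    gcloud sql instances patch my-replica \
      --database-flags=slave_parallel_workers=4,slave_parallel_type=LOGICAL_CLOCK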
During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?
A. Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.
B. Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.
C. Use the gcloud instances patch command to update your existing Cloud SQL for MySQL instance.
D. Shut down your existing Cloud SQL for MySQL instance, and enable HA.
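A sketch of enabling HA in place on an existing instance (instance name hypothetical):

    # Changing the availability type to REGIONAL enables HA; this restarts the instance.
    gcloud sql instances patch my-instance --availability-type=REGIONAL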
You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?
A. Configure the automated backups to use a regional Cloud Storage bucket as a custom location.
B. Use the default configuration for the automated backups location.
C. Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.
D. Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
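A sketch of pinning automated backups to a custom region (instance name, region, and start time are hypothetical):

    gcloud sql instances patch my-instance \
      --backup-start-time=02:00 \
      --backup-location=us-central1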
Your ecommerce application connecting to your Cloud SQL for SQL Server is expected to have additional traffic due to the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so you can be notified by text message at the first sign of potential issues. What should you do?
A. Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.
B. Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.
C. Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.
D. Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.
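A sketch of the Cloud Monitoring route in option D, assuming the beta gcloud surface; the channel name and phone number are hypothetical:

    # Create an SMS notification channel.
    gcloud beta monitoring channels create \
      --display-name="oncall-sms" \
      --type=sms \
      --channel-labels=number=+15550100000
    # Attach the channel to an alerting policy on metrics such as
    # cloudsql.googleapis.com/database/cpu/utilization and
    # cloudsql.googleapis.com/database/memory/utilization.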
You finished migrating an on-premises MySQL database to Cloud SQL. You want to ensure that the daily export of a table, which was previously a cron job running on the database server, continues. You want the solution to minimize cost and operations overhead. What should you do?
A. Use Cloud Scheduler and Cloud Functions to run the daily export.
B. Create a streaming Dataflow job to export the table.
C. Set up Cloud Composer, and create a task to export the table daily.
D. Run the cron job on a Compute Engine instance to continue the export.
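A sketch of the Cloud Scheduler half of option A, assuming a hypothetical Cloud Function that calls the Cloud SQL Admin API export method for the table:

    gcloud scheduler jobs create http daily-table-export \
      --schedule="0 3 * * *" \
      --uri="https://us-central1-my-project.cloudfunctions.net/exportTable" \
      --oidc-service-account-email=exporter@my-project.iam.gserviceaccount.com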
Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?
A.
1. Use Database Migration Service to connect to your on-premises database, and choose continuous replication.
2. After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.
B.
1. Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL.
2. Schedule downtime to run each Cloud Data Fusion pipeline.
3. Verify that the migration was successful.
4. Re-point the applications to the Cloud SQL for MySQL instance.
C.
1. Pause the on-premises applications.
2. Use the mysqldump utility to dump the database content in compressed format.
3. Run gsutil -m to move the dump file to Cloud Storage.
4. Use the Cloud SQL for MySQL import option.
5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
D.
1. Pause the on-premises applications.
2. Use the mysqldump utility to dump the database content in CSV format.
3. Run gsutil -m to move the dump file to Cloud Storage.
4. Use the Cloud SQL for MySQL import option.
5. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
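For comparison, option C's dump-and-import sequence maps to commands along these lines (user, bucket, instance, and database names are hypothetical):

    mysqldump -u dump_user -p --single-transaction --set-gtid-purged=OFF mydb | gzip > mydb.sql.gz
    gsutil -m cp mydb.sql.gz gs://my-migration-bucket/
    gcloud sql import sql target-instance gs://my-migration-bucket/mydb.sql.gz \
      --database=mydb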
Your company is developing a global ecommerce website on Google Cloud. Your development team is working on a shopping cart service that is durable and elastically scalable with live traffic. Business disruptions from unplanned downtime are expected to be less than 5 minutes per month. In addition, the application needs to have very low latency writes. You need a data storage solution that has high write throughput and provides 99.99% uptime. What should you do?
A. Use Cloud SQL for data storage.
B. Use Cloud Spanner for data storage.
C. Use Memorystore for data storage.
D. Use Bigtable for data storage.
Your organization has hundreds of Cloud SQL for MySQL instances. You want to follow Google-recommended practices to optimize platform costs. What should you do?
A. Use Query Insights to identify idle instances.
B. Remove inactive user accounts.
C. Run the Recommender API to identify overprovisioned instances.
D. Build indexes on heavily accessed tables.
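A sketch of querying the Cloud SQL overprovisioned-instance recommender from option C; project and location are hypothetical, and the recommender ID is the one commonly documented, so verify it for your environment:

    gcloud recommender recommendations list \
      --project=my-project \
      --location=us-central1 \
      --recommender=google.cloudsql.instance.OverprovisionedRecommender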
Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?
A. In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the file system.
B. In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use.
C. In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.
D. In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.
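A sketch of the zero-downtime resize in option A, assuming a non-boot data disk that was formatted without a partition table and is attached as /dev/sdb (all names hypothetical):

    gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a
    # ext4 can be grown online, so no unmount or database restart is needed:
    sudo resize2fs /dev/sdb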
You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?
A. Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.
B. Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.
C. Use Database Migration Service to migrate your database.
D. Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.
You have an application that sends banking events to Bigtable cluster-a in us-east. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?
A. Use the default app profile with single-cluster routing.
B. Use the default app profile with multi-cluster routing.
C. Create a custom app profile with multi-cluster routing.
D. Create a custom app profile with single-cluster routing.
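A sketch of creating a custom app profile with multi-cluster routing (instance and profile names are hypothetical):

    gcloud bigtable app-profiles create failover-profile \
      --instance=my-instance \
      --route-any \
      --description="Route to any cluster; fail over automatically"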
Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical. The operations team consists of:
Person A is a database administrator.
Person B is an analyst who generates metric reports.
Application C is responsible for automatic backups.
You need to assign roles to team members for Cloud Spanner. Which roles should you assign?
A.
roles/spanner.databaseAdmin for Person A
roles/spanner.databaseReader for Person B
roles/spanner.backupWriter for Application C
B.
roles/spanner.databaseAdmin for Person A
roles/spanner.databaseReader for Person B
roles/spanner.backupAdmin for Application C
C.
roles/spanner.databaseAdmin for Person A
roles/spanner.databaseUser for Person B
roles/spanner.databaseReader for Application C
D.
roles/spanner.databaseAdmin for Person A
roles/spanner.databaseUser for Person B
roles/spanner.backupWriter for Application C
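A sketch of granting one of these roles; the project and service account are hypothetical:

    gcloud projects add-iam-policy-binding my-project \
      --member="serviceAccount:backup-app@my-project.iam.gserviceaccount.com" \
      --role="roles/spanner.backupWriter"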
Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?
A. Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.
B. Use Cloud SQL Auth proxy.
C. Connect to Cloud SQL using a connection that has SSL encryption.
D. Use customer-managed encryption keys with Cloud SQL.
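A sketch of creating a CMEK-protected instance; the key path, tier, region, and names are hypothetical:

    gcloud sql instances create cmek-instance \
      --database-version=POSTGRES_14 \
      --region=us-central1 \
      --tier=db-custom-2-7680 \
      --disk-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key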
Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?
A. Use Firestore.
B. Use Bigtable.
C. Use BigQuery.
D. Use Cloud Spanner.
You are designing a new gaming application that uses a highly transactional relational database to store player authentication and inventory data in Google Cloud. You want to launch the game in multiple regions. What should you do?
A. Use Cloud Spanner to deploy the database.
B. Use Bigtable with clusters in multiple regions to deploy the database.
C. Use BigQuery to deploy the database.
D. Use Cloud SQL with a regional read replica to deploy the database.
You are designing a database strategy for a new web application in one region. You need to minimize write latency. What should you do?
A. Use Cloud SQL with cross-region replicas.
B. Use high availability (HA) Cloud SQL with multiple zones.
C. Use zonal Cloud SQL without high availability (HA).
D. Use Cloud Spanner in a regional configuration.
You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?
A. Migrate to Compute Engine.
B. Migrate to Bare Metal Solution for Oracle.
C. Migrate to Google Kubernetes Engine (GKE).
D. Migrate to Google Cloud VMware Engine.
You are choosing a new database backend for an existing application. The current database is running PostgreSQL on an on-premises VM and is managed by a database administrator and operations team. The application data is relational and has light traffic. You want to minimize costs and the migration effort for this application. What should you do?
A. Migrate the existing database to Firestore.
B. Migrate the existing database to Cloud SQL for PostgreSQL.
C. Migrate the existing database to Cloud Spanner.
D. Migrate the existing database to PostgreSQL running on Compute Engine.
Your organization is currently updating an existing corporate application that is running in another public cloud to access managed database services in Google Cloud. The application will remain in the other public cloud while the database is migrated to Google Cloud. You want to follow Google-recommended practices for authentication. You need to minimize user disruption during the migration. What should you do?
A. Use workload identity federation to impersonate a service account.
B. Ask existing users to set their Google password to match their corporate password.
C. Migrate the application to Google Cloud, and use Identity and Access Management (IAM).
D. Use Google Workspace Password Sync to replicate passwords into Google Cloud.
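A sketch of the first step of option A, creating a workload identity pool that workloads in the other cloud can federate into (pool name and display name are hypothetical):

    gcloud iam workload-identity-pools create corp-pool \
      --location=global \
      --display-name="External cloud workloads"
    # A provider for the other cloud is then added to the pool, and the pool's
    # identities are allowed to impersonate a service account via
    # roles/iam.workloadIdentityUser.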
You are configuring the networking of a Cloud SQL instance. The only application that connects to this database resides on a Compute Engine VM in the same project as the Cloud SQL instance. The VM and the Cloud SQL instance both use the same VPC network, and both have an external (public) IP address and an internal (private) IP address. You want to improve network security. What should you do?
A. Disable and remove the internal IP address assignment.
B. Disable both the external IP address and the internal IP address, and instead rely on Private Google Access.
C. Specify an authorized network with the CIDR range of the VM.
D. Disable and remove the external IP address assignment.
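A sketch of removing the external IP assignment from a Cloud SQL instance (instance name hypothetical):

    gcloud sql instances patch my-instance --no-assign-ip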
You are managing two different applications: Order Management and Sales Reporting. Both applications interact with the same Cloud SQL for MySQL database. The Order Management application reads and writes to the database 24/7, but the Sales Reporting application is read-only. Both applications need the latest data. You need to ensure that the performance of the Order Management application is not affected by the Sales Reporting application. What should you do?
A. Create a read replica for the Sales Reporting application.
B. Create two separate databases in the instance, and perform dual writes from the Order Management application.
C. Use a Cloud SQL federated query for the Sales Reporting application.
D. Queue up all the requested reports in Pub/Sub, and execute the reports at night.
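A sketch of creating the read replica in option A (instance names and region are hypothetical):

    gcloud sql instances create sales-reporting-replica \
      --master-instance-name=orders-primary \
      --region=us-central1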
You are the DBA of an online tutoring application that runs on a Cloud SQL for PostgreSQL database. You are testing the implementation of the cross-regional failover configuration. The database in region R1 fails over successfully to region R2, and the database becomes available for the application to process data. During testing, certain scenarios of the application work as expected in region R2, but a few scenarios fail with database errors. The application-related database queries, when executed in isolation from Cloud SQL for PostgreSQL in region R2, work as expected. The application performs completely as expected when the database fails back to region R1. You need to identify the cause of the database errors in region R2. What should you do?
A. Determine whether the versions of Cloud SQL for PostgreSQL in regions R1 and R2 are different.
B. Determine whether the database patches of Cloud SQL for PostgreSQL in regions R1 and R2 are different.
C. Determine whether the failover of Cloud SQL for PostgreSQL from region R1 to region R2 is in progress or has completed successfully.
D. Determine whether Cloud SQL for PostgreSQL in region R2 is a near-real-time copy of region R1 but not an exact copy.
You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low. What should you do?
A. Manually scale down the number of nodes after the peak period has passed.
B. Use interleaving to co-locate parent and child rows.
C. Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
D. Use granular instance sizing in Cloud Spanner and Autoscaler.
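A sketch of the granular sizing in option D, scaling a Spanner instance by processing units rather than whole nodes; the instance name and value are hypothetical, and the open-source Autoscaler tool adjusts this value automatically:

    gcloud spanner instances update game-instance --processing-units=500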
Your company wants to migrate its MySQL, PostgreSQL, and Microsoft SQL Server on-premises databases to Google Cloud. You need a solution that provides near-zero downtime, requires no application changes, and supports change data capture (CDC). What should you do?
A. Use the native export and import functionality of the source database.
B. Create a database on Google Cloud, and use database links to perform the migration.
C. Create a database on Google Cloud, and use Dataflow for database migration.
D. Use Database Migration Service to migrate the databases.