Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?
A. Write your data into Bigtable, and use Dataproc and the Apache HBase libraries for analysis.
B. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage, and analyze it with Dataproc and the Cloud Storage connector.
C. Use Memorystore to handle your low-latency requirements and for real-time analytics.
D. Stream your data into BigQuery, and use Dataproc and the BigQuery Storage API to analyze large volumes of data.
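Bigtable's suitability for a workload like this (option A) hinges on row-key design: keys must distribute millions of writes per second across tablets while keeping per-user reads cheap. A minimal sketch of one common pattern, assuming clickstream rows keyed by user ID plus a reversed timestamp; all names and the timestamp bound are illustrative:

```python
# Illustrative Bigtable row-key design for clickstream data: keying by
# user ID plus a reversed timestamp spreads writes across users' key
# ranges and sorts each user's newest events first.
MAX_TS_MS = 10**13  # illustrative upper bound on epoch milliseconds

def clickstream_row_key(user_id: str, event_ts_ms: int) -> str:
    """Build a row key of the form '<user>#<zero-padded reversed timestamp>'."""
    reversed_ts = MAX_TS_MS - event_ts_ms
    return f"{user_id}#{reversed_ts:013d}"

# Newer events produce lexicographically smaller keys, so a prefix scan
# on one user returns the most recent clicks first.
newer = clickstream_row_key("user42", 1_700_000_000_000)
older = clickstream_row_key("user42", 1_600_000_000_000)
assert newer < older
```

A Dataproc job using the Apache HBase client libraries would then scan by user prefix rather than by raw timestamp, avoiding the hotspot a time-ordered key would create.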
You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database. What should you do?
A. Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.
B. Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.
C. Define a connection string using the Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.
D. Define a connection string using the Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.
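Because the VM is already on the corporate network and the question forbids database-side changes, the connection string in option B points straight at the private IP with a database username and password. A minimal sketch of assembling a standard PostgreSQL JDBC URL; the database name is an example, not from the question:

```python
# Illustrative sketch: a plain JDBC URL against the instance's private IP.
# "appdb" is an example database name; credentials are passed separately
# as JDBC connection properties, never embedded in source.

def jdbc_postgres_url(host: str, database: str, port: int = 5432) -> str:
    """Build a standard PostgreSQL JDBC URL: jdbc:postgresql://host:port/db."""
    return f"jdbc:postgresql://{host}:{port}/{database}"

url = jdbc_postgres_url("192.168.3.48", "appdb")
assert url == "jdbc:postgresql://192.168.3.48:5432/appdb"
```

The Cloud SQL Auth proxy options would add a local proxy process and IAM configuration, which the question's "no configuration changes" constraint is steering away from.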
Your company wants to move to Google Cloud. Your current data center is closing in six months. You are running a large, highly transactional Oracle application footprint on VMware. You need to design a solution with minimal disruption to the current architecture and provide ease of migration to Google Cloud. What should you do?
A. Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).
B. Migrate applications and Oracle databases to Compute Engine.
C. Migrate applications to Cloud SQL.
D. Migrate applications and Oracle databases to Google Kubernetes Engine (GKE).
Your digital-native business runs its database workloads on Cloud SQL. Your website must be globally accessible 24/7. You need to prepare your Cloud SQL instance for high availability (HA). You want to follow Google-recommended practices. What should you do? (Choose two.)
A. Set up manual backups.
B. Create a PostgreSQL database on-premises as the HA option.
C. Configure single zone availability for automated backups.
D. Enable point-in-time recovery.
E. Schedule automated backups.
Your customer has a global chat application that uses a multi-regional Cloud Spanner instance. The application has recently experienced degraded performance after a new version of the application was launched. Your customer asked you for assistance. During initial troubleshooting, you observed high read latency. What should you do?
A. Use query parameters to speed up frequently executed queries.
B. Change the Cloud Spanner configuration from multi-region to single region.
C. Use SQL statements to analyze SPANNER_SYS.READ_STATS* tables.
D. Use SQL statements to analyze SPANNER_SYS.QUERY_STATS* tables.
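The SPANNER_SYS.READ_STATS_TOP_* tables that option C refers to surface the most expensive read shapes per interval, which is where high read latency would show up. A sketch that only assembles the kind of query involved; running it would require a Spanner session, and the selected columns are examples of the statistics these tables expose:

```python
# Illustrative sketch: build a query against Cloud Spanner's built-in
# read statistics tables. The helper only produces SQL text; executing
# it requires a Spanner client, which is out of scope here.
VALID_INTERVALS = {"MINUTE", "10MINUTE", "HOUR"}

def read_stats_query(interval: str = "10MINUTE", limit: int = 10) -> str:
    """Return SQL listing the most CPU-expensive read shapes in an interval."""
    if interval not in VALID_INTERVALS:
        raise ValueError(f"unknown interval: {interval}")
    return (
        "SELECT read_columns, execution_count, avg_cpu_seconds "
        f"FROM SPANNER_SYS.READ_STATS_TOP_{interval} "
        "ORDER BY avg_cpu_seconds DESC "
        f"LIMIT {limit}"
    )

assert "SPANNER_SYS.READ_STATS_TOP_10MINUTE" in read_stats_query()
```

Comparing these statistics before and after the release pinpoints which reads regressed, without the disruption of reconfiguring the instance.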
You are setting up a Bare Metal Solution environment. You need to update the operating system to the latest version. You need to connect the Bare Metal Solution environment to the internet so you can receive software updates. What should you do?
A. Set up a static external IP address in your VPC network.
B. Set up bring your own IP (BYOIP) in your VPC.
C. Set up a Cloud NAT gateway on the Compute Engine VM.
D. Set up the Cloud NAT service.
Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy. What should you do?
A. Use Database Migration Service to migrate all databases to Cloud SQL.
B. Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.
C. Use data replication tools and CDC tools to enable migration.
D. Use a combination of Database Migration Service and partner tools to support the data migration strategy.
You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries. What should you do?
A. Use log messages produced by Cloud SQL.
B. Use Query Insights for Cloud SQL.
C. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
D. Use Cloud SQL instance monitoring in the Google Cloud Console.
You are troubleshooting a connection issue with a newly deployed Cloud SQL instance on Google Cloud. While investigating the Cloud SQL Proxy logs, you see the message Error 403: Access Not Configured. What should you do?
A. Check the app.yaml value cloud_sql_instances for a misspelled or incorrect instance connection name.
B. Check whether your service account has the cloudsql.instances.connect permission.
C. Enable the Cloud SQL Admin API.
D. Ensure that you are using an external (public) IP address interface.
Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)
A. Use an auto-incrementing value as the primary key.
B. Normalize the data model.
C. Promote low-cardinality attributes in multi-attribute primary keys.
D. Promote high-cardinality attributes in multi-attribute primary keys.
E. Use a bit-reversed sequential value as the primary key.
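Options D and E point at the standard remedies for Spanner hotspots: monotonically increasing keys (like a freshly loaded sequential SKU range) funnel all writes into one split. A minimal sketch of the bit-reversal idea from option E; the 64-bit width is the conventional choice for integer keys:

```python
# Illustrative sketch: bit-reversing a sequential 64-bit value scatters
# neighboring IDs across the key space, avoiding a single hot split,
# while remaining reversible (the transform is its own inverse).

def bit_reverse_64(value: int) -> int:
    """Reverse the bits of a 64-bit unsigned integer."""
    result = 0
    for _ in range(64):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

# Consecutive IDs land far apart after reversal...
assert abs(bit_reverse_64(1) - bit_reverse_64(2)) >= 2**62
# ...and applying the transform twice recovers the original ID.
assert bit_reverse_64(bit_reverse_64(12345)) == 12345
```

Promoting a high-cardinality attribute (such as SKU) to the front of a multi-attribute key achieves a similar spreading effect without transforming values.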
Your team uses thousands of connected IoT devices to collect device maintenance data for your oil and gas customers in real time. You want to design inspection routines, device repair, and replacement schedules based on insights gathered from the data produced by these devices. You need a managed solution that is highly scalable, supports a multi-cloud strategy, and offers low latency for these IoT devices. What should you do?
A. Use Firestore with Looker.
B. Use Cloud Spanner with Data Studio.
C. Use MongoDB Atlas with Charts.
D. Use Bigtable with Looker.
Your application follows a microservices architecture and uses a single large Cloud SQL instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, the CPU utilization looks normal. You want to follow Google-recommended practices to resolve and prevent these performance issues while avoiding any major refactoring. What should you do?
A. Use Cloud Spanner instead of Cloud SQL.
B. Increase the number of CPUs for your instance.
C. Increase the storage size for the instance.
D. Use many smaller Cloud SQL instances.
You need to migrate existing databases from Microsoft SQL Server 2016 Standard Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for SQL Server instance. During the discovery phase of your project, you notice that your on-premises server peaks at around 25,000 read IOPS. You need to ensure that your Cloud SQL instance is sized appropriately to maximize read performance. What should you do?
A. Create a SQL Server 2019 Standard instance on a Standard machine type with 4 vCPUs, 15 GB of RAM, and 800 GB of solid-state drive (SSD).
B. Create a SQL Server 2019 Standard instance on a High Memory machine type with at least 16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
C. Create a SQL Server 2019 Standard instance on a High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
D. Create a SQL Server 2019 Enterprise instance on a High Memory machine type with 16 vCPUs, 104 GB of RAM, and 500 GB of SSD.
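The sizing question above comes down to disk throughput arithmetic. A back-of-envelope sketch, assuming the commonly documented figure of about 30 read IOPS per GB for zonal SSD persistent disks (subject to per-instance caps that grow with vCPU count; verify current limits in the official documentation before sizing a real instance):

```python
import math

# Assumed figure for zonal SSD persistent disk; check current Google Cloud
# documentation, since published limits change over time.
READ_IOPS_PER_GB_SSD = 30

def min_ssd_gb_for_read_iops(target_iops: int) -> int:
    """Smallest SSD size (GB) whose per-GB read IOPS meet the target."""
    return math.ceil(target_iops / READ_IOPS_PER_GB_SSD)

# 25,000 read IOPS needs roughly 834 GB of SSD under this assumption, so
# 200-800 GB disks fall short while a 4 TB disk clears it comfortably.
assert min_ssd_gb_for_read_iops(25_000) == 834
```

The vCPU count matters too, since per-instance IOPS caps are tiered by machine size, which is why the larger machine types pair with the larger disks in these options.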
Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?
A. Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.
B. Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.
C. Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.
D. Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.
You are using Compute Engine on Google Cloud and your data center to manage a set of MySQL databases in a hybrid configuration. You need to create replicas to scale reads and to offload part of the management operation. What should you do?
A. Use external server replication.
B. Use Database Migration Service.
C. Use a Cloud SQL for MySQL external replica.
D. Use the mysqldump utility and binary logs.
You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse. What should you do?
A. Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.
B. Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences.
C. Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user, and use the user identifier to query.
D. Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user, and use the user identifier to query.
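The dynamic-schema requirement is what the Firestore options are testing: a document can gain and lose fields per user without migrations. A sketch of that document shape in plain Python standing in for a Firestore document; the real client applies the same merge semantics with `set(..., merge=True)`, and the field names here are examples:

```python
# Illustrative sketch of a flexible user-profile document: fields can be
# added or deleted per update, mirroring Firestore's schemaless documents.

def merge_profile(profile: dict, changes: dict) -> dict:
    """Apply adds/updates from `changes`; a None value deletes the field."""
    merged = dict(profile)
    for field, value in changes.items():
        if value is None:
            merged.pop(field, None)
        else:
            merged[field] = value
    return merged

profile = {"user_id": "u1", "theme": "dark"}
profile = merge_profile(profile, {"language": "fr", "theme": None})
assert profile == {"user_id": "u1", "language": "fr"}
```

From there, Firestore's change-capture integrations (for example, a trigger-driven pipeline) can propagate each profile update to the downstream BigQuery warehouse.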
Your application uses Cloud SQL for MySQL. Your users run reports that rely on near-real-time data; however, the additional analytics workload caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the replication lag without making changes to the current reports. Which two approaches should you implement? (Choose two.)
A. Create secondary indexes on the replica.
B. Create additional read replicas, and partition your analytics users to use different read replicas.
C. Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication, and optimize performance by setting flags on the primary instance.
D. Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication, and optimize performance by setting flags on the read replica.
E. Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.
Your company is migrating the existing infrastructure for a highly transactional application to Google Cloud. You have several databases in a MySQL database instance and need to decide how to transfer the data to Cloud SQL. You need to minimize the downtime for the migration of your 500 GB instance. What should you do?
A.
1. Create a Cloud SQL for MySQL instance for your databases, and configure Datastream to stream your database changes to Cloud SQL.
2. Select the Backfill historical data checkbox on your stream configuration to initiate Datastream to backfill any data that is out of sync between the source and destination.
3. Delete your stream when all changes are moved to Cloud SQL for MySQL, and update your application to use the new instance.
B.
1. Create a migration job using Database Migration Service.
2. Set the migration job type to Continuous, and allow the databases to complete the full dump phase and start sending data in change data capture (CDC) mode.
3. Wait for the replication delay to minimize, initiate a promotion of the new Cloud SQL instance, and wait for the migration job to complete.
4. Update your application connections to the new instance.
C.
1. Create a migration job using Database Migration Service.
2. Set the migration job type to One-time, and perform this migration during a maintenance window.
3. Stop all write workloads to the source database, and initiate the dump. Wait for the dump to be loaded into the Cloud SQL destination database and for the destination database to be promoted to the primary database.
4. Update your application connections to the new instance.
D.
1. Use the mysqldump utility to manually initiate a backup of MySQL during the application maintenance window.
2. Move the files to Cloud Storage, and import each database into your Cloud SQL instance.
3. Continue to dump each database until all the databases are migrated.
4. Update your application connections to the new instance.
Your company uses the Cloud SQL out-of-disk recommender to analyze the storage utilization trends of production databases over the last 30 days. Your database operations team uses these recommendations to proactively monitor storage utilization and implement corrective actions. You receive a recommendation that the instance is likely to run out of disk space. What should you do to address this storage alert?
A. Normalize the database to the third normal form.
B. Compress the data using a different compression algorithm.
C. Manually or automatically increase the storage capacity.
D. Create another schema to load older data.
You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your application team is running important transactions on the database when another DBA starts an on-demand backup. You want to verify the status of the backup. What should you do?
A. Check the cloudsql.googleapis.com/postgres.log instance log.
B. Run the gcloud sql operations list command.
C. Use Cloud Audit Logs to verify the status.
D. Use the Google Cloud Console.
Your company uses Bigtable for a user-facing application that displays a low-latency real-time dashboard. You need to recommend the optimal storage type for this read-intensive database. What should you do?
A. Recommend solid-state drives (SSD).
B. Recommend splitting the Bigtable instance into two instances in order to load balance the concurrent reads.
C. Recommend hard disk drives (HDD).
D. Recommend mixed storage types.
Your organization has a critical business app that is running with a Cloud SQL for MySQL backend database. Your company wants to build the most fault-tolerant and highly available solution possible. You need to ensure that the application database can survive a zonal and regional failure with a primary region of us-central1 and the backup region of us-east1. What should you do?
A.
1. Provision a Cloud SQL for MySQL instance in us-central1-a.
2. Create a multiple-zone instance in us-west1-b.
3. Create a read replica in us-east1-c.
B.
1. Provision a Cloud SQL for MySQL instance in us-central1-a.
2. Create a multiple-zone instance in us-central1-b.
3. Create a read replica in us-east1-b.
C.
1. Provision a Cloud SQL for MySQL instance in us-central1-a.
2. Create a multiple-zone instance in us-east1-b.
3. Create a read replica in us-east1-c.
D.
1. Provision a Cloud SQL for MySQL instance in us-central1-a.
2. Create a multiple-zone instance in us-east1-b.
3. Create a read replica in us-central1-b.
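The options above reduce to two checks: surviving a zonal failure requires an HA standby in a different zone of the same region, and surviving a regional failure requires a replica in the backup region. A sketch that encodes those checks, exploiting the fact that a zone name like "us-central1-a" is its region plus a zone suffix:

```python
# Illustrative sketch: validate a primary/standby/replica layout against
# the zonal- and regional-failure requirements in the question.

def region_of(zone: str) -> str:
    """Strip the trailing zone letter: 'us-central1-a' -> 'us-central1'."""
    return zone.rsplit("-", 1)[0]

def survives_zonal_and_regional(primary: str, standby: str, replica: str,
                                backup_region: str) -> bool:
    standby_ok = region_of(primary) == region_of(standby) and primary != standby
    replica_ok = region_of(replica) == backup_region
    return standby_ok and replica_ok

# A standby in us-central1-b plus a replica in us-east1 (option B's layout)
# satisfies both requirements; a standby in another region does not.
assert survives_zonal_and_regional("us-central1-a", "us-central1-b",
                                   "us-east1-b", "us-east1")
assert not survives_zonal_and_regional("us-central1-a", "us-west1-b",
                                       "us-east1-c", "us-east1")
```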
You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet network is available. What should you do?
A. Use Firestore.
B. Use Cloud SQL with an external (public) IP address.
C. Use an in-app embedded database.
D. Use Cloud Spanner.
You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out. What should you do?
A. Close idle connections or restart the instance before beginning the import operation.
B. Increase the amount of memory allocated to your instance.
C. Ensure that the service account has the Storage Admin role.
D. Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.
You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance. What should you do?
A. Take no backups, and turn off transaction log retention.
B. Take one manual backup per day, and turn off transaction log retention.
C. Turn on automated backup, and turn off transaction log retention.
D. Turn on automated backup, and turn on transaction log retention.