Associate Cloud Engineer
Question 51
You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You want to back up this instance daily for disaster recovery purposes, and you want to keep the backups for 30 days. You want the Google-recommended solution with the least management overhead and the fewest services. What should you do?
- A: 1. Update your instance's metadata to add the following value: snapshot-schedule: 0 1 * * * 2. Update your instance's metadata to add the following value: snapshot-retention: 30
- B: 1. In the Cloud Console, go to the Compute Engine Disks page and select your instance's disk. 2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters: - Schedule frequency: Daily - Start time: 1:00 AM - 2:00 AM - Autodelete snapshots after: 30 days
- C: 1. Create a Cloud Function that creates a snapshot of your instance's disk. 2. Create a Cloud Function that deletes snapshots that are older than 30 days. 3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
- D: 1. Create a bash script in the instance that copies the content of the disk to Cloud Storage. 2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket. 3. Configure the instance's crontab to execute these scripts daily at 1:00 AM.
Question 52
Your existing application running in Google Kubernetes Engine (GKE) consists of multiple pods running on four GKE n1-standard-2 nodes. You need to deploy additional pods requiring n2-highmem-16 nodes without any downtime. What should you do?
- A: Use gcloud container clusters upgrade. Deploy the new services.
- B: Create a new Node Pool and specify machine type n2-highmem-16. Deploy the new pods.
- C: Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
- D: Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
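As context for option B, adding a node pool to a running cluster is a single CLI call; a sketch, assuming a cluster named `prod-cluster` in `us-central1-a` (placeholder names):

```sh
# Add a second node pool with the larger machine type;
# the existing n1-standard-2 pool keeps serving traffic.
gcloud container node-pools create highmem-pool \
    --cluster=prod-cluster \
    --zone=us-central1-a \
    --machine-type=n2-highmem-16 \
    --num-nodes=2
```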
Question 53
You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?
- A: Create a Dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.
- B: Create a Dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.
- C: Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.
- D: Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.
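To make the external-table approach in option D concrete, a hedged sketch using the `bq` CLI, assuming the two external tables `mydataset.spanner_export` (over the Cloud Storage export) and `mydataset.bigtable_events` (over Bigtable) have already been defined, and that both share a `user_id` field (all names hypothetical):

```sh
# Join the Cloud Storage export (Spanner backup) with the
# Bigtable external table on a shared user field.
bq query --use_legacy_sql=false '
SELECT s.user_id, s.state, e.event_type, e.event_time
FROM mydataset.spanner_export AS s
JOIN mydataset.bigtable_events AS e
  ON s.user_id = e.user_id
WHERE s.user_id = "user-1234"'
```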
Question 54
You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?
- A: ג€" Create Compute Engine resources in usג€"central1ג€"b. ג€" Balance the load across both usג€"central1ג€"a and usג€"central1ג€"b.
- B: ג€" Create a Managed Instance Group and specify usג€"central1ג€"a as the zone. ג€" Configure the Health Check with a short Health Interval.
- C: ג€" Create an HTTP(S) Load Balancer. ג€" Create one or more global forwarding rules to direct traffic to your VMs.
- D: ג€" Perform regular backups of your application. ג€" Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. ג€" Restore from backups when notified.
Question 55
A colleague handed over a Google Cloud Platform project for you to maintain. As part of a security checkup, you want to review who has been granted the Project Owner role. What should you do?
- A: In the console, validate which SSH keys have been stored as project-wide keys.
- B: Navigate to Identity-Aware Proxy and check the permissions for these resources.
- C: Enable Audit Logs on the IAM & admin page for all resources, and validate the results.
- D: Use the command gcloud projects get-iam-policy to view the current role assignments.
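The command in option D can be filtered so that only Owner bindings are shown; a sketch, assuming a project ID of `my-project`:

```sh
# List only the members holding roles/owner on the project.
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/owner" \
    --format="value(bindings.members)"
```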
Question 56
You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed. What should you do?
- A: Create a new subnet in the same region as the subnet being used.
- B: Add an alias IP range to the subnet used by the GKE clusters.
- C: Create a new VPC, and set up VPC peering with the existing VPC.
- D: Expand the CIDR range of the relevant subnet for the cluster.
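For reference on option D, a subnet's primary range can be widened in place without downtime; a sketch, assuming a subnet named `gke-subnet` in `us-central1`:

```sh
# Expand the subnet's primary IPv4 range to a /20
# (the new prefix must be shorter than the current one).
gcloud compute networks subnets expand-ip-range gke-subnet \
    --region=us-central1 \
    --prefix-length=20
```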
Question 57
You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do?
- A: Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.
- B: Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.
- C: Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.
- D: Run a SELECT COUNT(*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.
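To make the dry-run mechanics in options B and C concrete: `bq` can validate a query without executing it and report the bytes it would read. A sketch using a public dataset:

```sh
# --dry_run returns the estimated bytes processed without
# executing the query or incurring any charge.
bq query --use_legacy_sql=false --dry_run \
    'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` WHERE state = "TX"'
```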
Question 58
You have a batch workload that runs every night and uses a large number of virtual machines (VMs). It is fault-tolerant and can tolerate some of the VMs being terminated. The current cost of VMs is too high. What should you do?
- A: Run a test using simulated maintenance events. If the test is successful, use preemptible N1 Standard VMs when running future jobs.
- B: Run a test using simulated maintenance events. If the test is successful, use N1 Standard VMs when running future jobs.
- C: Run a test using a managed instance group. If the test is successful, use N1 Standard VMs in the managed instance group when running future jobs.
- D: Run a test using N1 standard VMs instead of N2. If the test is successful, use N1 Standard VMs when running future jobs.
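As context for option A, preemption behavior can be rehearsed with a simulated maintenance event before committing the whole fleet; a sketch with a placeholder instance name:

```sh
# Create a preemptible VM for the batch fleet.
gcloud compute instances create batch-worker-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --preemptible

# Simulate a maintenance event to test fault tolerance.
gcloud compute instances simulate-maintenance-event batch-worker-1 \
    --zone=us-central1-a
```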
Question 59
You are working with a user to set up an application in a new VPC behind a firewall. The user is concerned about data egress. You want to configure the fewest open egress ports. What should you do?
- A: Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.
- B: Set up a high-priority (1000) rule that pairs both ingress and egress ports.
- C: Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.
- D: Set up a high-priority (1000) rule to allow the appropriate ports.
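To illustrate the deny-all-plus-allow pattern in option A (remembering that in VPC firewall rules a lower number means higher priority), a sketch with a hypothetical network `app-vpc` and port 443:

```sh
# Low-priority rule: deny all egress by default.
gcloud compute firewall-rules create deny-all-egress \
    --network=app-vpc --direction=EGRESS \
    --action=DENY --rules=all --priority=65534 \
    --destination-ranges=0.0.0.0/0

# Higher-priority rule: allow only the required port.
gcloud compute firewall-rules create allow-egress-443 \
    --network=app-vpc --direction=EGRESS \
    --action=ALLOW --rules=tcp:443 --priority=1000 \
    --destination-ranges=0.0.0.0/0
```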
Question 60
Your company runs its Linux workloads on Compute Engine instances. Your company will be working with a new operations partner that does not use Google Accounts. You need to grant your operations partner access to the instances so they can maintain the installed tooling. What should you do?
- A: Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud IAP Tunnel User.
- B: Tag all the instances with the same network tag. Create a firewall rule in the VPC to grant TCP access on port 22 for traffic from the operations partner to instances with the network tag.
- C: Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations partner.
- D: Ask the operations partner to generate SSH key pairs, and add the public keys to the VM instances.
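For reference on option D, partner-supplied public keys can be added to instance metadata; a sketch, assuming a key file `partner-keys.txt` with lines of the form `username:ssh-ed25519 AAAA... comment` and a placeholder instance name:

```sh
# Add the partner's public keys to a single instance's
# ssh-keys metadata.
gcloud compute instances add-metadata tooling-vm \
    --zone=us-central1-a \
    --metadata-from-file=ssh-keys=partner-keys.txt
```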
Question 61
You have created a code snippet that should be triggered whenever a new file is uploaded to a Cloud Storage bucket. You want to deploy this code snippet. What should you do?
- A: Use App Engine and configure Cloud Scheduler to trigger the application using Pub/Sub.
- B: Use Cloud Functions and configure the bucket as a trigger resource.
- C: Use Google Kubernetes Engine and configure a CronJob to trigger the application using Pub/Sub.
- D: Use Dataflow as a batch job, and configure the bucket as a data source.
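As context for option B, a Cloud Function deployed with a bucket trigger fires on each new object; a sketch with placeholder function and bucket names, assuming a Python entry point `process_upload` in the current directory:

```sh
# Deploy a function triggered by new objects in the bucket.
gcloud functions deploy process_upload \
    --runtime=python311 \
    --trigger-bucket=my-upload-bucket \
    --entry-point=process_upload \
    --source=.
```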