Professional Cloud DevOps Engineer

By Google
Aug, 2025

Verified

Question 1

You support a Node.js application running on Google Kubernetes Engine (GKE) in production. The application makes several HTTP requests to dependent applications. You want to anticipate which dependent applications might cause performance issues. What should you do?

  • A: Instrument all applications with Stackdriver Profiler.
  • B: Instrument all applications with Stackdriver Trace and review inter-service HTTP requests.
  • C: Use Stackdriver Debugger to review the execution of logic within each application to instrument all applications.
  • D: Modify the Node.js application to log HTTP request and response times to dependent applications. Use Stackdriver Logging to find dependent applications that are performing poorly.

Question 2

You have a pool of application servers running on Compute Engine. You need to provide a secure solution that requires the least amount of configuration and allows developers to easily access application logs for troubleshooting. How would you implement the solution on GCP?

  • A: Deploy the Stackdriver logging agent to the application servers. Give the developers the IAM Logs Viewer role to access Stackdriver and view logs.
  • B: Deploy the Stackdriver logging agent to the application servers. Give the developers the IAM Private Logs Viewer role to access Stackdriver and view logs.
  • C: Deploy the Stackdriver monitoring agent to the application servers. Give the developers the IAM Monitoring Viewer role to access Stackdriver and view metrics.
  • D: Install the gsutil command line tool on your application servers. Write a script using gsutil to upload your application logs to a Cloud Storage bucket, and then schedule it to run via cron every 5 minutes. Give the developers the IAM Object Viewer role to view the logs in the specified bucket.
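
For context, the "install a logging agent, then grant a viewer role" pattern in options A and B maps to a couple of commands. A minimal sketch, assuming a Debian-based instance, a hypothetical project my-project, and a hypothetical developer dev@example.com:

    # On each application server: install the legacy Stackdriver/Cloud Logging agent
    curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
    sudo bash add-logging-agent-repo.sh --also-install

    # Grant the developers read-only access to logs (Logs Viewer)
    gcloud projects add-iam-policy-binding my-project \
      --member="user:dev@example.com" \
      --role="roles/logging.viewer"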

Question 3

The new version of your containerized application has been tested and is ready to be deployed to production on Google Kubernetes Engine (GKE). You could not fully load-test the new version in your pre-production environment, and you need to ensure that the application does not have performance problems after deployment. Your deployment must be automated. What should you do?

  • A: Deploy the application through a continuous delivery pipeline by using canary deployments. Use Cloud Monitoring to look for performance issues, and ramp up traffic as supported by the metrics.
  • B: Deploy the application through a continuous delivery pipeline by using blue/green deployments. Migrate traffic to the new version of the application and use Cloud Monitoring to look for performance issues.
  • C: Deploy the application by using kubectl and use Config Connector to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues.
  • D: Deploy the application by using kubectl and set the spec.updateStrategy.type field to RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the kubectl rollback command if there are any issues.

Question 4

You are managing an application that runs in Compute Engine. The application uses a custom HTTP server to expose an API that is accessed by other applications through an internal TCP/UDP load balancer. A firewall rule allows access to the API port from 0.0.0.0/0. You need to configure Cloud Logging to log each IP address that accesses the API by using the fewest number of steps. What should you do first?

  • A: Enable Packet Mirroring on the VPC.
  • B: Install the Ops Agent on the Compute Engine instances.
  • C: Enable logging on the firewall rule.
  • D: Enable VPC Flow Logs on the subnet.
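
For reference, enabling logging on an existing firewall rule (option C) is a single command. A sketch, assuming a hypothetical rule named allow-api-ingress:

    gcloud compute firewall-rules update allow-api-ingress \
      --enable-logging \
      --logging-metadata=include-all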

Question 5

Your company runs an ecommerce website built with JVM-based applications and microservice architecture in Google Kubernetes Engine (GKE). The application load increases during the day and decreases during the night. Your operations team has configured the application to run enough Pods to handle the evening peak load. You want to automate scaling by only running enough Pods and nodes for the load. What should you do?

  • A: Configure the Vertical Pod Autoscaler, but keep the node pool size static.
  • B: Configure the Vertical Pod Autoscaler, and enable the cluster autoscaler.
  • C: Configure the Horizontal Pod Autoscaler, but keep the node pool size static.
  • D: Configure the Horizontal Pod Autoscaler, and enable the cluster autoscaler.
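
For context, combining Pod-level and node-level autoscaling (option D) typically involves two commands. A minimal sketch, assuming a hypothetical Deployment named storefront and a cluster named prod-cluster:

    # Scale Pods based on CPU utilization
    kubectl autoscale deployment storefront --cpu-percent=60 --min=3 --max=50

    # Let GKE add and remove nodes in the node pool as Pod demand changes
    gcloud container clusters update prod-cluster \
      --enable-autoscaling --node-pool=default-pool \
      --min-nodes=1 --max-nodes=10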

Question 6

Your organization wants to increase the availability target of an application from 99.9% to 99.99% for an investment of $2,000. The application's current revenue is $1,000,000. You need to determine whether the increase in availability is worth the investment for a single year of usage. What should you do?

  • A: Calculate the value of improved availability to be $900, and determine that the increase in availability is not worth the investment.
  • B: Calculate the value of improved availability to be $1,000, and determine that the increase in availability is not worth the investment.
  • C: Calculate the value of improved availability to be $1,000, and determine that the increase in availability is worth the investment.
  • D: Calculate the value of improved availability to be $9,000, and determine that the increase in availability is worth the investment.
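
For reference, the arithmetic implied by the question: moving from 99.9% to 99.99% availability reduces the revenue exposed to downtime by 0.1% − 0.01% = 0.09% of the year, so the value of the improvement is roughly 0.09% × $1,000,000 = $900 per year, which can then be compared against the $2,000 investment.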

Question 7

A third-party application needs to have a service account key to work properly. When you try to export the key from your cloud project, you receive an error: “The organization policy constraint iam.disableServiceAccountKeyCreation is enforced.” You need to make the third-party application work while following Google-recommended security practices.

What should you do?

  • A: Enable the default service account key, and download the key.
  • B: Remove the iam.disableServiceAccountKeyCreation policy at the organization level, and create a key.
  • C: Disable the service account key creation policy at the project's folder, and download the default key.
  • D: Add a rule to set the iam.disableServiceAccountKeyCreation policy to off in your project, and create a key.
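
For context, option D describes overriding the constraint only for your project. A sketch of the commands, assuming a hypothetical project my-project and service account sa-name@my-project.iam.gserviceaccount.com:

    # Disable enforcement of the constraint for this project only
    gcloud resource-manager org-policies disable-enforce \
      iam.disableServiceAccountKeyCreation --project=my-project

    # Create and download a key for the third-party application
    gcloud iam service-accounts keys create key.json \
      --iam-account=sa-name@my-project.iam.gserviceaccount.com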

Question 8

Your team is writing a postmortem after an incident on your external-facing application. Your team wants to improve the postmortem policy to include triggers that indicate whether an incident requires a postmortem. Based on Site Reliability Engineering (SRE) practices, which triggers should be defined in the postmortem policy? (Choose two.)

  • A: An external stakeholder asks for a postmortem.
  • B: Data is lost due to an incident.
  • C: An internal stakeholder requests a postmortem.
  • D: The monitoring system detects that one of the instances for your application has failed.
  • E: The CD pipeline detects an issue and rolls back a problematic release.

Question 9

You are implementing a CI/CD pipeline for your application in your company’s multi-cloud environment. Your application is deployed by using custom Compute Engine images and the equivalent in other cloud providers. You need to implement a solution that will enable you to build and deploy the images to your current environment and is adaptable to future changes. Which solution stack should you use?

  • A: Cloud Build with Packer
  • B: Cloud Build with Google Cloud Deploy
  • C: Google Kubernetes Engine with Google Cloud Deploy
  • D: Cloud Build with kpt

Question 10

Your application's performance in Google Cloud has degraded since the last release. You suspect that downstream dependencies might be causing some requests to take longer to complete. You need to investigate the issue with your application to determine the cause. What should you do?

  • A: Configure Error Reporting in your application.
  • B: Configure Google Cloud Managed Service for Prometheus in your application.
  • C: Configure Cloud Profiler in your application.
  • D: Configure Cloud Trace in your application.

Question 11

You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do? (Choose two.)

  • A: Create a trigger on the Cloud Build job. Set the repository event setting to ‘Pull request’.
  • B: Add the OWNERS file to the Included files filter on the trigger.
  • C: Create a trigger on the Cloud Build job. Set the repository event setting to ‘Push to a branch’
  • D: Configure a branch protection rule for the main branch on the repository.
  • E: Enable the Approval option on the trigger.
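
For context, a trigger scoped to the main branch with manual approval enabled can be created from the CLI as well as the console. A sketch, assuming a hypothetical GitHub repository my-org/my-app that is already connected to Cloud Build and a gcloud release that supports the --require-approval flag:

    gcloud builds triggers create github \
      --name=prod-image-build \
      --repo-owner=my-org --repo-name=my-app \
      --branch-pattern='^main$' \
      --build-config=cloudbuild.yaml \
      --require-approval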

Question 12

You built a serverless application by using Cloud Run and deployed the application to your production environment. You want to identify the resource utilization of the application for cost optimization. What should you do?

  • A: Use Cloud Trace with distributed tracing to monitor the resource utilization of the application.
  • B: Use Cloud Profiler with Ops Agent to monitor the CPU and memory utilization of the application.
  • C: Use Cloud Monitoring to monitor the container CPU and memory utilization of the application.
  • D: Use Cloud Ops to create logs-based metrics to monitor the resource utilization of the application.

Question 13

You support the backend of a mobile phone game that runs on a Google Kubernetes Engine (GKE) cluster. The application is serving HTTP requests from users.
You need to implement a solution that will reduce the network cost. What should you do?

  • A: Configure the VPC as a Shared VPC Host project.
  • B: Configure your network services on the Standard Tier.
  • C: Configure your Kubernetes cluster as a Private Cluster.
  • D: Configure a Google Cloud HTTP Load Balancer as Ingress.

Question 14

Your company is using HTTPS requests to trigger a public Cloud Run-hosted service accessible at the https://booking-engine-abcdef.a.run.app URL. You need to give developers the ability to test the latest revisions of the service before the service is exposed to customers. What should you do?

  • A: Run the gcloud run deploy booking-engine --no-traffic --tag dev command. Use the https://dev--booking-engine-abcdef.a.run.app URL for testing.
  • B: Run the gcloud run services update-traffic booking-engine --to-revisions LATEST=1 command. Use the https://booking-engine-abcdef.a.run.app URL for testing.
  • C: Pass the curl -H “Authorization: Bearer $(gcloud auth print-identity-token)” auth token. Use the https://booking-engine-abcdef.a.run.app URL to test privately.
  • D: Grant the roles/run.invoker role to the developers testing the booking-engine service. Use the https://booking-engine-abcdef.private.run.app URL for testing.
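
For context, the revision-tag workflow referenced in option A looks roughly like this. A sketch, assuming a hypothetical container image URL and the service name from the question:

    # Deploy the latest revision without sending it any production traffic
    gcloud run deploy booking-engine \
      --image=us-docker.pkg.dev/my-project/app/booking-engine:latest \
      --no-traffic --tag=dev

    # Developers test at the tagged URL returned by the deploy command; once the
    # revision is verified, traffic can be migrated to it
    gcloud run services update-traffic booking-engine --to-latest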

Question 15

You are configuring connectivity across Google Kubernetes Engine (GKE) clusters in different VPCs. You notice that the nodes in Cluster A are unable to access the nodes in Cluster B. You suspect that the workload access issue is due to the network configuration. You need to troubleshoot the issue but do not have execute access to workloads and nodes. You want to identify the layer at which the network connectivity is broken. What should you do?

  • A: Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.
  • B: Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.
  • C: Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.
  • D: Enable VPC Flow Logs in both VPCs, and monitor packet drops.
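
For context, a Connectivity Test analyzes the network configuration without requiring any access to the workloads or nodes. A sketch, assuming hypothetical node IP addresses and VPC networks:

    gcloud network-management connectivity-tests create gke-a-to-gke-b \
      --source-ip-address=10.0.1.10 \
      --source-network=projects/my-project/global/networks/vpc-a \
      --destination-ip-address=10.8.1.10 \
      --destination-network=projects/my-project/global/networks/vpc-b \
      --protocol=TCP --destination-port=443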

Question 16

You manage an application that runs in Google Kubernetes Engine (GKE) and uses the blue/green deployment methodology. Extracts of the Kubernetes manifests are shown below:

Image 1

The Deployment app-green was updated to use the new version of the application. During post-deployment monitoring, you notice that the majority of user requests are failing. You did not observe this behavior in the testing environment. You need to mitigate the incident impact on users and enable the developers to troubleshoot the issue. What should you do?

  • A: Update the Deployment app-blue to use the new version of the application.
  • B: Update the Deployment app-green to use the previous version of the application.
  • C: Change the selector on the Service app-svc to app: my-app.
  • D: Change the selector on the Service app-svc to app: my-app, version: blue.
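
For context, switching the Service selector back to the stable Deployment (as in option D) is a one-line change. A sketch, assuming the label keys shown in the question's options:

    kubectl patch service app-svc \
      -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'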

Question 17

You are running a web application deployed to a Compute Engine managed instance group. Ops Agent is installed on all instances. You recently noticed suspicious activity from a specific IP address. You need to configure Cloud Monitoring to view the number of requests from that specific IP address with minimal operational overhead. What should you do?

  • A: Configure the Ops Agent with a logging receiver. Create a logs-based metric.
  • B: Create a script to scrape the web server log. Export the IP address request metrics to the Cloud Monitoring API.
  • C: Update the application to export the IP address request metrics to the Cloud Monitoring API.
  • D: Configure the Ops Agent with a metrics receiver.
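
For context, once the Ops Agent's logging receiver is collecting the web server's access logs, a logs-based counter metric can be created from the CLI. A sketch, assuming a hypothetical suspicious IP 203.0.113.7; the exact filter field depends on how the access log entries are parsed:

    gcloud logging metrics create suspicious_ip_requests \
      --description="Requests from 203.0.113.7" \
      --log-filter='jsonPayload.remote_addr="203.0.113.7"'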

Question 18

Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?

  • A: Store public and private charts in OCI format by using Artifact Registry.
  • B: Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider.
  • C: Store public and private charts by using Git repository. Configure Cloud Build to synchronize contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage-googleapis.com/[helmchart] as the Helm repository.
  • D: Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with Cloud Storage bucket as the storage backend.
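
For context, Artifact Registry stores Helm charts as OCI artifacts in a Docker-format repository (option A). A sketch, assuming Helm 3.8 or later and a hypothetical repository and project:

    gcloud artifacts repositories create helm-charts \
      --repository-format=docker --location=us-central1

    # Package a chart and push it, then reference it from dependent charts
    helm package ./mychart
    helm push mychart-0.1.0.tgz oci://us-central1-docker.pkg.dev/my-project/helm-charts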

Question 19

You use Terraform to manage an application deployed to a Google Cloud environment. The application runs on instances deployed by a managed instance group. The Terraform code is deployed by using a CI/CD pipeline. When you change the machine type on the instance template used by the managed instance group, the pipeline fails at the terraform apply stage with the following error message:

Image 1

You need to update the instance template and minimize disruption to the application and the number of pipeline runs.

What should you do?

  • A: Delete the managed instance group, and recreate it after updating the instance template.
  • B: Add a new instance template, update the managed instance group to use the new instance template, and delete the old instance template.
  • C: Remove the managed instance group from the Terraform state file, update the instance template, and reimport the managed instance group.
  • D: Set the create_before_destroy meta-argument to true in the lifecycle block on the instance template.
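
For context, option D refers to Terraform's lifecycle meta-argument, which makes Terraform create the replacement instance template before destroying the one still referenced by the managed instance group. A minimal sketch of the relevant resource, assuming hypothetical names and values:

    cat > instance_template.tf <<'EOF'
    resource "google_compute_instance_template" "app" {
      name_prefix  = "app-template-"   # let Terraform generate a fresh name on each change
      machine_type = "e2-medium"

      disk {
        source_image = "debian-cloud/debian-12"
      }

      network_interface {
        network = "default"
      }

      lifecycle {
        create_before_destroy = true   # create the new template before removing the old one
      }
    }
    EOF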

Question 20

Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?

  • A: Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into a BigQuery dataset.
  • B: Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
  • C: Use Cloud Logging to configure an export sink at each project level to export all logs into a BigQuery dataset
  • D: Use Cloud Logging to configure an export sink at each project level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
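
For context, option B combines an organization-level aggregated sink with a locked retention policy on the destination bucket. A sketch, assuming a hypothetical organization ID and bucket name:

    # Route all logs in the organization (including child projects) to Cloud Storage
    gcloud logging sinks create org-archive-sink \
      storage.googleapis.com/org-log-archive \
      --organization=123456789012 --include-children

    # Enforce a seven-year retention policy and lock it (locking is irreversible)
    gsutil retention set 7y gs://org-log-archive
    gsutil retention lock gs://org-log-archive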

Question 21

You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?

  • A: Run the kubectl rollout undo command.
  • B: Delete the new container image, and delete the running Pods.
  • C: Update the Kubernetes Service to point to the previous Kubernetes Deployment.
  • D: Scale the new Kubernetes Deployment to zero.

Question 22

You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?

  • A: Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.
  • B: Create a Pub/Sub topic, subscription, and logging sink. Configure the logging sink to send all logs into the topic. Give your client access to the topic to retrieve the logs.
  • C: Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.
  • D: Create a logs bucket and logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.
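
For context, option D can be implemented without touching application code. A sketch, assuming a hypothetical project my-project:

    # Create a user-defined log bucket with one-year retention
    gcloud logging buckets create client-logs \
      --location=global --retention-days=365

    # Route Cloud Run and Cloud Functions logs into that bucket
    gcloud logging sinks create client-logs-sink \
      logging.googleapis.com/projects/my-project/locations/global/buckets/client-logs \
      --log-filter='resource.type=("cloud_run_revision" OR "cloud_function")'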

Question 23

You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?

  • A: Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods.
  • B: Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters.
  • C: Use Binary Authorization to attest images during your CI/CD pipeline.
  • D: Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images.

Question 24

You encountered a major service outage that affected all users of the service for multiple hours. After several hours of incident management, the service returned to normal, and user access was restored. You need to provide an incident summary to relevant stakeholders following the Site Reliability Engineering recommended practices. What should you do first?

  • A: Call individual stakeholders to explain what happened.
  • B: Develop a post-mortem to be distributed to stakeholders.
  • C: Send the Incident State Document to all the stakeholders.
  • D: Require the engineer responsible to write an apology email to all stakeholders.

Question 25

You have an application that runs in Google Kubernetes Engine (GKE). The application consists of several microservices that are deployed to GKE by using Deployments and Services. One of the microservices is experiencing an issue where a Pod returns 403 errors after the Pod has been running for more than five hours. Your development team is working on a solution, but the issue will not be resolved for a month. You need to ensure continued operations until the microservice is fixed. You want to follow Google-recommended practices and use the fewest number of steps. What should you do?

  • A: Create a cron job to terminate any Pods that have been running for more than five hours.
  • B: Add an HTTP liveness probe to the microservice's deployment.
  • C: Monitor the Pods, and terminate any Pods that have been running for more than five hours.
  • D: Configure an alert to notify you whenever a Pod returns 403 errors.
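
For context, option B relies on Kubernetes restarting the container automatically once it starts failing its health checks. A minimal sketch of adding a livenessProbe to the microservice's container spec, assuming a hypothetical Deployment named my-microservice and a /healthz endpoint on port 8080:

    kubectl patch deployment my-microservice --type='json' -p='[
      {"op": "add", "path": "/spec/template/spec/containers/0/livenessProbe",
       "value": {"httpGet": {"path": "/healthz", "port": 8080},
                 "initialDelaySeconds": 10, "periodSeconds": 30, "failureThreshold": 3}}
    ]'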