Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company's mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100 Mbps internet connection.
What actions will meet your company's needs?
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.
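The gsutil -m compressed-upload step that several of these options mention could look like the following sketch; the bucket name and local path are placeholders:

```shell
# Parallel (-m) upload of .csv files, gzip-compressing them in transit (-z csv).
# "my-archive-bucket" and "daily-export/" are hypothetical names.
gsutil -m cp -z csv daily-export/*.csv gs://my-archive-bucket/daily/
```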
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?
A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
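The .boto approach in option A can be sketched as follows; the key value is a placeholder, and gsutil reads it from the [GSUtil] section when copying objects:

```shell
# Excerpt from a .boto configuration file (customer-supplied encryption key):
#   [GSUtil]
#   encryption_key = <base64-encoded 256-bit AES key>
#
# With that in place, uploads are encrypted with the supplied key:
gsutil cp files/* gs://my-bucket/
```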
You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public internet. What should you do?
A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database.
B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.
C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.
D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do?
A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
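The gcloud-then-kubectl workflow described in option B might look like this sketch; the cluster name and zone are placeholders:

```shell
# Create a GKE cluster and fetch credentials for kubectl.
gcloud container clusters create my-cluster --zone us-central1-a
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Apply the Deployment manifest the development team provided.
kubectl apply -f deployment.yaml
```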
You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?
A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.
B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.
D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.
You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data center outage in any of the zones within a GCP region. What should you do?
A. Configure a Cloud SQL instance with high availability enabled.
B. Configure a Cloud Spanner instance with a regional instance configuration.
C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets.
D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
You have deployed an application to Google Kubernetes Engine (GKE), and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a post-mortem. What should you do?
A. Use gcloud sql instances restart.
B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role.
C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for (GKE) and Cloud SQL.
D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed.
You want to optimize storage and follow Google-recommended practices. What should you do?
A. Configure the expiration time for your tables at 45 days
B. Make the tables time-partitioned, and configure the partition expiration at 45 days
C. Rely on BigQuery's default behavior to prune application logs older than 45 days
D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days
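The time-partitioned approach in option B can be sketched with the bq tool; the dataset and table names are placeholders, and the expiration flag takes seconds (45 * 86400 = 3,888,000):

```shell
# Create a day-partitioned table whose partitions auto-expire after 45 days.
bq mk --table \
  --time_partitioning_type=DAY \
  --time_partitioning_expiration=3888000 \
  logs_dataset.app1_logs
```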
Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables.
You want analysts from each country to be able to see and query only the data for their respective countries.
How should you configure the access rights?
A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group.
C. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group.
D. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version.
What should you do?
A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
C. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the update and current applications.
D. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split traffic between the new and current applications.
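The version-and-traffic-split approach described in option B can be sketched with gcloud; the version names and split ratios are placeholders:

```shell
# Deploy the update as a new version without routing traffic to it.
gcloud app deploy --version v2 --no-promote

# Send 10% of traffic to the new version, 90% to the current one.
gcloud app services set-traffic default --splits v1=0.9,v2=0.1
```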
Your company creates rendering software which users can download from the company website. Your company has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-recommended practices.
How should you store the files?
A. Save the files in a Multi-Regional Cloud Storage bucket.
B. Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region.
C. Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region.
D. Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections.
What should you do?
A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.
B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in US, EU, and Asia. Run the ETL process using the data in the bucket.
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.
For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?
A. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
B. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
D. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
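The DELETE-with-Age lifecycle rule mentioned in these options can be expressed as a JSON policy and applied with `gsutil lifecycle set lifecycle.json gs://my-bucket`; the Age condition is in days, so 36 months is approximated here as 1,095 days:

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 1095}
    }
  ]
}
```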
Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity management.
What should you do?
A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider.
D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don't run out of storage, keep CPU usage below 75% across cores, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements?
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master.
C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core machine type to reduce replication lag.
D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.
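The automatic-storage-increase step mentioned in options A and B can be enabled with a single gcloud command; the instance name is a placeholder:

```shell
# Let Cloud SQL grow the instance's storage automatically as it fills up.
gcloud sql instances patch my-crm-db --storage-auto-increase
```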
Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:
✑ Services are deployed redundantly across multiple regions in the US and Europe
✑ Only frontend services are exposed on the public internet
✑ They can provide a single frontend IP for their fleet of services
✑ Deployment artifacts are immutable
Which set of products should they use?
A. Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine
B. Google Cloud Storage, Google App Engine, Google Network Load Balancer
C. Google Kubernetes Registry, Google Container Engine, Google HTTP(S) Load Balancer
D. Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager
As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load.
They want to ensure that:
✑ The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day.
✑ Their administrators are notified automatically when their application reports errors.
✑ They can filter their aggregated logs down in order to debug one piece of the application across many hosts.
Which Google StackDriver features should they use?
A. Logging, Alerts, Insights, Debug
B. Monitoring, Trace, Debug, Logging
C. Monitoring, Logging, Alerts, Error Reporting
D. Monitoring, Logging, Debug, Error Reporting
TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600 byte records a second for 40 TB an hour.
How should you design the data ingestion?
A. Vehicles write data directly to GCS
B. Vehicles write data directly to Google Cloud Pub/Sub
C. Vehicles stream data directly to Google BigQuery
D. Vehicles continue to write data using the existing system (FTP)
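The stated ingestion volume can be sanity-checked with simple arithmetic; this sketch assumes decimal units (1 TB = 10^12 bytes) and one record per vehicle per second:

```python
# 20 million vehicles, each producing one 600-byte record per second.
vehicles = 20_000_000
record_bytes = 600

bytes_per_second = vehicles * record_bytes      # 12 GB/s
tb_per_hour = bytes_per_second * 3600 / 1e12    # decimal terabytes per hour

print(round(tb_per_hour, 1))  # 43.2, roughly the "40 TB an hour" cited
```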
The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP).
The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects.
What can they do?
A. Grant the operations engineer access to use Google Cloud Shell.
B. Configure a VPN connection to GCP to allow SSH access to the cloud VMs.
C. Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.
D. Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.
The migration of JencoMart's application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput.
What are three potential bottlenecks? (Choose three.)
A. A single VPN tunnel, which limits throughput
B. A tier of Google Cloud Storage that is not suited for this task
C. A copy command that is not suited to operate over long distances
D. Fewer virtual machines (VMs) in GCP than on-premises machines
E. A separate storage layer outside the VMs, which is not suited for this task
F. Complicated internet connectivity between the on-premises infrastructure and GCP
Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance.
What should you do?
A. Engage with a security company to run web scrapers that look for your users' authentication data on malicious websites and notify you if any is found.
B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.
Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.
How should you collect these logs from both VMs and services?
A. All admin and VM system logs are automatically collected by Stackdriver.
B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.
C. Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it.
D. Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.
You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
A. Cloud Pub/Sub alone
B. Cloud Pub/Sub to Cloud Dataflow
C. Cloud Pub/Sub to Stackdriver
D. Cloud Pub/Sub to Cloud SQL
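Pub/Sub provides at-least-once, unordered delivery, so a downstream step (the role Cloud Dataflow plays in option B) must deduplicate and reorder events before they reach the legacy API. This plain-Python sketch illustrates the idea with hypothetical event tuples:

```python
def dedupe_and_order(events):
    """Drop repeated event IDs, then emit events in timestamp order.

    `events` is a list of (event_id, timestamp, payload) tuples, as they
    might arrive from Pub/Sub: possibly duplicated and out of order.
    """
    seen = set()
    unique = []
    for event_id, ts, payload in events:
        if event_id in seen:
            continue  # duplicate redelivery; skip it
        seen.add(event_id)
        unique.append((event_id, ts, payload))
    # Emit in strict chronological order for the downstream consumer.
    return sorted(unique, key=lambda e: e[1])

raw = [(2, 20, "b"), (1, 10, "a"), (2, 20, "b"), (3, 30, "c")]
print(dedupe_and_order(raw))  # [(1, 10, 'a'), (2, 20, 'b'), (3, 30, 'c')]
```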
For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?
A. Google Kubernetes Engine with an SSL Ingress
B. Cloud IoT Core with public/private key pairs
C. Compute Engine with project-wide SSH keys
D. Compute Engine with specific SSH keys
The current Dress4Win system architecture has high latency to some customers because it is located in one data center.
As part of a future evaluation, and to optimize for performance in the cloud, Dress4Win wants to distribute its system architecture across multiple locations on Google Cloud Platform.
Which approach should they use?
A. Use regional managed instance groups and a global load balancer to increase performance, because the regional managed instance groups can grow instances in each region separately based on traffic.
B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.
C. Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.
D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of separate managed instance groups.