Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 8

You support a web application that runs on App Engine and uses CloudSQL and Cloud Storage for data storage. After a short spike in website traffic, you notice a big increase in latency for all user requests, in CPU use, and in the number of processes running the application. Initial troubleshooting reveals:

After the initial spike in traffic, load levels returned to normal but users still experience high latency.

Requests for content from the CloudSQL database and images from Cloud Storage show the same high latency.

No changes were made to the website around the time the latency increased.

There is no increase in the number of errors to the users.

You expect another spike in website traffic in the coming days and want to make sure users don't experience latency. What should you do?

A. Upgrade the GCS buckets to Multi-Regional.
B. Enable high availability on the CloudSQL instances.
C. Move the application from App Engine to Compute Engine.
D. Modify the App Engine configuration to have additional idle instances.
Suggested answer: D

Explanation:

App Engine automatically scales the number of instances in response to processing volume, based on the automatic_scaling settings provided on a per-version basis in the configuration file. Idle instances are started ahead of demand, so keeping more of them warm lets the application absorb a traffic spike without the latency of spinning up new instances. https://cloud.google.com/appengine/docs/standard/python/how-instances-are-managed

https://cloud.google.com/appengine/docs/standard/python/config/appref

max_idle_instances Optional. The maximum number of idle instances that App Engine should maintain for this version. Specify a value from 1 to 1000. If not specified, the default value is automatic, which means App Engine will manage the number of idle instances. Keep the following in mind: A high maximum reduces the number of idle instances more gradually when load levels return to normal after a spike. This helps your application maintain steady performance through fluctuations in request load, but also raises the number of idle instances (and consequent running costs) during such periods of heavy load.
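As a sketch, the idle-instance settings live under automatic_scaling in app.yaml (the runtime and values below are illustrative, not a recommendation):

```yaml
# app.yaml -- illustrative values; tune to your traffic profile
runtime: python39
automatic_scaling:
  # Keep warm instances ready so a traffic spike is absorbed
  # without cold-start latency.
  min_idle_instances: 5
  # Let App Engine manage how idle instances wind down after a spike.
  max_idle_instances: automatic
```

Raising min_idle_instances is the knob that keeps extra instances ready before the expected spike; max_idle_instances governs how gradually they are released afterwards.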

Your application runs on Google Cloud Platform (GCP). You need to implement Jenkins for deploying application releases to GCP. You want to streamline the release process, lower operational toil, and keep user data secure. What should you do?

A. Implement Jenkins on local workstations.
B. Implement Jenkins on Kubernetes on-premises.
C. Implement Jenkins on Google Cloud Functions.
D. Implement Jenkins on Compute Engine virtual machines.
Suggested answer: D

Explanation:

Running Jenkins on Compute Engine keeps the deployment pipeline and user data inside GCP, and the official Google Compute Engine plugin can provision ephemeral build agents on demand, which streamlines releases and lowers operational toil.

https://plugins.jenkins.io/google-compute-engine/

You are working with a government agency that requires you to archive application logs for seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of storage. What should you do?

A. Create a Cloud Storage bucket and develop your application to send logs directly to the bucket.
B. Develop an App Engine application that pulls the logs from Stackdriver and saves them in BigQuery.
C. Create an export in Stackdriver and configure Cloud Pub/Sub to store logs in permanent storage for seven years.
D. Create a sink in Stackdriver, name it, create a bucket on Cloud Storage for storing archived logs, and then select the bucket as the log export destination.
Suggested answer: D

Explanation:

A Logging sink routes matching log entries to a Cloud Storage bucket automatically, with no application changes, and the Coldline or Archive storage classes minimize long-term storage cost. https://cloud.google.com/logging/docs/routing/overview
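The setup can be sketched with gcloud and gsutil (bucket, sink, and filter names below are illustrative, and WRITER_IDENTITY stands for the sink's service account, which the create command prints):

```shell
# Create a low-cost Coldline bucket for the archived logs.
gsutil mb -c coldline -l US gs://my-archived-logs

# Route matching log entries to the bucket via a sink.
gcloud logging sinks create archive-sink \
  storage.googleapis.com/my-archived-logs \
  --log-filter='resource.type="gae_app"'

# Grant the sink's writer identity permission to write objects.
gsutil iam ch \
  serviceAccount:WRITER_IDENTITY:roles/storage.objectCreator \
  gs://my-archived-logs
```

A bucket lifecycle rule or retention policy can then enforce the seven-year retention requirement.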

You support a trading application written in Python and hosted on App Engine flexible environment. You want to customize the error information being sent to Stackdriver Error Reporting. What should you do?

A. Install the Stackdriver Error Reporting library for Python, and then run your code on a Compute Engine VM.
B. Install the Stackdriver Error Reporting library for Python, and then run your code on Google Kubernetes Engine.
C. Install the Stackdriver Error Reporting library for Python, and then run your code on App Engine flexible environment.
D. Use the Stackdriver Error Reporting API to write errors from your application to ReportedErrorEvent, and then generate log entries with properly formatted error messages in Stackdriver Logging.
Suggested answer: D

Explanation:

https://cloud.google.com/error-reporting/docs/formatting-error-messages

https://cloud.google.com/error-reporting/docs/reference/libraries#client-libraries-install-python

There is no need to install the Error Reporting library on App Engine flexible environment; writing properly formatted ReportedErrorEvent entries to Stackdriver Logging is sufficient, and it lets you customize the error information that is reported.
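A minimal sketch of such a formatted entry, assuming a hypothetical service name and version: a JSON payload that carries serviceContext and a message containing a stack trace (or the ReportedErrorEvent @type) is picked up by Error Reporting once it reaches Stackdriver Logging.

```shell
# Hypothetical service/version; the payload shape follows the
# "formatting error messages" guide linked above.
PAYLOAD=$(cat <<'EOF'
{
  "@type": "type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent",
  "serviceContext": {"service": "trading-app", "version": "1.0.0"},
  "message": "Traceback (most recent call last):\n  File \"main.py\", line 42, in execute_trade\nValueError: invalid order size"
}
EOF
)
# On App Engine flexible environment, logging this payload as a
# structured (JSON) log entry is enough for Error Reporting to
# ingest it as an error event.
echo "$PAYLOAD"
```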

You need to define Service Level Objectives (SLOs) for a high-traffic multi-region web application. Customers expect the application to always be available and have fast response times. Customers are currently happy with the application performance and availability. Based on current measurements, you observe that the 90th percentile of latency is 120ms and the 95th percentile of latency is 275ms over a 28-day window. What latency SLO would you recommend to the team to publish?

A. 90th percentile -- 100ms; 95th percentile -- 250ms
B. 90th percentile -- 120ms; 95th percentile -- 275ms
C. 90th percentile -- 150ms; 95th percentile -- 300ms
D. 90th percentile -- 250ms; 95th percentile -- 400ms
Suggested answer: C

Explanation:

Since users are already happy, publish targets slightly looser than current performance: an SLO that exactly matches (or is tighter than) today's measurements leaves no room for normal variation and would burn the error budget immediately, while a much looser one (option D) would allow a noticeable degradation. https://sre.google/sre-book/service-level-objectives/

You support a large service with a well-defined Service Level Objective (SLO). The development team deploys new releases of the service multiple times a week. If a major incident causes the service to miss its SLO, you want the development team to shift its focus from working on features to improving service reliability. What should you do before a major incident occurs?

A. Develop an appropriate error budget policy in cooperation with all service stakeholders.
B. Negotiate with the product team to always prioritize service reliability over releasing new features.
C. Negotiate with the development team to reduce the release frequency to no more than once a week.
D. Add a plugin to your Jenkins pipeline that prevents new releases whenever your service is out of SLO.
Suggested answer: A

Explanation:

The incident has not yet occurred, and the development team is already pushing new features multiple times a week. Option A is about defining an error budget policy, not the error budget itself (which already exists): bring all stakeholders together and agree in advance on how the error budget will be consumed, striking a balance between feature deployment and reliability.

The goals of such a policy are to protect customers from repeated SLO misses and to provide an incentive to balance reliability with other features. https://sre.google/workbook/error-budget-policy/

Your company is developing applications that are deployed on Google Kubernetes Engine (GKE). Each team manages a different application. You need to create the development and production environments for each team, while minimizing costs. Different teams should not be able to access other teams' environments. What should you do?

A. Create one GCP Project per team. In each project, create a cluster for Development and one for Production. Grant the teams IAM access to their respective clusters.
B. Create one GCP Project per team. In each project, create a cluster with a Kubernetes namespace for Development and one for Production. Grant the teams IAM access to their respective clusters.
C. Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Identity Aware Proxy so that each team can only access its own namespace.
D. Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Kubernetes Role-based access control (RBAC) so that each team can only access its own namespace.
Suggested answer: D

Explanation:

Sharing two clusters across teams minimizes cost, and Kubernetes RBAC (not Identity-Aware Proxy, which protects HTTP access to applications) is the mechanism for restricting each team to its own namespace. https://cloud.google.com/architecture/prep-kubernetes-engine-for-prod#roles_and_groups
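A sketch of the per-team RBAC wiring, assuming a hypothetical team-a namespace and group (repeat per team in each cluster):

```yaml
# Role scoped to team-a's namespace: only resources in that
# namespace are reachable through it.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: team-a-dev
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# Bind the role to the team's group (a hypothetical Google Group).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: team-a-dev-binding
subjects:
- kind: Group
  name: team-a@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-dev
  apiGroup: rbac.authorization.k8s.io
```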

Some of your production services are running in Google Kubernetes Engine (GKE) in the eu-west-1 region. Your build system runs in the us-west-1 region. You want to push the container images from your build system to a scalable registry to maximize the bandwidth for transferring the images to the cluster. What should you do?

A. Push the images to Google Container Registry (GCR) using the gcr.io hostname.
B. Push the images to Google Container Registry (GCR) using the us.gcr.io hostname.
C. Push the images to Google Container Registry (GCR) using the eu.gcr.io hostname.
D. Push the images to a private image registry running on a Compute Engine instance in the eu-west-1 region.
Suggested answer: C

Explanation:

GCR hostnames and their storage locations:

gcr.io -- data centers in the United States
asia.gcr.io -- data centers in Asia
eu.gcr.io -- data centers within member states of the European Union
us.gcr.io -- data centers in the United States

Using eu.gcr.io stores the images close to the eu-west-1 cluster, maximizing bandwidth for image pulls.
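For example (the project and image names are illustrative), the build system tags and pushes to the EU hostname:

```shell
# Tag the locally built image for the EU-hosted registry, then push.
docker tag my-app:latest eu.gcr.io/PROJECT_ID/my-app:latest
docker push eu.gcr.io/PROJECT_ID/my-app:latest
```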

You manage several production systems that run on Compute Engine in the same Google Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances. You want to know how much it costs to run each of the systems. What should you do?

A. In the Google Cloud Platform Console, use the Cost Breakdown section to visualize the costs per system.
B. Assign all instances a label specific to the system they run. Configure BigQuery billing export and query costs per label.
C. Enrich all instances with metadata specific to the system they run. Configure Stackdriver Logging to export to BigQuery, and query costs based on the metadata.
D. Name each virtual machine (VM) after the system it runs. Set up a usage report export to a Cloud Storage bucket. Configure the bucket as a source in BigQuery to query costs based on VM name.
Suggested answer: B

Explanation:

Labels are included in the BigQuery billing export, so per-system costs can be grouped with a simple query. https://cloud.google.com/billing/docs/how-to/export-data-bigquery
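Assuming a hypothetical "system" label key and export table (the export table is named gcp_billing_export_v1_<BILLING_ACCOUNT_ID> by default), the per-label query can be sketched as:

```shell
# Dataset and table names below are illustrative.
bq query --use_legacy_sql=false '
SELECT l.value AS system, SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,
  UNNEST(labels) AS l
WHERE l.key = "system"
GROUP BY system
ORDER BY total_cost DESC'
```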

You use Cloud Build to build and deploy your application. You want to securely incorporate database credentials and other application secrets into the build pipeline. You also want to minimize the development effort. What should you do?

A. Create a Cloud Storage bucket and use the built-in encryption at rest. Store the secrets in the bucket and grant Cloud Build access to the bucket.
B. Encrypt the secrets and store them in the application repository. Store a decryption key in a separate repository and grant Cloud Build access to the repository.
C. Use client-side encryption to encrypt the secrets and store them in a Cloud Storage bucket. Store a decryption key in the bucket and grant Cloud Build access to the bucket.
D. Use Cloud Key Management Service (Cloud KMS) to encrypt the secrets and include them in your Cloud Build deployment configuration. Grant Cloud Build access to the KeyRing.
Suggested answer: D

Explanation:

Cloud Build has built-in support for KMS-encrypted secrets: encrypted values are declared in the build configuration and decrypted at build time once Cloud Build is granted access to the key ring, so no extra storage or custom decryption logic is needed. https://cloud.google.com/build/docs/securing-builds/use-encrypted-credentials
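A cloudbuild.yaml sketch (the project, key ring, key, script, and ciphertext below are illustrative):

```yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  # deploy.sh is a hypothetical script that reads the decrypted
  # DB_PASSWORD from its environment.
  args: ['-c', './deploy.sh']
  secretEnv: ['DB_PASSWORD']
secrets:
- kmsKeyName: projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key
  secretEnv:
    # base64-encoded ciphertext produced by `gcloud kms encrypt`
    DB_PASSWORD: CiQAB...
```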
