Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 6

You are running an application in a virtual machine (VM) using a custom Debian image. The image has the Stackdriver Logging agent installed. The VM has the cloud-platform scope. The application is logging information via syslog. You want to use Stackdriver Logging in the Google Cloud Platform Console to visualize the logs. You notice that syslog is not showing up in the 'All logs' dropdown list of the Logs Viewer. What is the first thing you should do?

A. Look for the agent's test log entry in the Logs Viewer.
B. Install the most recent version of the Stackdriver agent.
C. Verify the VM service account access scope includes the monitoring.write scope.
D. SSH to the VM and execute the following command: ps ax | grep fluentd
Suggested answer: D

Explanation:

https://cloud.google.com/compute/docs/access/service-accounts#associating_a_service_account_to_an_instance
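
A minimal sketch of the check behind answer D, assuming the legacy google-fluentd Logging agent is installed on the VM:

# Confirm the Logging agent (google-fluentd) process is running
ps ax | grep -v grep | grep fluentd
sudo service google-fluentd status
# If it is not running, restart it and inspect the agent's own log for errors
sudo service google-fluentd restart
sudo tail -n 50 /var/log/google-fluentd/google-fluentd.log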

Your organization wants to implement Site Reliability Engineering (SRE) culture and principles. Recently, a service that you support had a limited outage. A manager on another team asks you to provide a formal explanation of what happened so that they can act on remediations. What should you do?

A. Develop a postmortem that includes the root causes, resolution, lessons learned, and a prioritized list of action items. Share it with the manager only.
B. Develop a postmortem that includes the root causes, resolution, lessons learned, and a prioritized list of action items. Share it on the engineering organization's document portal.
C. Develop a postmortem that includes the root causes, resolution, lessons learned, the list of people responsible, and a list of action items for each person. Share it with the manager only.
D. Develop a postmortem that includes the root causes, resolution, lessons learned, the list of people responsible, and a list of action items for each person. Share it on the engineering organization's document portal.
Suggested answer: B

You have a set of applications running on a Google Kubernetes Engine (GKE) cluster, and you are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new containerized application required by your company into production. This application is written by a third party and cannot be modified or reconfigured. The application writes its log information to /var/log/app_messages.log, and you want to send these log entries to Stackdriver Logging. What should you do?

A. Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.
B. Deploy a Fluentd daemonset to GKE. Then create a customized input and output configuration to tail the log file in the application's pods and write to Stackdriver Logging.
C. Install Kubernetes on Google Compute Engine (GCE) and redeploy your applications. Then customize the built-in Stackdriver Logging configuration to tail the log file in the application's pods and write to Stackdriver Logging.
D. Write a script to tail the log file within the pod and write entries to standard output. Run the script as a sidecar container with the application's pod. Configure a shared volume between the containers to allow the script to have read access to /var/log in the application container.
Suggested answer: B

Explanation:

https://cloud.google.com/architecture/customizing-stackdriver-logs-fluentd

In addition to the list of logs that the Logging agent streams by default, you can customize the Logging agent to send additional logs to Logging or to adjust agent settings by adding input configurations. The configuration definitions in these sections apply to the fluent-plugin-google-cloud output plugin only and specify how logs are transformed and ingested into Cloud Logging. https://cloud.google.com/logging/docs/agent/logging/configuration#configure
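
A rough sketch of the kind of extra input configuration such a customized Fluentd daemonset could use to tail the application's log file; the pos_file path and tag are illustrative placeholders:

# Hypothetical Fluentd input fragment for the customized daemonset configuration
cat <<'EOF' > app-messages-input.conf
<source>
  @type tail
  format none
  # Log file written by the third-party application
  path /var/log/app_messages.log
  # Position file so the agent remembers how far it has read (placeholder path)
  pos_file /var/lib/app-messages.pos
  read_from_head true
  # Tag under which the fluent-plugin-google-cloud output plugin ships the entries
  tag app-messages
</source>
EOF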

You are running a real-time gaming application on Compute Engine that has a production and a testing environment. Each environment has its own Virtual Private Cloud (VPC) network. The application frontend and backend servers are located on different subnets in the environment's VPC. You suspect there is a malicious process communicating intermittently on your production frontend servers. You want to ensure that network traffic is captured for analysis. What should you do?

A. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 0.5.
B. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 1.0.
C. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 0.5. Apply changes in testing before production.
D. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 1.0. Apply changes in testing before production.
Suggested answer: D
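
As a hedged illustration of answer D, VPC Flow Logs can be enabled per subnet with full sampling using a command along these lines (the subnet name and region are placeholders; repeat for each subnet):

# Enable VPC Flow Logs with a 1.0 sampling rate on one subnet
gcloud compute networks subnets update prod-frontend-subnet \
    --region=europe-west2 \
    --enable-flow-logs \
    --logging-flow-sampling=1.0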

You support a high-traffic web application and want to ensure that the home page loads in a timely manner. As a first step, you decide to implement a Service Level Indicator (SLI) to represent home page request latency with an acceptable page load time set to 100 ms. What is the Google-recommended way of calculating this SLI?

A. Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.
B. Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.
C. Count the number of home page requests that load in under 100 ms, and then divide by the total number of home page requests.
D. Count the number of home page requests that load in under 100 ms, and then divide by the total number of all web application requests.
Suggested answer: C

Explanation:

https://sre.google/workbook/implementing-slos/

The SRE books recommend treating the SLI as the ratio of two numbers: the number of good events divided by the total number of events. For example: number of successful HTTP requests / total HTTP requests (success rate).
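
A short worked example with invented counts: if 9,850 of 10,000 home page requests in the measurement window complete in under 100 ms, the SLI is 9,850 / 10,000 = 0.985, i.e. 98.5% of home page requests met the latency target.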

You use a multi-step Cloud Build pipeline to build and deploy your application to Google Kubernetes Engine (GKE). You want to integrate with a third-party monitoring platform by performing an HTTP POST of the build information to a webhook. You want to minimize the development effort. What should you do?

A. Add logic to each Cloud Build step to HTTP POST the build information to a webhook.
B. Add a new step at the end of the pipeline in Cloud Build to HTTP POST the build information to a webhook.
C. Use Stackdriver Logging to create a logs-based metric from the Cloud Build logs. Create an Alert with a Webhook notification type.
D. Create a Cloud Pub/Sub push subscription to the Cloud Build cloud-builds Pub/Sub topic to HTTP POST the build information to a webhook.
Suggested answer: D
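
A minimal sketch of answer D; the subscription name and webhook URL are placeholders:

# Push Cloud Build status messages from the cloud-builds topic to the monitoring platform's webhook
gcloud pubsub subscriptions create build-webhook-sub \
    --topic=cloud-builds \
    --push-endpoint=https://monitoring.example.com/webhook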

You created a Stackdriver chart for CPU utilization in a dashboard within your workspace project. You want to share the chart with your Site Reliability Engineering (SRE) team only. You want to ensure you follow the principle of least privilege. What should you do?

A. Share the workspace Project ID with the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.
B. Share the workspace Project ID with the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.
C. Click 'Share chart by URL' and provide the URL to the SRE team. Assign the SRE team the Monitoring Viewer IAM role in the workspace project.
D. Click 'Share chart by URL' and provide the URL to the SRE team. Assign the SRE team the Dashboard Viewer IAM role in the workspace project.
Suggested answer: C

Explanation:

https://cloud.google.com/monitoring/access-control
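
A minimal sketch of the IAM part of answer C, with a placeholder project ID and group address:

# Grant the SRE team read-only Monitoring access in the workspace project
gcloud projects add-iam-policy-binding my-workspace-project \
    --member="group:sre-team@example.com" \
    --role="roles/monitoring.viewer"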

You support a stateless web-based API that is deployed on a single Compute Engine instance in the europe-west2-a zone. The Service Level Indicator (SLI) for service availability is below the specified Service Level Objective (SLO). A postmortem has revealed that requests to the API regularly time out. The timeouts are due to the API receiving a high number of requests and running out of memory. You want to improve service availability. What should you do?

A. Change the specified SLO to match the measured SLI.
B. Move the service to higher-specification compute instances with more memory.
C. Set up additional service instances in other zones and load balance the traffic between all instances.
D. Set up additional service instances in other zones and use them as a failover in case the primary instance is unavailable.
Suggested answer: C
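
A rough sketch of one way to implement answer C with a regional managed instance group; the template, group name, size, machine type, and image are placeholders, and the load balancer configuration is omitted:

# Instance template for the API servers (machine type and image are placeholders)
gcloud compute instance-templates create api-template \
    --machine-type=e2-standard-4 \
    --image-family=debian-12 \
    --image-project=debian-cloud
# Regional managed instance group spreads instances across zones in europe-west2
gcloud compute instance-groups managed create api-mig \
    --region=europe-west2 \
    --template=api-template \
    --size=3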

You deploy a new release of an internal application during a weekend maintenance window when there is minimal user traffic. After the window ends, you learn that one of the new features isn't working as expected in the production environment. After an extended outage, you roll back the new release and deploy a fix. You want to modify your release process to reduce the mean time to recovery so you can avoid extended outages in the future. What should you do?

Choose 2 answers

A. Before merging new code, require 2 different peers to review the code changes.
B. Adopt the blue/green deployment strategy when releasing new code via a CD server.
C. Integrate a code linting tool to validate coding standards before any code is accepted into the repository.
D. Require developers to run automated integration tests on their local development environments before release.
E. Configure a CI server. Add a suite of unit tests to your code and have your CI server run them on commit and verify any changes.
Suggested answer: B, E
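
A minimal sketch of the CI part of answer E, assuming the repository is already connected to Cloud Build; the repository, branch pattern, and build config path are placeholders:

# Run the build (including the unit test suite defined in cloudbuild.yaml) on every push to main
gcloud builds triggers create github \
    --repo-name=my-app \
    --repo-owner=my-org \
    --branch-pattern="^main$" \
    --build-config=cloudbuild.yaml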

You have a pool of application servers running on Compute Engine. You need to provide a secure solution that requires the least amount of configuration and allows developers to easily access application logs for troubleshooting. How would you implement the solution on GCP?

A. Deploy the Stackdriver Logging agent to the application servers. Give the developers the IAM Logs Viewer role to access Stackdriver and view logs.
B. Deploy the Stackdriver Logging agent to the application servers. Give the developers the IAM Private Logs Viewer role to access Stackdriver and view logs.
C. Deploy the Stackdriver Monitoring agent to the application servers. Give the developers the IAM Monitoring Viewer role to access Stackdriver and view metrics.
D. Install the gsutil command line tool on your application servers. Write a script using gsutil to upload your application log to a Cloud Storage bucket, and then schedule it to run via cron every 5 minutes. Give the developers IAM Object Viewer access to view the logs in the specified bucket.
Suggested answer: A

Explanation:

https://cloud.google.com/logging/docs/audit#access-control
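
A minimal sketch of answer A; the project ID and group address are placeholders, and the install script shown is the one documented for the legacy Logging agent:

# Install the legacy Stackdriver/Cloud Logging agent on each application server
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
# Grant developers the Logs Viewer role so they can read the logs in the console
gcloud projects add-iam-policy-binding my-project \
    --member="group:developers@example.com" \
    --role="roles/logging.viewer"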
