ExamGecko
Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 10


You support a service with a well-defined Service Level Objective (SLO). Over the previous 6 months, your service has consistently met its SLO and customer satisfaction has been consistently high. Most of your service's operations tasks are automated and few repetitive tasks occur frequently. You want to optimize the balance between reliability and deployment velocity while following site reliability engineering best practices. What should you do? (Choose two.)

A. Make the service's SLO more strict.
B. Increase the service's deployment velocity and/or risk.
C. Shift engineering time to other services that need more reliability.
D. Get the product team to prioritize reliability work over new features.
E. Change the implementation of your Service Level Indicators (SLIs) to increase coverage.
Suggested answer: B, C

Explanation:

(https://sre.google/workbook/implementing-slos/#slo-decision-matrix)

Your organization uses a change advisory board (CAB) to approve all changes to an existing service. You want to revise this process to eliminate any negative impact on software delivery performance. What should you do? (Choose two.)

A. Replace the CAB with a senior manager to ensure continuous oversight from development to deployment.
B. Let developers merge their own changes, but ensure that the team's deployment platform can roll back changes if any issues are discovered.
C. Move to a peer-review-based process for individual changes that is enforced at code check-in time and supported by automated tests.
D. Batch changes into larger but less frequent software releases.
E. Ensure that the team's development platform enables developers to get fast feedback on the impact of their changes.
Suggested answer: C, E

Explanation:

A change advisory board (CAB) is a traditional way of approving changes to a service, but it can slow down the software delivery performance and introduce bottlenecks. A better way to improve the speed and quality of changes is to use a peer-review based process for individual changes that is enforced at code check-in time and supported by automated tests. This way, developers can get fast feedback on the impact of their changes and catch any errors or bugs before they reach production. Additionally, the team's development platform should enable developers to get fast feedback on the impact of their changes, such as using Cloud Code, Cloud Build, or Cloud Debugger.

Your organization has a containerized web application that runs on-premises. As part of the migration plan to Google Cloud, you need to select a deployment strategy and platform that meets the following acceptance criteria:

1. The platform must be able to direct traffic from Android devices to an Android-specific microservice.

2. The platform must allow for arbitrary percentage-based traffic splitting.

3. The deployment strategy must allow for continuous testing of multiple versions of any microservice.

What should you do?

A. Deploy the canary release of the application to Cloud Run. Use traffic splitting to direct 10% of user traffic to the canary release based on the revision tag.
B. Deploy the canary release of the application to App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address.
C. Deploy the canary release of the application to Compute Engine. Use Anthos Service Mesh with Compute Engine to direct 10% of user traffic to the canary release by configuring the virtual service.
D. Deploy the canary release to Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct 10% of user traffic to the new version based on the user-agent header configured in the virtual service.
Suggested answer: D

Explanation:

The best option for deploying a containerized web application to Google Cloud with the given acceptance criteria is to use Google Kubernetes Engine (GKE) with Anthos Service Mesh. GKE is a managed service for running Kubernetes clusters on Google Cloud, and Anthos Service Mesh is a service mesh that provides observability, traffic management, and security features for microservices. With Anthos Service Mesh, you can use traffic splitting to direct traffic from Android devices to an Android-specific microservice by configuring the user-agent header in the virtual service. You can also use traffic splitting to direct arbitrary percentage-based traffic to different versions of any microservice for continuous testing. For example, you can use a canary release strategy to direct 10% of user traffic to a new version of a microservice and monitor its performance and reliability.
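The header-based routing and percentage split described above can be sketched with an Istio-style VirtualService, which Anthos Service Mesh supports. This is a minimal illustration only: the service names (`frontend`, `frontend-android`) and the `stable`/`canary` subset labels are assumptions, not part of the question.

```shell
# Hypothetical VirtualService sketch for Anthos Service Mesh:
# Android user-agents go to the Android-specific microservice;
# everything else gets a 90/10 stable/canary percentage split.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - frontend
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Android.*"
    route:
    - destination:
        host: frontend-android   # assumed Android-specific microservice
  - route:                       # default route: arbitrary percentage split
    - destination:
        host: frontend
        subset: stable
      weight: 90
    - destination:
        host: frontend
        subset: canary
      weight: 10
EOF
```

The weights can be set to any percentages, which is what makes this platform satisfy the "arbitrary percentage-based traffic splitting" criterion.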

Your team is running microservices in Google Kubernetes Engine (GKE). You want to detect consumption of an error budget to protect customers and define release policies. What should you do?

A. Create SLIs from metrics. Enable Alert Policies if the services do not pass.
B. Use the metrics from Anthos Service Mesh to measure the health of the microservices.
C. Create an SLO. Create an Alert Policy on select_slo_burn_rate.
D. Create an SLO and configure uptime checks for your services. Enable Alert Policies if the services do not pass.
Suggested answer: C

Explanation:

The best option for detecting consumption of an error budget to protect customers and define release policies is to create a service level objective (SLO) and create an alert policy on select_slo_burn_rate. An SLO is a target value or range of values for a service level indicator (SLI) that measures some aspect of service quality, such as availability or latency. An error budget is the amount of unreliability that a service can tolerate while still meeting its SLO. The select_slo_burn_rate time-series selector returns how fast the error budget is being consumed by the service. By creating an alert policy on select_slo_burn_rate, you can trigger notifications or actions when error budget consumption exceeds a certain threshold. This way, you can balance change velocity and reliability of the service by adjusting release policies based on the error budget status.
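As a rough sketch of what such an alert could look like, the burn-rate selector can be used in a Cloud Monitoring alerting policy created from a file. All identifiers here (PROJECT_ID, SERVICE_ID, SLO_ID, the 10x threshold, and the one-hour lookback window) are placeholder assumptions; the exact policy shape should be checked against the Cloud Monitoring documentation.

```shell
# Hypothetical burn-rate alert sketch. select_slo_burn_rate(SLO, LOOKBACK)
# returns the rate at which the error budget is being consumed; a value of
# 1 means the budget is burning exactly at the sustainable rate.
cat > burn-rate-policy.json <<'EOF'
{
  "displayName": "Fast error-budget burn",
  "combiner": "OR",
  "conditions": [{
    "displayName": "Burn rate > 10x over 1h window",
    "conditionThreshold": {
      "filter": "select_slo_burn_rate(\"projects/PROJECT_ID/services/SERVICE_ID/serviceLevelObjectives/SLO_ID\", \"3600s\")",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 10,
      "duration": "0s"
    }
  }]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=burn-rate-policy.json
```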

Your organization wants to collect system logs that will be used to generate dashboards in Cloud Operations for their Google Cloud project. You need to configure all current and future Compute Engine instances to collect the system logs and you must ensure that the Ops Agent remains up to date. What should you do?

A. Use the gcloud CLI to install the Ops Agent on each VM listed in the Cloud Asset Inventory.
B. Select all VMs with an Agent status of Not detected on the Cloud Operations VMs dashboard. Then select Install agents.
C. Use the gcloud CLI to create an Agent Policy.
D. Install the Ops Agent on the Compute Engine image by using a startup script.
Suggested answer: C

Explanation:

The best option for configuring all current and future Compute Engine instances to collect system logs and ensure that the Ops Agent remains up to date is to use the gcloud CLI to create an Agent Policy. An Agent Policy is a resource that defines how Ops Agents are installed and configured on VM instances that match certain criteria, such as labels or zones. Ops Agents are software agents that collect metrics and logs from VM instances and send them to Cloud Operations products, such as Cloud Monitoring and Cloud Logging. By creating an Agent Policy, you can ensure that all current and future VM instances that match the policy criteria will have the Ops Agent installed and updated automatically. This way, you can collect system logs from all VM instances and use them to generate dashboards in Cloud Operations.
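A minimal sketch of creating such a policy with the gcloud CLI follows. The policy name, project ID, and OS matching criteria are illustrative assumptions; the command is in the beta track, so flags may differ by gcloud version.

```shell
# Hypothetical Agent Policy sketch: installs the Ops Agent with
# auto-upgrade enabled on all current and future VMs in the project
# that match the OS criteria (here assumed to be Debian 11).
gcloud beta compute instances ops-agents policies create ops-agents-policy \
    --project=PROJECT_ID \
    --agent-rules="type=ops-agent,version=latest,package-state=installed,enable-autoupgrade=true" \
    --os-types="short-name=debian,version=11"
```

Because the policy is evaluated against matching criteria rather than a fixed VM list, new instances are covered automatically, which is what the question's "current and future" requirement calls for.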

Your company has a Google Cloud resource hierarchy with folders for production, test, and development. Your cybersecurity team needs to review your company's Google Cloud security posture to accelerate security issue identification and resolution. You need to centralize the logs generated by Google Cloud services from all projects only inside your production folder to allow for alerting and near-real-time analysis. What should you do?

A. Enable the Workflows API and route all the logs to Cloud Logging.
B. Create a central Cloud Monitoring workspace and attach all related projects.
C. Create an aggregated log sink associated with the production folder that uses a Pub/Sub topic as the destination.
D. Create an aggregated log sink associated with the production folder that uses a Cloud Logging bucket as the destination.
Suggested answer: D

Explanation:

The best option for centralizing the logs generated by Google Cloud services from all projects only inside your production folder is to create an aggregated log sink associated with the production folder that uses a Cloud Logging bucket as the destination. An aggregated log sink is a log sink that collects logs from multiple sources, such as projects, folders, or organizations. A Cloud Logging bucket is a storage location for logs that can be used as a destination for log sinks. By creating an aggregated log sink with a Cloud Logging bucket, you can collect and store all the logs from the production folder in one place and allow for alerting and near-real-time analysis using Cloud Monitoring and Cloud Operations.
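A sketch of the setup might look like the following. The bucket name, sink name, central project ID, and folder ID are all placeholder assumptions; the key pieces are the folder-scoped sink and the --include-children flag, which makes the sink aggregate logs from every project under the folder.

```shell
# Hypothetical sketch: create a central log bucket, then an aggregated
# sink on the production folder that routes all child-project logs to it.
gcloud logging buckets create prod-central-logs \
    --project=CENTRAL_PROJECT_ID --location=global

gcloud logging sinks create prod-folder-sink \
    logging.googleapis.com/projects/CENTRAL_PROJECT_ID/locations/global/buckets/prod-central-logs \
    --folder=PRODUCTION_FOLDER_ID --include-children
```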

You are configuring the frontend tier of an application deployed in Google Cloud. The frontend tier is hosted in nginx and deployed using a managed instance group with an Envoy-based external HTTP(S) load balancer in front. The application is deployed entirely within the europe-west2 region and only serves users based in the United Kingdom. You need to choose the most cost-effective network tier and load balancing configuration. What should you use?

A. Premium Tier with a global load balancer
B. Premium Tier with a regional load balancer
C. Standard Tier with a global load balancer
D. Standard Tier with a regional load balancer
Suggested answer: B

Explanation:

The most cost-effective network tier and load balancing configuration for your frontend tier is to use Premium Tier with a regional load balancer. Premium Tier is a network tier that provides high-performance and low-latency network connectivity across Google's global network. A regional load balancer is a load balancer that distributes traffic within a single region. Since your application is deployed entirely within the europe-west2 region and only serves users based in the United Kingdom, you can use Premium Tier with a regional load balancer to optimize the network performance and cost.

You recently deployed your application in Google Kubernetes Engine (GKE) and now need to release a new version of the application. You need the ability to instantly roll back to the previous version of the application in case there are issues with the new version. Which deployment model should you use?

A. Perform a rolling deployment, and test your new application after the deployment is complete.
B. Perform A/B testing, and test your application periodically after the deployment is complete.
C. Perform a canary deployment, and test your new application periodically after the new version is deployed.
D. Perform a blue/green deployment, and test your new application after the deployment is complete.
Suggested answer: D

Explanation:

The best deployment model for releasing a new version of your application in GKE with the ability to instantly roll back to the previous version is to perform a blue/green deployment and test your new application after the deployment is complete. A blue/green deployment is a deployment strategy that involves creating two identical environments, one running the current version of the application (blue) and one running the new version of the application (green). The traffic is switched from blue to green after testing the new version, and if any issues are discovered, the traffic can be switched back to blue instantly. This way, you can minimize downtime and risk during deployment.
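In GKE, the instant switch and rollback described above can be sketched by keeping two Deployments running and flipping the Service selector between them. The service name (`myapp`) and the `version: blue`/`version: green` labels are illustrative assumptions.

```shell
# Hypothetical blue/green sketch: both Deployments run side by side,
# and the Service selector decides which one receives traffic.

# Cut traffic over to the new (green) version:
kubectl patch service myapp \
    -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'

# Instant rollback to the previous (blue) version if issues are found:
kubectl patch service myapp \
    -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'
```

Because both versions stay deployed, rollback is a single selector change rather than a redeployment, which is what makes it effectively instant.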

You are building and deploying a microservice on Cloud Run for your organization. Your service is used by many applications internally. You are deploying a new release, and you need to test the new version extensively in the staging and production environments. You must minimize user and developer impact. What should you do?

A. Deploy the new version of the service to the staging environment. Split the traffic, and allow 1% of traffic through to the latest version. Test the latest version. If the test passes, gradually roll out the latest version to the staging and production environments.
B. Deploy the new version of the service to the staging environment. Split the traffic, and allow 50% of traffic through to the latest version. Test the latest version. If the test passes, send all traffic to the latest version. Repeat for the production environment.
C. Deploy the new version of the service to the staging environment with a new-release tag without serving traffic. Test the new-release version. If the test passes, gradually roll out this tagged version. Repeat for the production environment.
D. Deploy a new environment with the green tag to use as the staging environment. Deploy the new version of the service to the green environment and test the new version. If the tests pass, send all traffic to the green environment and delete the existing staging environment. Repeat for the production environment.
Suggested answer: C

Explanation:

The best option for deploying a new release of your microservice on Cloud Run and testing it extensively in the staging and production environments with minimal user and developer impact is to deploy the new version of the service to the staging environment with a new-release tag without serving traffic, test the new-release version, and if the test passes, gradually roll out this tagged version. A tag is a label that you can assign to a revision of your service on Cloud Run. You can use tags to create different versions of your service without affecting traffic. You can also use tags to gradually roll out traffic to a new version of your service by using traffic splitting. This way, you can test your new release extensively in both environments and minimize user and developer impact.
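The tag-then-shift flow can be sketched with the gcloud CLI as follows. The service name, image URL, tag name, region, and the 10% starting percentage are all placeholder assumptions.

```shell
# Hypothetical sketch: deploy a new revision with a tag but no traffic.
# The tag gets its own URL (e.g. https://new-release---my-service-....run.app)
# for testing without affecting users.
gcloud run deploy my-service --image=IMAGE_URL \
    --tag=new-release --no-traffic --region=REGION

# After tests pass, gradually shift traffic to the tagged revision:
gcloud run services update-traffic my-service \
    --to-tags=new-release=10 --region=REGION
```

Repeating the same flow in the production service gives extensive testing in both environments while users keep hitting the old revision until you deliberately shift traffic.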

You work for a global organization and run a service with an availability target of 99% with limited engineering resources. For the current calendar month, you noticed that the service has 99.5% availability. You must ensure that your service meets the defined availability goals and can react to business changes, including the upcoming launch of new features. You also need to reduce technical debt while minimizing operational costs. You want to follow Google-recommended practices. What should you do?

A. Add N+1 redundancy to your service by adding additional compute resources to the service.
B. Identify, measure, and eliminate toil by automating repetitive tasks.
C. Define an error budget for your service level availability and minimize the remaining error budget.
D. Allocate available engineers to the feature backlog while you ensure that the service remains within the availability target.
Suggested answer: C
Total 166 questions