ExamGecko

Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 10


Question 91


You support a service with a well-defined Service Level Objective (SLO). Over the previous 6 months, your service has consistently met its SLO and customer satisfaction has been consistently high. Most of your service's operations tasks are automated and few repetitive tasks occur frequently. You want to optimize the balance between reliability and deployment velocity while following site reliability engineering best practices. What should you do? (Choose two.)

A. Make the service's SLO more strict.
B. Increase the service's deployment velocity and/or risk.
C. Shift engineering time to other services that need more reliability.
D. Get the product team to prioritize reliability work over new features.
E. Change the implementation of your Service Level Indicators (SLIs) to increase coverage.
Suggested answer: B, C

Explanation:

See the SLO decision matrix in the Google SRE Workbook: https://sre.google/workbook/implementing-slos/#slo-decision-matrix


Question 92


Your organization uses a change advisory board (CAB) to approve all changes to an existing service. You want to revise this process to eliminate any negative impact on software delivery performance. What should you do?

Choose 2 answers

A. Replace the CAB with a senior manager to ensure continuous oversight from development to deployment.
B. Let developers merge their own changes, but ensure that the team's deployment platform can roll back changes if any issues are discovered.
C. Move to a peer-review based process for individual changes that is enforced at code check-in time and supported by automated tests.
D. Batch changes into larger but less frequent software releases.
E. Ensure that the team's development platform enables developers to get fast feedback on the impact of their changes.
Suggested answer: C, E

Explanation:

A change advisory board (CAB) is a traditional way of approving changes to a service, but it can slow down the software delivery performance and introduce bottlenecks. A better way to improve the speed and quality of changes is to use a peer-review based process for individual changes that is enforced at code check-in time and supported by automated tests. This way, developers can get fast feedback on the impact of their changes and catch any errors or bugs before they reach production. Additionally, the team's development platform should enable developers to get fast feedback on the impact of their changes, such as using Cloud Code, Cloud Build, or Cloud Debugger.
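To illustrate "automated tests enforced at check-in" concretely, a Cloud Build configuration along the following lines could run the test suite on every proposed change before it can merge. This is only a sketch: the step images, test command, and image path are assumptions, not part of the original question.

```yaml
# cloudbuild.yaml -- illustrative sketch; images and commands are assumptions
steps:
  # Run the test suite first; Cloud Build executes steps sequentially and
  # stops the build if a step fails, so a failing test blocks the build
  - name: 'python:3.11'
    entrypoint: 'bash'
    args: ['-c', 'pip install -r requirements.txt && pytest']
  # Build the container image only if the tests passed
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA', '.']
```

Wired to a pull-request trigger, this gives developers the fast, automated feedback that replaces a CAB gate.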


Question 93


Your organization has a containerized web application that runs on-premises. As part of the migration plan to Google Cloud, you need to select a deployment strategy and platform that meets the following acceptance criteria:

1. The platform must be able to direct traffic from Android devices to an Android-specific microservice.

2. The platform must allow for arbitrary percentage-based traffic splitting.

3. The deployment strategy must allow for continuous testing of multiple versions of any microservice.

What should you do?

A. Deploy the canary release of the application to Cloud Run. Use traffic splitting to direct 10% of user traffic to the canary release based on the revision tag.
B. Deploy the canary release of the application to App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address.
C. Deploy the canary release of the application to Compute Engine. Use Anthos Service Mesh with Compute Engine to direct 10% of user traffic to the canary release by configuring the virtual service.
D. Deploy the canary release to Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct 10% of user traffic to the new version based on the user-agent header configured in the virtual service.
Suggested answer: D

Explanation:

The best option for deploying a containerized web application to Google Cloud with the given acceptance criteria is to use Google Kubernetes Engine (GKE) with Anthos Service Mesh. GKE is a managed service for running Kubernetes clusters on Google Cloud, and Anthos Service Mesh is a service mesh that provides observability, traffic management, and security features for microservices. With Anthos Service Mesh, you can use traffic splitting to direct traffic from Android devices to an Android-specific microservice by configuring the user-agent header in the virtual service. You can also use traffic splitting to direct arbitrary percentage-based traffic to different versions of any microservice for continuous testing. For example, you can use a canary release strategy to direct 10% of user traffic to a new version of a microservice and monitor its performance and reliability.
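A VirtualService combining both routing rules might look roughly like the sketch below. The host and service names are hypothetical, and the `stable`/`canary` subsets would be defined in an accompanying DestinationRule.

```yaml
# Hypothetical VirtualService sketch; hosts and destinations are assumptions
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
    - frontend.example.com
  http:
    # Requests whose User-Agent identifies an Android device go to the
    # Android-specific microservice
    - match:
        - headers:
            user-agent:
              regex: '.*Android.*'
      route:
        - destination:
            host: frontend-android
    # All other traffic: 90/10 canary split between two versions
    - route:
        - destination:
            host: frontend
            subset: stable
          weight: 90
        - destination:
            host: frontend
            subset: canary
          weight: 10
```

The weights can be set to any percentage, which satisfies the arbitrary traffic-splitting criterion.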


Question 94


Your team is running microservices in Google Kubernetes Engine (GKE). You want to detect consumption of an error budget to protect customers and define release policies. What should you do?

A. Create SLIs from metrics. Enable Alert Policies if the services do not pass.
B. Use the metrics from Anthos Service Mesh to measure the health of the microservices.
C. Create an SLO. Create an Alert Policy on select_slo_burn_rate.
D. Create an SLO and configure uptime checks for your services. Enable Alert Policies if the services do not pass.
Suggested answer: C

Explanation:

The best option for detecting consumption of an error budget to protect customers and define release policies is to create a service level objective (SLO) and create an alert policy on select_slo_burn_rate. An SLO is a target value or range of values for a service level indicator (SLI) that measures some aspect of the service quality, such as availability or latency. An error budget is the amount of time or number of errors that a service can tolerate while still meeting its SLO. select_slo_burn_rate is a time-series selector that indicates how fast the error budget is being consumed by the service. By creating an alert policy on select_slo_burn_rate, you can trigger notifications or actions when the error budget consumption exceeds a certain threshold. This way, you can balance change velocity and reliability of the service by adjusting the release policies based on the error budget status.
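As a rough numeric sketch of the burn-rate idea (this function is illustrative only, not the Cloud Monitoring API):

```python
def burn_rate(slo_target: float, error_rate: float) -> float:
    """How fast the error budget is being consumed.

    A burn rate of 1.0 spends the budget exactly over the SLO window;
    a burn rate of 5.0 exhausts a 30-day budget in about 6 days.
    """
    error_budget = 1.0 - slo_target  # e.g. 0.001 for a 99.9% SLO
    return error_rate / error_budget

# With a 99.9% SLO, a sustained 0.5% error rate burns budget 5x too fast,
# so a fast-burn alert should fire well before the window ends.
print(round(burn_rate(0.999, 0.005), 4))  # 5.0
```

Alerting on burn rate rather than raw error rate ties the alert directly to the budget, so a brief spike that barely dents the budget does not page anyone.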


Question 95


Your organization wants to collect system logs that will be used to generate dashboards in Cloud Operations for their Google Cloud project. You need to configure all current and future Compute Engine instances to collect the system logs and you must ensure that the Ops Agent remains up to date. What should you do?

A. Use the gcloud CLI to install the Ops Agent on each VM listed in the Cloud Asset Inventory.
B. Select all VMs with an Agent status of Not detected on the Cloud Operations VMs dashboard. Then select Install agents.
C. Use the gcloud CLI to create an Agent Policy.
D. Install the Ops Agent on the Compute Engine image by using a startup script.
Suggested answer: C

Explanation:

The best option for configuring all current and future Compute Engine instances to collect system logs and ensure that the Ops Agent remains up to date is to use the gcloud CLI to create an Agent Policy. An Agent Policy is a resource that defines how Ops Agents are installed and configured on VM instances that match certain criteria, such as labels or zones. Ops Agents are software agents that collect metrics and logs from VM instances and send them to Cloud Operations products, such as Cloud Monitoring and Cloud Logging. By creating an Agent Policy, you can ensure that all current and future VM instances that match the policy criteria will have the Ops Agent installed and updated automatically. This way, you can collect system logs from all VM instances and use them to generate dashboards in Cloud Operations.
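A policy creation command could look roughly like the sketch below. The policy ID, project, zones, and OS values are placeholders, and the flags are from the beta command group, so check the current gcloud reference before relying on them.

```shell
# Sketch: create an Agent Policy that installs the Ops Agent on matching
# VMs and keeps it auto-upgraded (all values are placeholders)
gcloud beta compute instances ops-agents policies create ops-agents-policy \
    --project=my-project \
    --zones=europe-west2-a,europe-west2-b \
    --os-types=short-name=debian,version=11 \
    --agent-rules="type=ops-agent,version=current-major,package-state=installed,enable-autoupgrade=true"
```

Because the policy matches on criteria rather than a fixed VM list, VMs created later in the matching zones are covered automatically, unlike options A, B, and D.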


Question 96


Your company has a Google Cloud resource hierarchy with folders for production, test, and development. Your cybersecurity team needs to review your company's Google Cloud security posture to accelerate security issue identification and resolution. You need to centralize the logs generated by Google Cloud services from all projects inside your production folder only, to allow for alerting and near-real-time analysis. What should you do?

A. Enable the Workflows API and route all the logs to Cloud Logging.
B. Create a central Cloud Monitoring workspace and attach all related projects.
C. Create an aggregated log sink associated with the production folder that uses a Pub/Sub topic as the destination.
D. Create an aggregated log sink associated with the production folder that uses a Cloud Logging bucket as the destination.
Suggested answer: D

Explanation:

The best option for centralizing the logs generated by Google Cloud services from all projects only inside your production folder is to create an aggregated log sink associated with the production folder that uses a Cloud Logging bucket as the destination. An aggregated log sink is a log sink that collects logs from multiple sources, such as projects, folders, or organizations. A Cloud Logging bucket is a storage location for logs that can be used as a destination for log sinks. By creating an aggregated log sink with a Cloud Logging bucket, you can collect and store all the logs from the production folder in one place and allow for alerting and near-real time analysis using Cloud Monitoring and Cloud Operations.
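Creating such a sink could look roughly like the sketch below. The sink name, folder ID, central project, and bucket name are placeholders.

```shell
# Sketch: folder-level aggregated sink routing logs from all projects under
# the production folder into a central Cloud Logging bucket
# (sink name, folder ID, project, and bucket are placeholders)
gcloud logging sinks create prod-central-sink \
    logging.googleapis.com/projects/central-logging-project/locations/global/buckets/prod-logs \
    --folder=FOLDER_ID \
    --include-children
```

The --include-children flag is what makes the sink aggregated: without it, a folder-level sink only exports logs generated at the folder itself, not from the projects underneath it.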


Question 97


You are configuring the frontend tier of an application deployed in Google Cloud. The frontend tier is hosted in nginx and deployed using a managed instance group with an Envoy-based external HTTP(S) load balancer in front. The application is deployed entirely within the europe-west2 region and only serves users based in the United Kingdom. You need to choose the most cost-effective network tier and load balancing configuration. What should you use?

A. Premium Tier with a global load balancer.
B. Premium Tier with a regional load balancer.
C. Standard Tier with a global load balancer.
D. Standard Tier with a regional load balancer.
Suggested answer: B

Explanation:

The most cost-effective network tier and load balancing configuration for your frontend tier is to use Premium Tier with a regional load balancer. Premium Tier is a network tier that provides high-performance and low-latency network connectivity across Google's global network. A regional load balancer is a load balancer that distributes traffic within a single region. Since your application is deployed entirely within the europe-west2 region and only serves users based in the United Kingdom, you can use Premium Tier with a regional load balancer to optimize the network performance and cost.
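The network tier is chosen when reserving the load balancer's external IP address; a sketch (the address name is a placeholder):

```shell
# Sketch: reserve a regional external IP on the chosen network tier for
# the load balancer's forwarding rule (address name is a placeholder)
gcloud compute addresses create frontend-lb-ip \
    --region=europe-west2 \
    --network-tier=PREMIUM
```

Because the address is regional, it can only be attached to a regional forwarding rule in europe-west2, which matches the single-region deployment.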


Question 98


You recently deployed your application in Google Kubernetes Engine (GKE) and now need to release a new version of the application. You need the ability to instantly roll back to the previous version of the application in case there are issues with the new version. Which deployment model should you use?

A. Perform a rolling deployment, and test your new application after the deployment is complete.
B. Perform A/B testing, and test your application periodically after the deployment is complete.
C. Perform a canary deployment, and test your new application periodically after the new version is deployed.
D. Perform a blue/green deployment, and test your new application after the deployment is complete.
Suggested answer: D

Explanation:

The best deployment model for releasing a new version of your application in GKE with the ability to instantly roll back to the previous version is to perform a blue/green deployment and test your new application after the deployment is complete. A blue/green deployment is a deployment strategy that involves creating two identical environments, one running the current version of the application (blue) and one running the new version of the application (green). The traffic is switched from blue to green after testing the new version, and if any issues are discovered, the traffic can be switched back to blue instantly. This way, you can minimize downtime and risk during deployment.
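One common way to realize blue/green on GKE is to run two Deployments labeled by version and switch the Service selector between them. A sketch, with hypothetical service and label names:

```shell
# Sketch: flip the Service selector from the blue to the green Deployment
# (service name and labels are hypothetical)
kubectl patch service my-app \
    -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'

# Instant rollback: point the selector back at the blue Deployment,
# which is still running and warm
kubectl patch service my-app \
    -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'
```

The switch only rewrites the Service's endpoint selection, so both cutover and rollback take effect in seconds, without rescheduling any Pods.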


Question 99


You are building and deploying a microservice on Cloud Run for your organization. Your service is used by many applications internally. You are deploying a new release, and you need to test the new version extensively in the staging and production environments. You must minimize user and developer impact. What should you do?

A. Deploy the new version of the service to the staging environment. Split the traffic, and allow 1% of traffic through to the latest version. Test the latest version. If the test passes, gradually roll out the latest version to the staging and production environments.
B. Deploy the new version of the service to the staging environment. Split the traffic, and allow 50% of traffic through to the latest version. Test the latest version. If the test passes, send all traffic to the latest version. Repeat for the production environment.
C. Deploy the new version of the service to the staging environment with a new-release tag without serving traffic. Test the new-release version. If the test passes, gradually roll out this tagged version. Repeat for the production environment.
D. Deploy a new environment with the green tag to use as the staging environment. Deploy the new version of the service to the green environment and test the new version. If the tests pass, send all traffic to the green environment and delete the existing staging environment. Repeat for the production environment.
Suggested answer: C

Explanation:

The best option for deploying a new release of your microservice on Cloud Run and testing it extensively in the staging and production environments with minimal user and developer impact is to deploy the new version of the service to the staging environment with a new-release tag without serving traffic, test the new-release version, and if the test passes, gradually roll out this tagged version. A tag is a label that you can assign to a revision of your service on Cloud Run. You can use tags to create different versions of your service without affecting traffic. You can also use tags to gradually roll out traffic to a new version of your service by using traffic splitting. This way, you can test your new release extensively in both environments and minimize user and developer impact.
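The tag-then-shift flow could look roughly like the sketch below. The service, image, and tag names are placeholders.

```shell
# Sketch: deploy a new revision with a tag and no live traffic
# (service, image, and tag names are placeholders)
gcloud run deploy my-service \
    --image=gcr.io/my-project/my-service:v2 \
    --tag=new-release \
    --no-traffic

# The tagged revision gets its own URL for testing without user impact,
# of the form https://new-release---my-service-....run.app

# After tests pass, send 10% of live traffic to the tagged revision
gcloud run services update-traffic my-service \
    --to-tags=new-release=10
```

Because the tagged revision serves no user traffic until update-traffic is run, developers can test it in place while the current revision keeps serving everyone else.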


Question 100


You work for a global organization and run a service with an availability target of 99% and limited engineering resources. For the current calendar month, you noticed that the service has 99.5% availability. You must ensure that your service meets the defined availability goals and can react to business changes, including the upcoming launch of new features. You also need to reduce technical debt while minimizing operational costs. You want to follow Google-recommended practices. What should you do?

A. Add N+1 redundancy to your service by adding additional compute resources to the service.
B. Identify, measure, and eliminate toil by automating repetitive tasks.
C. Define an error budget for your service level availability and minimize the remaining error budget.
D. Allocate available engineers to the feature backlog while you ensure that the service remains within the availability target.
Suggested answer: C
Total 166 questions