Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 11
Question 101
You are developing the deployment and testing strategies for your CI/CD pipeline in Google Cloud. You must be able to:
* Reduce the complexity of release deployments and minimize the duration of deployment rollbacks
* Test real production traffic with a gradual increase in the number of affected users
You want to select a deployment and testing strategy that meets your requirements. What should you do?
Explanation:
The best option for selecting a deployment and testing strategy that meets your requirements is to use blue/green deployment and canary testing. A blue/green deployment is a deployment strategy that involves creating two identical environments, one running the current version of the application (blue) and one running the new version of the application (green). The traffic is switched from blue to green after testing the new version, and if any issues are discovered, the traffic can be switched back to blue almost instantly. This way, you can reduce the complexity of release deployments and minimize the duration of deployment rollbacks. Canary testing is a testing strategy that involves releasing a new version of an application to a subset of users or servers and monitoring its performance and reliability. This way, you can test real production traffic with a gradual increase in the number of affected users.
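A minimal sketch of the canary gating logic described above. The stage percentages, the `get_error_rate` monitoring hook, and the 1% error threshold are all hypothetical illustrations, not values from the question:

```python
# Illustrative canary rollout gate (hypothetical logic, not a specific
# Google Cloud API): traffic shifts through increasing stages, and the
# rollout halts if the canary's observed error rate exceeds a threshold.

def canary_rollout(stages, get_error_rate, max_error_rate=0.01):
    """Return the traffic percentages applied and the final outcome."""
    applied = []
    for percent in stages:
        applied.append(percent)            # shift `percent` of traffic to canary
        if get_error_rate() > max_error_rate:
            return applied, "rolled back"  # instant switch back to blue
    return applied, "promoted"

# Healthy canary: every stage passes, so the release is fully promoted.
result = canary_rollout([1, 5, 10, 25, 50, 100], lambda: 0.001)
```

The instant-rollback property of blue/green is what makes the "rolled back" branch cheap: traffic simply switches back to the still-running blue environment.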
Question 102
You support a user-facing web application. When analyzing the application's error budget over the previous six months, you notice that the application never consumed more than 5% of its error budget. You hold an SLO review with business stakeholders and confirm that the SLO is set appropriately. You want your application's reliability to more closely reflect its SLO. What steps can you take to further that goal while balancing velocity, reliability, and business needs?
Choose 2 answers
Explanation:
The best options for furthering your application's reliability goal while balancing velocity, reliability, and business needs are to have more frequent or potentially risky application releases and to tighten the SLO to match the application's observed reliability. Having more frequent or potentially risky application releases can help you increase the change velocity and deliver new features faster. However, this also increases the likelihood of consuming more error budget and reducing the reliability of your service. Therefore, you should monitor your error budget consumption and adjust your release policies accordingly. For example, you can freeze or slow down releases when the error budget is low, or accelerate releases when the error budget is high. Tightening the SLO to match the application's observed reliability can help you align your service quality with your users' expectations and business needs. However, this also means that you have less room for error and need to maintain a higher level of reliability. Therefore, you should ensure that your SLO is realistic and achievable, and that you have sufficient engineering resources and processes to meet it.
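For a sense of scale, here is the error-budget arithmetic in a short Python sketch. The 99.9% availability SLO and the 30-day window are assumed for illustration, since the question does not state them:

```python
# Back-of-the-envelope error-budget math (assumes a 99.9% availability
# SLO over a 30-day window as an example; the question states neither).

slo = 0.999
period_minutes = 30 * 24 * 60                  # a 30-day rolling window
budget_minutes = (1 - slo) * period_minutes    # total allowed downtime
consumed_minutes = 0.05 * budget_minutes       # only 5% of the budget used

print(round(budget_minutes, 1))    # 43.2 minutes of budget per 30 days
print(round(consumed_minutes, 2))  # ~2.16 minutes actually consumed
```

Consuming about 2 minutes of a 43-minute budget is the signal that the service is over-delivering on reliability, which is why spending the surplus on release velocity (or tightening the SLO) is reasonable.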
Question 103
Your company runs an ecommerce website built with JVM-based applications and a microservice architecture in Google Kubernetes Engine (GKE). The application load increases during the day and decreases during the night. Your operations team has configured the application to run enough Pods to handle the evening peak load. You want to automate scaling by only running enough Pods and nodes for the load. What should you do?
Explanation:
The best option for automating scaling by only running enough Pods and nodes for the load is to configure the Horizontal Pod Autoscaler and enable the cluster autoscaler. The Horizontal Pod Autoscaler is a feature that automatically adjusts the number of Pods in a deployment or replica set based on observed CPU utilization or custom metrics. The cluster autoscaler is a feature that automatically adjusts the size of a node pool based on the demand for node capacity. By using both features together, you can ensure that your application runs enough Pods to handle the load, and that your cluster runs enough nodes to host the Pods. This way, you can optimize your resource utilization and cost efficiency.
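The scaling rule the Horizontal Pod Autoscaler applies can be illustrated with the proportional formula from the Kubernetes documentation. The Pod counts and CPU figures below are made-up examples:

```python
import math

# The Horizontal Pod Autoscaler's core scaling rule (per the Kubernetes
# documentation): scale the replica count proportionally to the ratio of
# the observed metric to its target, rounding up.

def desired_replicas(current_replicas, current_metric, target_metric):
    return math.ceil(current_replicas * current_metric / target_metric)

# Evening peak: 10 Pods at 90% CPU with a 60% target -> scale out to 15.
print(desired_replicas(10, 90, 60))  # 15
# Overnight lull: 15 Pods at 20% CPU with a 60% target -> scale in to 5.
print(desired_replicas(15, 20, 60))  # 5
```

As the HPA adds or removes Pods this way, the cluster autoscaler independently adds nodes when Pods are unschedulable and removes nodes that become underutilized.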
Question 104
Your organization wants to increase the availability target of an application from 99.9% to 99.99% for an investment of $2,000. The application's current revenue is $1,000,000. You need to determine whether the increase in availability is worth the investment for a single year of usage. What should you do?
Explanation:
The best option for determining whether the increase in availability is worth the investment for a single year of usage is to calculate the value of improved availability to be $900, and determine that the increase in availability is not worth the investment. To calculate the value of improved availability, we can use the following formula:
Value of improved availability = Revenue * (New availability - Current availability)
Plugging in the given numbers, we get:
Value of improved availability = $1,000,000 * (0.9999 - 0.999) = $900
Since the value of improved availability is less than the investment of $2,000, we can conclude that the increase in availability is not worth the investment.
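The same calculation, written out as a quick Python check:

```python
# The worked calculation from the explanation above.
revenue = 1_000_000
current_availability = 0.999
new_availability = 0.9999
investment = 2_000

value = revenue * (new_availability - current_availability)
print(round(value))        # 900
print(value < investment)  # True: not worth the $2,000 investment
```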
Question 105
A third-party application needs a service account key to work properly. When you try to export the key from your cloud project, you receive the error 'The organization policy constraint iam.disableServiceAccountKeyCreation is enforced.' You need to make the third-party application work while following Google-recommended security practices. What should you do?
Explanation:
The best option for making the third-party application work while following Google-recommended security practices is to add a rule that sets the iam.disableServiceAccountKeyCreation policy to off (not enforced) in your project, and then create a key. The iam.disableServiceAccountKeyCreation constraint is an organization policy that controls whether service account keys can be created in a project or organization. When the constraint is enforced, as the error message indicates, service account keys cannot be created. However, you can override the policy at a lower level of the resource hierarchy, such as a project, by adding a rule that disables enforcement. This way, you can create a service account key for your project without affecting other projects in the organization. You should also follow the best practices for managing service account keys, such as rotating them regularly, storing them securely, and deleting them when they are no longer needed.
Question 106
Your team is writing a postmortem after an incident on your external-facing application. Your team wants to improve the postmortem policy to include triggers that indicate whether an incident requires a postmortem. Based on Site Reliability Engineering (SRE) practices, what triggers should be defined in the postmortem policy?
Choose 2 answers
Explanation:
The best options for defining triggers that indicate whether an incident requires a postmortem based on Site Reliability Engineering (SRE) practices are an external stakeholder asks for a postmortem and data is lost due to an incident. An external stakeholder is someone who is affected by or has an interest in the service, such as a customer or a partner. If an external stakeholder asks for a postmortem, it means that they are concerned about the impact or root cause of the incident, and they expect an explanation and remediation from the service provider. Therefore, this should trigger a postmortem to address their concerns and improve their satisfaction. Data loss is a serious consequence of an incident that can affect the integrity and reliability of the service. If data is lost due to an incident, it means that there was a failure in the backup or recovery mechanisms, or that there was a corruption or deletion of data. Therefore, this should trigger a postmortem to investigate the cause and impact of the data loss, and to prevent it from happening again.
Question 107
You are implementing a CI/CD pipeline for your application in your company's multi-cloud environment. Your application is deployed by using custom Compute Engine images and the equivalent in other cloud providers. You need to implement a solution that will enable you to build and deploy the images to your current environment and is adaptable to future changes. Which solution stack should you use?
Explanation:
Cloud Build is a fully managed continuous integration and continuous delivery (CI/CD) service that helps you automate your builds, tests, and deployments. Packer is an open source tool from HashiCorp that builds identical machine images for multiple platforms, including Compute Engine, Amazon EC2, and Azure, from a single source template.
Together, Cloud Build and Packer can be used to build your application's custom Compute Engine images for your current environment and the equivalent images for other cloud providers in the future.
Here are the steps involved in using Cloud Build and Packer to implement a CI/CD pipeline for your application:
Create a Cloud Build trigger that fires whenever a change is made to your application's code.
In the Cloud Build pipeline, run Packer to build a custom machine image that contains your application.
In the Packer template, define a builder for each target platform, such as Compute Engine today and other cloud providers later.
Deploy the resulting images to the corresponding environments.
Once you have created the Cloud Build trigger and the Packer template, any change to your application's code will trigger Cloud Build to produce fresh machine images for every configured platform.
This solution stack is adaptable to future changes because Packer uses a cloud-agnostic approach: supporting an additional cloud provider is a matter of adding another builder to the same template.
The other solution stacks are not suitable. Google Cloud Deploy automates the deployment of container-based applications to runtimes such as Google Kubernetes Engine (GKE) and Cloud Run; it does not build or deploy custom Compute Engine images. Likewise, kpt is a tool for managing Kubernetes configuration, not for building machine images.
Overall, the best solution stack for implementing a CI/CD pipeline for your application in a multi-cloud environment is Cloud Build with Packer. This combination is fully managed on the build side, cloud-agnostic on the image side, and adaptable to future changes.
Question 108
Your application's performance in Google Cloud has degraded since the last release. You suspect that downstream dependencies might be causing some requests to take longer to complete. You need to investigate the issue with your application to determine the cause. What should you do?
Explanation:
The best option for investigating the issue with your application's performance in Google Cloud is to configure Cloud Trace in your application. Cloud Trace is a service that allows you to collect and analyze latency data from your application. You can use Cloud Trace to trace requests across different components of your application, such as downstream dependencies, and identify where they take longer to complete. You can also use Cloud Trace to compare latency data across different versions of your application, and detect any performance degradation or improvement. By using Cloud Trace, you can diagnose and troubleshoot performance issues with your application in Google Cloud.
Question 109
You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do?
Choose 2 answers
Explanation:
The best options for creating a CI/CD pipeline in Cloud Build to build an application container image and ensuring that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch are to create a trigger on the Cloud Build job, set the repository event setting to Push to a branch, and configure a branch protection rule for the main branch on the repository. A trigger is a resource that starts a build when an event occurs, such as a code change. By creating a trigger on the Cloud Build job and setting the repository event setting to Push to a branch, you can ensure that the image build is only run when code is pushed to a specific branch, such as the main branch. A branch protection rule is a rule that enforces certain policies on a branch, such as requiring reviews, status checks, or approvals before merging code. By configuring a branch protection rule for the main branch on the repository, you can ensure that the change control team approves all pushes to the main branch.
Question 110
You built a serverless application by using Cloud Run and deployed the application to your production environment. You want to identify the resource utilization of the application for cost optimization. What should you do?
Explanation:
The best option for identifying the resource utilization of the application for cost optimization is to use Metrics Explorer in Cloud Monitoring to review the Cloud Run utilization metrics. Cloud Run automatically reports metrics for each service and revision, such as container CPU utilization, container memory utilization, billable instance time, and request count. By charting these metrics in Metrics Explorer, you can see whether the CPU and memory allocated to the service are significantly higher than what the application actually uses. You can then lower the resource limits or tune the concurrency settings to reduce cost.