ExamGecko

Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 11

You are developing the deployment and testing strategies for your CI/CD pipeline in Google Cloud. You must be able to:

* Reduce the complexity of release deployments and minimize the duration of deployment rollbacks

* Test real production traffic with a gradual increase in the number of affected users

You want to select a deployment and testing strategy that meets your requirements. What should you do?

A. Recreate deployment and canary testing
B. Blue/green deployment and canary testing
C. Rolling update deployment and A/B testing
D. Rolling update deployment and shadow testing
Suggested answer: B

Explanation:

The best option for meeting these requirements is a blue/green deployment combined with canary testing. A blue/green deployment creates two identical environments: one running the current version of the application (blue) and one running the new version (green). Traffic is switched from blue to green after the new version is tested, and if any issues are discovered, traffic can be switched back to blue almost instantly. This reduces the complexity of release deployments and minimizes the duration of rollbacks. Canary testing releases the new version to a subset of users and monitors its performance and reliability, which lets you test real production traffic with a gradual increase in the number of affected users.
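The canary half of this strategy can be sketched with deterministic user bucketing: hash each user ID into a stable bucket so the same user keeps seeing the same version while the rollout percentage grows. This is an illustrative sketch only, not a Google Cloud API; the function name and user IDs are hypothetical.

```python
import hashlib

def in_canary(user_id: str, percent: float) -> bool:
    """Deterministically assign a user to the canary cohort.

    Hashing the user ID yields a stable bucket in [0, 100), so a user's
    assignment never flips while the rollout percentage is increased.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = (int(digest, 16) % 10000) / 100.0  # 0.00 .. 99.99
    return bucket < percent

# Gradually widen the share of real production traffic on the canary.
for percent in (1, 5, 10, 25, 50, 100):
    cohort = sum(in_canary(f"user-{i}", percent) for i in range(10_000))
    print(f"{percent:>3}% target -> {cohort} of 10000 users on the canary")
```

Because assignment is a pure function of the user ID, widening the percentage only ever adds users to the cohort; nobody flaps between blue and green mid-rollout.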

You support a user-facing web application. When analyzing the application's error budget over the previous six months, you notice that the application never consumed more than 5% of its error budget. You hold an SLO review with business stakeholders and confirm that the SLO is set appropriately. You want your application's reliability to more closely reflect its SLO. What steps can you take to further that goal while balancing velocity, reliability, and business needs?

Choose 2 answers

A. Add more serving capacity to all of your application's zones
B. Implement and measure all other available SLIs for the application
C. Announce planned downtime to consume more error budget and ensure that users are not depending on a tighter SLO
D. Have more frequent or potentially risky application releases
E. Tighten the SLO to match the application's observed reliability
Suggested answer: C, D

Explanation:

The best options for making the application's reliability more closely reflect its SLO are to announce planned downtime and to ship more frequent or potentially risky releases. Because the business stakeholders confirmed that the SLO is set appropriately, the SLO itself should not change; tightening it to match the observed reliability would contradict that review and leave less room for innovation. Instead, the large unspent error budget should be used. Having more frequent or potentially risky releases increases change velocity and delivers features faster while consuming budget that would otherwise go to waste. Announcing planned downtime also consumes error budget and, importantly, verifies that users are not silently depending on a level of reliability higher than the published SLO. In both cases you should monitor error budget consumption and slow down or freeze changes if the budget runs low.
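The error-budget arithmetic behind this decision can be made concrete. The sketch below computes the fraction of budget consumed and maps it to a release cadence; the thresholds are illustrative only, since the real cutoffs are a team decision.

```python
def error_budget_consumed(slo: float, observed_availability: float) -> float:
    """Fraction of the error budget used over the measurement window."""
    budget = 1.0 - slo  # e.g. 0.1% allowed unavailability for a 99.9% SLO
    return (1.0 - observed_availability) / budget

def release_policy(consumed: float) -> str:
    """Map budget consumption to a cadence; thresholds are illustrative."""
    if consumed >= 1.0:
        return "freeze releases"
    if consumed > 0.8:
        return "slow down releases"
    if consumed < 0.25:
        return "accelerate releases"
    return "normal cadence"

# Scenario from the question: at most 5% of the budget was ever consumed.
# For a 99.9% SLO that corresponds to observed availability of 99.995%.
consumed = error_budget_consumed(slo=0.999, observed_availability=0.99995)
print(f"budget consumed: {consumed:.0%} -> {release_policy(consumed)}")
```

With only 5% of the budget consumed, the policy points toward spending budget — faster releases or planned downtime — rather than hoarding reliability.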

Your company runs an ecommerce website built with JVM-based applications and a microservice architecture in Google Kubernetes Engine (GKE). The application load increases during the day and decreases during the night. Your operations team has configured the application to run enough Pods to handle the evening peak load. You want to automate scaling by only running enough Pods and nodes for the load. What should you do?

A. Configure the Vertical Pod Autoscaler but keep the node pool size static
B. Configure the Vertical Pod Autoscaler and enable the cluster autoscaler
C. Configure the Horizontal Pod Autoscaler but keep the node pool size static
D. Configure the Horizontal Pod Autoscaler and enable the cluster autoscaler
Suggested answer: D

Explanation:

The best option for automating scaling by only running enough Pods and nodes for the load is to configure the Horizontal Pod Autoscaler and enable the cluster autoscaler. The Horizontal Pod Autoscaler is a feature that automatically adjusts the number of Pods in a deployment or replica set based on observed CPU utilization or custom metrics. The cluster autoscaler is a feature that automatically adjusts the size of a node pool based on the demand for node capacity. By using both features together, you can ensure that your application runs enough Pods to handle the load, and that your cluster runs enough nodes to host the Pods. This way, you can optimize your resource utilization and cost efficiency.
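The Horizontal Pod Autoscaler's documented scaling rule is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A small sketch of how that plays out across the daily load cycle; the CPU percentages are made-up example values.

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float) -> int:
    """Kubernetes HPA rule: ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# Evening peak: average CPU at 90% against a 60% target with 10 Pods.
print(hpa_desired_replicas(10, 90, 60))  # -> 15; cluster autoscaler adds nodes
# Overnight lull: average CPU drops to 12% with 15 Pods running.
print(hpa_desired_replicas(15, 12, 60))  # -> 3; idle nodes get removed
```

The HPA sizes the Pod count to the metric, and the cluster autoscaler then sizes the node pool to fit the Pods, which is why both are needed together.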

Your organization wants to increase the availability target of an application from 99.9% to 99.99% for an investment of $2,000. The application's current revenue is $1,000,000. You need to determine whether the increase in availability is worth the investment for a single year of usage. What should you do?

A. Calculate the value of improved availability to be $900, and determine that the increase in availability is not worth the investment
B. Calculate the value of improved availability to be $1,000, and determine that the increase in availability is not worth the investment
C. Calculate the value of improved availability to be $1,000, and determine that the increase in availability is worth the investment
D. Calculate the value of improved availability to be $9,000, and determine that the increase in availability is worth the investment
Suggested answer: A

Explanation:

The best option for determining whether the increase in availability is worth the investment for a single year of usage is to calculate the value of improved availability to be $900, and determine that the increase in availability is not worth the investment. To calculate the value of improved availability, we can use the following formula:

Value of improved availability = Revenue * (New availability - Current availability)

Plugging in the given numbers, we get:

Value of improved availability = $1,000,000 * (0.9999 - 0.999) = $900

Since the value of improved availability is less than the investment of $2,000, we can conclude that the increase in availability is not worth the investment.
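The calculation above can be checked directly; the numbers are those given in the question.

```python
revenue = 1_000_000            # annual revenue ($)
investment = 2_000             # cost of the reliability work ($)
current_availability = 0.999   # 99.9%
new_availability = 0.9999      # 99.99%

# Value of improved availability = revenue * (new - current availability)
value = revenue * (new_availability - current_availability)
print(f"value of improved availability: ${value:,.0f}")   # -> $900
print("worth it" if value > investment else "not worth the investment")
```

Intuitively, the upgrade converts 0.09% of the year from downtime to uptime, and 0.09% of $1,000,000 in revenue is $900 — less than half the $2,000 cost.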

A third-party application needs a service account key to work properly. When you try to export the key from your cloud project, you receive the error 'The organization policy constraint iam.disableServiceAccountKeyCreation is enforced.' You need to make the third-party application work while following Google-recommended security practices. What should you do?

A. Enable the default service account key, and download the key.
B. Remove the iam.disableServiceAccountKeyCreation policy at the organization level, and create a key.
C. Disable the service account key creation policy at the project's folder, and download the default key.
D. Add a rule to set the iam.disableServiceAccountKeyCreation policy to off in your project, and create a key.
Suggested answer: D

Explanation:

The best option for making the third-party application work while following Google-recommended security practices is to add a rule that sets the iam.disableServiceAccountKeyCreation policy to off in your project, and then create a key. The iam.disableServiceAccountKeyCreation constraint is an organization policy that controls whether service account keys can be created in a project or organization. In this scenario it is enforced at the organization level, which blocks key creation everywhere below it. However, an organization policy can be overridden at a lower level of the resource hierarchy, such as a single project, by adding a rule that turns enforcement off there. This way, you can create a service account key for this one project without weakening the policy for the rest of the organization. You should also follow the best practices for managing service account keys, such as rotating them regularly, storing them securely, and deleting them when they are no longer needed.
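The resource-hierarchy behavior that makes option D work can be sketched as a simplified model of how a boolean organization-policy constraint is resolved. Real Google Cloud policy evaluation has more features (conditions, inheritance controls); this only illustrates lower-level override.

```python
from typing import Optional

def effective_enforcement(org: Optional[bool],
                          folder: Optional[bool],
                          project: Optional[bool],
                          constraint_default: bool = False) -> bool:
    """Resolve a boolean constraint down the org -> folder -> project chain.

    A rule set at a lower level replaces the inherited value; a level with
    no rule (None) inherits from its parent. Simplified illustration only.
    """
    effective = constraint_default
    for rule in (org, folder, project):
        if rule is not None:
            effective = rule
    return effective

# Org enforces iam.disableServiceAccountKeyCreation, but the project adds a
# rule setting it to off, so key creation works in this project only.
print(effective_enforcement(org=True, folder=None, project=False))  # -> False
```

Note how a folder or project that sets no rule of its own (None) inherits the organization's enforcement, which is why options B and C weaken the policy more broadly than necessary.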

Your team is writing a postmortem after an incident on your external-facing application. Your team wants to improve the postmortem policy to include triggers that indicate whether an incident requires a postmortem. Based on Site Reliability Engineering (SRE) practices, what triggers should be defined in the postmortem policy?

Choose 2 answers

A. An external stakeholder asks for a postmortem
B. Data is lost due to an incident
C. An internal stakeholder requests a postmortem
D. The monitoring system detects that one of the instances for your application has failed
E. The CD pipeline detects an issue and rolls back a problematic release
Suggested answer: A, B

Explanation:

The triggers that best match SRE practices are a stakeholder asking for a postmortem and data being lost due to an incident. SRE postmortem culture defines objective triggers such as user-visible downtime, data loss of any kind, on-call engineer intervention, and resolution time above a threshold, and it also states that any stakeholder may request a postmortem for an event. If an external stakeholder asks for a postmortem, they are concerned about the impact or root cause of the incident and expect an explanation and remediation, so the request itself should trigger one. Data loss indicates a failure in backup, recovery, or data-integrity mechanisms and always warrants investigation so that it cannot recur. By contrast, a single failed instance that the platform replaces automatically (D) or a problematic release that the CD pipeline detects and rolls back on its own (E) is handled by the system without lasting user-visible impact, so neither crosses the postmortem threshold by itself.
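A postmortem policy's triggers can be encoded as a simple checklist, adapted from the trigger list in the SRE book; the condition names below are made up for illustration.

```python
# Triggers adapted from the SRE book's postmortem-culture chapter.
POSTMORTEM_TRIGGERS = {
    "user_visible_downtime",
    "data_loss",                  # data loss of any kind
    "oncall_intervention",        # e.g. manual rollback, traffic rerouting
    "resolution_time_exceeded",
    "monitoring_failure",
    "stakeholder_request",        # any stakeholder may ask for one
}

def needs_postmortem(observed: set) -> bool:
    """True when any observed incident condition matches a policy trigger."""
    return bool(observed & POSTMORTEM_TRIGGERS)

print(needs_postmortem({"stakeholder_request"}))      # -> True
print(needs_postmortem({"data_loss"}))                # -> True
# A single self-healed instance failure matches no trigger.
print(needs_postmortem({"single_instance_failure"}))  # -> False
```

Writing the triggers down as data rather than prose makes the policy easy to audit and to apply consistently during incident review.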

You are implementing a CI/CD pipeline for your application in your company's multi-cloud environment. Your application is deployed by using custom Compute Engine images and the equivalent in other cloud providers. You need to implement a solution that will enable you to build and deploy the images to your current environment and is adaptable to future changes. Which solution stack should you use?

A. Cloud Build with Packer
B. Cloud Build with Google Cloud Deploy
C. Google Kubernetes Engine with Google Cloud Deploy
D. Cloud Build with kpt
Suggested answer: A

Explanation:

The question is about building and deploying custom virtual machine images — Compute Engine images and their equivalents in other clouds — which points to Cloud Build with Packer. Cloud Build is a fully managed CI service that runs arbitrary build steps in containers, including a Packer build step. Packer builds identical machine images for multiple platforms from a single source template: it has builders for Compute Engine images, Amazon EC2 AMIs, Azure images, and many others, so the same pipeline can produce images for your current environment and be extended to future cloud providers.

The other stacks do not fit the requirement. Google Cloud Deploy (options B and C) is a managed continuous delivery service for runtimes such as GKE and Cloud Run; it does not build or deploy Compute Engine images, let alone machine images for other cloud providers. kpt (option D) is a tool for managing Kubernetes configuration packages, which is likewise unrelated to machine-image builds. Overall, Cloud Build with Packer is the stack that is both cloud-agnostic and adaptable to future changes.

Your application's performance in Google Cloud has degraded since the last release. You suspect that downstream dependencies might be causing some requests to take longer to complete. You need to investigate the issue with your application to determine the cause. What should you do?

A. Configure Error Reporting in your application
B. Configure Google Cloud Managed Service for Prometheus in your application
C. Configure Cloud Profiler in your application
D. Configure Cloud Trace in your application
Suggested answer: D

Explanation:

The best option for investigating the issue with your application's performance in Google Cloud is to configure Cloud Trace in your application. Cloud Trace is a service that allows you to collect and analyze latency data from your application. You can use Cloud Trace to trace requests across different components of your application, such as downstream dependencies, and identify where they take longer to complete. You can also use Cloud Trace to compare latency data across different versions of your application, and detect any performance degradation or improvement. By using Cloud Trace, you can diagnose and troubleshoot performance issues with your application in Google Cloud.

You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do?

Choose 2 answers

A. Create a trigger on the Cloud Build job. Set the repository event setting to 'Pull request'.
B. Add the OWNERS file to the 'Included files' filter on the trigger.
C. Create a trigger on the Cloud Build job. Set the repository event setting to 'Push to a branch'.
D. Configure a branch protection rule for the main branch on the repository.
E. Enable the Approval option on the trigger.
Suggested answer: C, D

Explanation:

The best options for creating a CI/CD pipeline in Cloud Build to build an application container image and ensuring that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch are to create a trigger on the Cloud Build job, set the repository event setting to Push to a branch, and configure a branch protection rule for the main branch on the repository. A trigger is a resource that starts a build when an event occurs, such as a code change. By creating a trigger on the Cloud Build job and setting the repository event setting to Push to a branch, you can ensure that the image build is only run when code is pushed to a specific branch, such as the main branch. A branch protection rule is a rule that enforces certain policies on a branch, such as requiring reviews, status checks, or approvals before merging code. By configuring a branch protection rule for the main branch on the repository, you can ensure that the change control team approves all pushes to the main branch.

You built a serverless application by using Cloud Run and deployed the application to your production environment. You want to identify the resource utilization of the application for cost optimization. What should you do?

A. Use Cloud Trace with distributed tracing to monitor the resource utilization of the application
B. Use Cloud Profiler with Ops Agent to monitor the CPU and memory utilization of the application
C. Use Cloud Monitoring to monitor the container CPU and memory utilization of the application
D. Use Cloud Ops to create logs-based metrics to monitor the resource utilization of the application
Suggested answer: C

Explanation:

The best option for identifying the resource utilization of a Cloud Run application is Cloud Monitoring. Cloud Run is integrated with Cloud Monitoring out of the box: built-in metrics such as container CPU utilization and container memory utilization are collected for every revision with no agent or code changes, and you can chart them to see whether the service is over- or under-provisioned for cost optimization. Cloud Trace (option A) measures request latency, not resource utilization. The Ops Agent (option B) runs on Compute Engine VMs and cannot be installed in the fully managed Cloud Run environment. Logs-based metrics (option D) derive values from log entries and are not the right tool for CPU and memory utilization that Cloud Monitoring already records directly.

Total 166 questions