Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 16

You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: D

Explanation:

The correct answer is D (Option D).

According to DevOps best practices, a joint-ownership model for a service between the DevOps team and the Software Development team should follow these principles [1, 2]:

The DevOps team and the Software Development team should share the responsibility and collaboration for managing the service infrastructure, performing code reviews, and adopting and sharing SLOs for the service.

The DevOps team and the Software Development team should have end-to-end ownership of the service, from design to development to deployment to operation to maintenance.

The DevOps team and the Software Development team should use common tools and processes to facilitate communication, coordination, and feedback.

The DevOps team and the Software Development team should align their goals and incentives with the business outcomes and customer satisfaction.

Option D is the only option that reflects these principles. It assigns both teams the responsibilities of managing the service infrastructure, performing code reviews, and adopting and sharing SLOs for the service. It also implies that both teams have end-to-end ownership of the service, as they are involved in every stage of the service lifecycle, and encourages them to use common tools and processes, such as GitLab [3], to collaborate and communicate effectively. Finally, it aligns both teams with business outcomes and customer satisfaction, as they use SLOs to measure and improve service quality.

The other options are incorrect because they do not follow these practices. Option A assigns only the DevOps team the responsibility of managing the service infrastructure, which creates a silo between the two teams and reduces collaboration; it also assigns no responsibility for adopting and sharing SLOs, so both teams lack a common metric for measuring and improving service quality. Option B assigns only the Software Development team the responsibility of performing code reviews, which creates a gap between the two teams and reduces feedback, and it likewise assigns no responsibility for SLOs. Option C assigns the same responsibilities as options A and B, which combines their drawbacks.

References: 1. 5 key organizational models for DevOps teams | GitLab. 2. Building a Culture of Full-Service Ownership - DevOps.com. 3. GitLab.

You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs:

Initializing the backend...

Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403
You need to resolve the issue by following Google-recommended practices. What should you do?

A. Change the Terraform code to use local state.
B. Create a storage bucket with the name specified in the Terraform configuration.
C. Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project.
D. Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.
Suggested answer: D

Explanation:

The correct answer is D: Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.

According to the Google Cloud documentation, Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure [1]. Cloud Build uses a service account to execute your build steps and access resources, such as Cloud Storage buckets [2]. Terraform is an open-source tool that allows you to define and provision infrastructure as code [3]. Terraform uses a state file to store and track the state of your infrastructure [4]. You can configure Terraform to use a Cloud Storage bucket as a backend to store and share the state file across multiple users or environments [5].

The error message indicates that Cloud Build failed to access the Cloud Storage bucket that contains the Terraform state file. This is likely because the Cloud Build service account does not have the necessary permissions to read and write objects in the bucket. To resolve this issue, you need to grant the roles/storage.objectAdmin IAM role to the Cloud Build service account on the state file bucket. This role allows the service account to create, delete, and manage objects in the bucket [6]. You can use the gcloud command-line tool or the Google Cloud Console to grant this role.
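
For illustration only (the bucket name and project number below are placeholders, and the member shown is the default Cloud Build service account), the binding could be granted from the command line as follows:

# Hypothetical sketch: grant the Cloud Build service account object-level admin
# access on the bucket that stores the Terraform state.
gsutil iam ch \
  serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com:roles/storage.objectAdmin \
  gs://my-terraform-state-bucket

After the role is granted, re-running the build should allow terraform init to query the existing workspaces in the backend bucket.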

The other options are incorrect because they do not follow Google-recommended practices. Option A is incorrect because it changes the Terraform code to use local state, which is not recommended for production or collaborative environments, as it can cause conflicts, data loss, or inconsistency. Option B is incorrect because it creates a new storage bucket with the name specified in the Terraform configuration, but it does not grant any permissions to the Cloud Build service account on the new bucket. Option C is incorrect because it grants the roles/owner IAM role to the Cloud Build service account on the project, which is too broad and violates the principle of least privilege. The roles/owner role grants full access to all resources in the project, which can pose a security risk if misused or compromised.

References: 1. Cloud Build documentation, Overview. 2. Service accounts. 3. Terraform by HashiCorp. 4. State. 5. Google Cloud Storage Backend. 6. Predefined roles. 7. Granting roles to service accounts for specific resources. 8. Local Backend. 9. Understanding roles.

Your company processes IoT data at scale by using Pub/Sub, App Engine standard environment, and an application written in Go. You noticed that the performance inconsistently degrades at peak load. You could not reproduce this issue on your workstation. You need to continuously monitor the application in production to identify slow paths in the code. You want to minimize performance impact and management overhead. What should you do?

A. Install a continuous profiling tool into Compute Engine. Configure the application to send profiling data to the tool.
B. Periodically run the go tool pprof command against the application instance. Analyze the results by using flame graphs.
C. Configure Cloud Profiler, and initialize the cloud.google.com/go/profiler library in the application.
D. Use Cloud Monitoring to assess the App Engine CPU utilization metric.
Suggested answer: C

Explanation:

The correct answer is C: Configure Cloud Profiler, and initialize the cloud.google.com/go/profiler library in the application.

According to the Google Cloud documentation, Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production applications [1]. Cloud Profiler can help you identify slow paths in your code and optimize the performance of your applications. Cloud Profiler supports applications written in Go that run on the App Engine standard environment [2]. To use Cloud Profiler, you need to configure it in your Google Cloud project and initialize the cloud.google.com/go/profiler library in your application code [3]. You can then use the Cloud Profiler interface to analyze the profiling data and visualize the results by using flame graphs [4]. Cloud Profiler has minimal performance impact and management overhead, as it only samples a small fraction of the application activity and does not require any additional infrastructure or agents.
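
For example (a minimal sketch; the project ID is a placeholder), the project-side setup consists of enabling the Profiler API before redeploying the instrumented application; the application-side step is the profiler initialization call from the cloud.google.com/go/profiler package described above:

# Hypothetical sketch: enable the Cloud Profiler API in the project that hosts the
# App Engine service, then deploy the application that initializes the profiler.
gcloud services enable cloudprofiler.googleapis.com --project=MY_PROJECT
gcloud app deploy app.yaml --project=MY_PROJECT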

The other options are incorrect because they do not meet the requirements of minimizing performance impact and management overhead. Option A is incorrect because it requires installing a continuous profiling tool into Compute Engine, which is an additional infrastructure that needs to be managed and maintained. Option B is incorrect because it requires periodically running the go tool pprof command against the application instance, which is a manual and disruptive process that can affect the application performance. Option D is incorrect because it only uses Cloud Monitoring to assess the App Engine CPU utilization metric, which is not enough to identify slow paths in the code or optimize the application performance.

References: 1. Cloud Profiler documentation, Overview. 2. Profiling Go applications, Supported environments. 3. Profiling Go applications, Using Cloud Profiler. 4. Analyzing data.

You need to define SLOs for a high-traffic web application. Customers are currently happy with the application performance and availability. Based on current measurements, the 90th percentile of latency is 160 ms and the 95th percentile of latency is 300 ms over a 28-day window. What latency SLO should you publish?

A. 90th percentile: 150 ms; 95th percentile: 290 ms
B. 90th percentile: 160 ms; 95th percentile: 300 ms
C. 90th percentile: 190 ms; 95th percentile: 330 ms
D. 90th percentile: 300 ms; 95th percentile: 450 ms
Suggested answer: B

Explanation:

A latency SLO is a service level objective that specifies a target level of responsiveness for a web application. A latency SLO can be expressed as a percentile of latency over a time window, such as the 90th percentile of latency over 28 days. A percentile of latency is the maximum amount of time that a given percentage of requests take to complete. For example, the 90th percentile of latency is the maximum amount of time that 90% of requests take to complete.

To define a latency SLO, you need to consider the following factors:

The expectations and satisfaction of your customers. You want to set a latency SLO that reflects the level of performance that your customers are happy with and willing to pay for.

The current and historical measurements of your latency. You want to set a latency SLO that is based on data and realistic for your web application.

The trade-offs and costs of improving your latency. You want to set a latency SLO that balances the benefits of faster response times with the costs of engineering work, infrastructure, and complexity.

Based on these factors, the best option for defining a latency SLO for your web application is option B. Option B sets the latency SLO to match the current measurement of your latency, which means that you are meeting the expectations and satisfaction of your customers. Option B also sets a realistic and achievable target for your web application, which means that you do not need to invest extra resources or effort to improve your latency. Option B also aligns with the best practice of setting conservative SLOs, which means that you have some buffer or margin for error in case your latency fluctuates or degrades.

You need to enforce several constraint templates across your Google Kubernetes Engine (GKE) clusters. The constraints include policy parameters, such as restricting the Kubernetes API. You must ensure that the policy parameters are stored in a GitHub repository and automatically applied when changes occur. What should you do?

A. Set up a GitHub action to trigger Cloud Build when there is a parameter change. In Cloud Build, run a gcloud CLI command to apply the change.
B. When there is a change in GitHub, use a webhook to send a request to Anthos Service Mesh, and apply the change.
C. Configure Anthos Config Management with the GitHub repository. When there is a change in the repository, use Anthos Config Management to apply the change.
D. Configure Config Connector with the GitHub repository. When there is a change in the repository, use Config Connector to apply the change.
Suggested answer: C

Explanation:

The correct answer is C: Configure Anthos Config Management with the GitHub repository. When there is a change in the repository, use Anthos Config Management to apply the change.

According to the Google Cloud documentation, Anthos Config Management is a service that lets you manage the configuration of your Google Kubernetes Engine (GKE) clusters from a single source of truth, such as a GitHub repository [1]. Anthos Config Management can enforce several constraint templates across your GKE clusters by using Policy Controller, which is a feature that integrates the Open Policy Agent (OPA) Constraint Framework into Anthos Config Management [2]. Policy Controller can apply constraints that include policy parameters, such as restricting the Kubernetes API [3]. To use Anthos Config Management and Policy Controller, you need to configure them with your GitHub repository and enable the sync mode [4]. When there is a change in the repository, Anthos Config Management will automatically sync and apply the change to your GKE clusters [5].
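
As a sketch of what that configuration can look like (the repository URL, branch, membership name, and file name are placeholders, and the apply-spec fields are assumed to follow the ConfigManagement apply-spec format), Config Sync and Policy Controller could be enabled on a registered cluster like this:

# Hypothetical sketch: sync configuration from a GitHub repository and enable
# Policy Controller on a fleet-registered GKE cluster.
cat > apply-spec.yaml <<'EOF'
applySpecVersion: 1
spec:
  configSync:
    enabled: true
    sourceFormat: unstructured
    syncRepo: https://github.com/example-org/gke-policies
    syncBranch: main
    secretType: none
  policyController:
    enabled: true
EOF
gcloud beta container fleet config-management apply \
  --membership=my-gke-cluster \
  --config=apply-spec.yaml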

The other options are incorrect because they do not use Anthos Config Management and Policy Controller. Option A is incorrect because it uses a GitHub action to trigger Cloud Build, which is a service that executes your builds on Google Cloud Platform infrastructure [6]. Cloud Build can run a gcloud CLI command to apply the change, but it does not use Anthos Config Management or Policy Controller. Option B is incorrect because it uses a webhook to send a request to Anthos Service Mesh, which is a service that provides a uniform way to connect, secure, monitor, and manage microservices on GKE clusters [7]. Anthos Service Mesh can apply the change, but it does not use Anthos Config Management or Policy Controller. Option D is incorrect because it uses Config Connector, which is a service that lets you manage Google Cloud resources through Kubernetes configuration. Config Connector can apply the change, but it does not use Anthos Config Management or Policy Controller.

References: 1. Anthos Config Management documentation, Overview. 2. Policy Controller. 3. Constraint template library. 4. Installing Anthos Config Management. 5. Syncing configurations. 6. Cloud Build documentation, Overview. 7. Anthos Service Mesh documentation, Overview. 8. Config Connector documentation, Overview.

Your company recently migrated to Google Cloud. You need to design a fast, reliable, and repeatable solution for your company to provision new projects and basic resources in Google Cloud. What should you do?

A. Use the Google Cloud console to create projects.
B. Write a script by using the gcloud CLI that passes the appropriate parameters from the request. Save the script in a Git repository.
C. Write a Terraform module and save it in your source control repository. Copy and run the apply command to create the new project.
D. Use the Terraform repositories from the Cloud Foundation Toolkit. Apply the code with appropriate parameters to create the Google Cloud project and related resources.
Suggested answer: D

Explanation:

Terraform is an open-source tool that allows you to define and provision infrastructure as code. Terraform can be used to create and manage Google Cloud resources, such as projects, networks, and services. The Cloud Foundation Toolkit is a set of open-source Terraform modules and tools that provide best practices and guidance for deploying Google Cloud infrastructure. The Cloud Foundation Toolkit includes Terraform repositories for creating Google Cloud projects and related resources, such as IAM policies, APIs, service accounts, and billing. By using the Terraform repositories from the Cloud Foundation Toolkit, you can design a fast, reliable, and repeatable solution for your company to provision new projects and basic resources in Google Cloud. You can also customize the Terraform code to suit your specific needs and preferences.
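
As an illustrative sketch (the tfvars file name and variable values are placeholders; a wrapper configuration, not shown, is assumed to reference the Cloud Foundation Toolkit project factory module published as terraform-google-modules/project-factory/google), provisioning a new project then becomes a repeatable Terraform workflow:

# Hypothetical sketch: apply a wrapper configuration that calls the Cloud Foundation
# Toolkit project factory module, passing per-request parameters via a tfvars file.
terraform init
terraform plan -var-file=new-team-project.tfvars
terraform apply -var-file=new-team-project.tfvars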

You are configuring a CI pipeline. The build step for your CI pipeline's integration testing requires access to APIs inside your private VPC network. Your security team requires that you do not expose API traffic publicly. You need to implement a solution that minimizes management overhead. What should you do?

A. Use Cloud Build private pools to connect to the private VPC.
B. Use Spinnaker for Google Cloud to connect to the private VPC.
C. Use Cloud Build as a pipeline runner. Configure Internal HTTP(S) Load Balancing for API access.
D. Use Cloud Build as a pipeline runner. Configure External HTTP(S) Load Balancing with a Google Cloud Armor policy for API access.
Suggested answer: A

Explanation:

Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can be used as a pipeline runner for your CI pipeline, which is a process that automates the integration and testing of your code. Cloud Build private pools are private, dedicated pools of workers that offer greater customization over the build environment, including the ability to access resources in a private VPC network. A VPC network is a virtual network that provides connectivity for your Google Cloud resources and services. By using Cloud Build private pools, you can implement a solution that minimizes management overhead, as private pools are hosted and fully managed by Cloud Build and scale up and down to zero, with no infrastructure to set up, upgrade, or scale. You can also implement a solution that meets your security requirement, as private pools use network peering to connect to your private VPC network and do not expose API traffic publicly.
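
A minimal sketch of the pool setup (the pool name, region, and network project are placeholders, and the VPC is assumed to already have a private connection to the Cloud Build service producer network) could look like this:

# Hypothetical sketch: create a Cloud Build private pool peered with the private VPC;
# the pool is then referenced from the build configuration or trigger.
gcloud builds worker-pools create my-private-pool \
  --region=us-central1 \
  --peered-network=projects/my-network-project/global/networks/my-private-vpc

The integration test step then runs inside the private pool and reaches the internal APIs without any public exposure.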

Your organization stores all application logs from multiple Google Cloud projects in a central Cloud Logging project. Your security team wants to enforce a rule that each project team can only view their respective logs, and only the operations team can view all the logs. You need to design a solution that meets the security team's requirements, while minimizing costs. What should you do?

A. Export logs to BigQuery tables for each project team. Grant project teams access to their tables. Grant logs writer access to the operations team in the central logging project.
B. Create log views for each project team, and only show each project team their application logs. Grant the operations team access to the _AllLogs view in the central logging project.
C. Grant each project team access to the _Default view in the central logging project. Grant logging viewer access to the operations team in the central logging project.
D. Create Identity and Access Management (IAM) roles for each project team and restrict access to the _Default log view in their individual Google Cloud project. Grant viewer access to the operations team in the central logging project.
Suggested answer: B

Explanation:

Create log views for each project team, and only show each project team their application logs. Grant the operations team access to the _AllLogs view in the central logging project.

This approach aligns with Google Cloud's recommended methodologies for Professional Cloud DevOps Engineers. Log views allow you to create and manage access control at a finer granularity for your logs. By creating a separate log view for each project team, you can ensure that they only have access to their respective logs. The operations team, on the other hand, can be granted access to the _AllLogs view in the central logging project, allowing them to view all logs as required.
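
As an illustrative sketch (the bucket, view, project, and group names are placeholders, and the conditional role binding is assumed to follow the documented log-view access pattern), the setup for one project team could look like this:

# Hypothetical sketch: create a log view scoped to one team's source project in the
# central logging bucket, then grant that team access to only that view.
gcloud logging views create team-a-view \
  --bucket=central-log-bucket --location=global \
  --log-filter='source("projects/team-a-project")' \
  --project=central-logging-project

gcloud projects add-iam-policy-binding central-logging-project \
  --member="group:team-a@example.com" \
  --role="roles/logging.viewAccessor" \
  --condition='title=team-a-view-only,expression=resource.name.endsWith("/views/team-a-view")'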

This solution not only meets the security team's requirements but also minimizes costs, as it leverages built-in features of Cloud Logging and does not require exporting logs to another service like BigQuery (as suggested in option A), which could incur additional costs.

Your CTO has asked you to implement a postmortem policy on every incident for internal use. You want to define what a good postmortem is to ensure that the policy is successful at your company. What should you do?

Choose 2 answers

A. Ensure that all postmortems include what caused the incident, identify the person or team responsible for causing the incident, and how to prevent a future occurrence of the incident.
B. Ensure that all postmortems include what caused the incident, how the incident could have been worse, and how to prevent a future occurrence of the incident.
C. Ensure that all postmortems include the severity of the incident, how to prevent a future occurrence of the incident, and what caused the incident without naming internal system components.
D. Ensure that all postmortems include how the incident was resolved and what caused the incident without naming customer information.
E. Ensure that all postmortems include all incident participants in postmortem authoring and share postmortems as widely as possible.
Suggested answer: B, E

Explanation:

The correct answers are B and E.

A good postmortem should include what caused the incident, how the incident could have been worse, and how to prevent a future occurrence of the incident. This helps to identify the root cause of the problem, the impact of the incident, and the actions to take to mitigate or eliminate the risk of recurrence.

A good postmortem should also include all incident participants in postmortem authoring and share postmortems as widely as possible. This helps to foster a culture of learning and collaboration, as well as to increase the visibility and accountability of the incident response process.

Answer A is incorrect because it assigns blame to a person or team, which goes against the principle of blameless postmortems. Blameless postmortems focus on finding solutions rather than pointing fingers, and encourage honest and constructive feedback without fear of punishment.

Answer C is incorrect because it omits how the incident could have been worse, which is an important factor to consider when evaluating the severity and impact of the incident. It also avoids naming internal system components, which makes it harder to understand the technical details and root cause of the problem.

Answer D is incorrect because it omits how to prevent a future occurrence of the incident, which is the main goal of a postmortem. It also avoids naming customer information, which may be relevant for understanding the impact and scope of the incident.

Your company uses Jenkins running on Google Cloud VM instances for CI/CD. You need to extend the functionality to use infrastructure as code automation by using Terraform. You must ensure that the Terraform Jenkins instance is authorized to create Google Cloud resources. You want to follow Google-recommended practices. What should you do?

A. Add the auth application-default command as a step in Jenkins before running the Terraform commands.
B. Create a dedicated service account for the Terraform instance. Download and copy the secret key value to the GOOGLE environment variable on the Jenkins server.
C. Confirm that the Jenkins VM instance has an attached service account with the appropriate Identity and Access Management (IAM) permissions. Use the Terraform module so that Secret Manager can retrieve credentials.
Suggested answer: C

Explanation:

The correct answer is C.

Confirming that the Jenkins VM instance has an attached service account with the appropriate Identity and Access Management (IAM) permissions is the best way to ensure that the Terraform Jenkins instance is authorized to create Google Cloud resources. This follows the Google-recommended practice of using service accounts to authenticate and authorize applications running on Google Cloud. Service accounts are associated with private keys that can be used to generate access tokens for Google Cloud APIs. By attaching a service account to the Jenkins VM instance, Terraform can use the Application Default Credentials (ADC) strategy to automatically find and use the service account credentials.
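
A rough sketch of that setup (the service account, project, zone, instance name, and the roles/compute.admin role are placeholders; the VM must be stopped before its service account can be changed) might look like this:

# Hypothetical sketch: attach a dedicated, least-privilege service account to the
# Jenkins VM so Terraform can rely on Application Default Credentials.
gcloud iam service-accounts create jenkins-terraform \
  --display-name="Jenkins Terraform runner"

gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:jenkins-terraform@my-project.iam.gserviceaccount.com" \
  --role="roles/compute.admin"

gcloud compute instances set-service-account jenkins-vm \
  --zone=us-central1-a \
  --service-account=jenkins-terraform@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform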

Answer A is incorrect because the auth application-default command is used to obtain user credentials, not service account credentials. User credentials are not recommended for applications running on Google Cloud, as they are less secure and less scalable than service account credentials.

Answer B is incorrect because it involves downloading and copying the secret key value of the service account, which is not a secure or reliable way of managing credentials. The secret key value should be kept private and not exposed to any other system or user. Moreover, setting the GOOGLE environment variable on the Jenkins server is not a valid way of providing credentials to Terraform. Terraform expects the credentials to be either in a file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable, or in a provider block with the credentials argument.

Answer D is incorrect because it involves using the Terraform module for Secret Manager, which is a service that stores and manages sensitive data such as API keys, passwords, and certificates. While Secret Manager can be used to store and retrieve credentials, it is not necessary or sufficient for authorizing the Terraform Jenkins instance. The Terraform Jenkins instance still needs a service account with the appropriate IAM permissions to access Secret Manager and other Google Cloud resources.
