Google Professional Cloud DevOps Engineer Practice Test - Questions and Answers
Question 1
You are creating a CI/CD pipeline to perform Terraform deployments of Google Cloud resources. Your CI/CD tooling runs in Google Kubernetes Engine (GKE) and uses an ephemeral Pod for each pipeline run. You must ensure that the pipelines that run in the Pods have the appropriate Identity and Access Management (IAM) permissions to perform the Terraform deployments. You want to follow Google-recommended practices for identity management. What should you do?
Choose 2 answers
Explanation:
The best options for ensuring that the pipelines that run in the Pods have the appropriate IAM permissions to perform the Terraform deployments are to create a new Kubernetes service account and assign the service account to the Pods, and to use Workload Identity to authenticate as the Google service account. A Kubernetes service account is an identity that represents an application or a process running in a Pod. A Google service account is an identity that represents a Google Cloud resource or service. Workload Identity is a feature that allows you to bind Kubernetes service accounts to Google service accounts. By using Workload Identity, you can avoid creating and managing JSON service account keys, which are less secure and require more maintenance. You can also assign the appropriate IAM permissions to the Google service account that corresponds to the Kubernetes service account.
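As an illustrative sketch of the Workload Identity binding (the namespace ci-cd, the Kubernetes service account pipeline-ksa, the project my-project, and the Google service account terraform-deployer@my-project.iam.gserviceaccount.com are all hypothetical names), the Kubernetes service account assigned to the pipeline Pods might look like this:

```yaml
# Kubernetes service account used by the ephemeral pipeline Pods.
# The annotation binds it to a Google service account via Workload Identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-ksa          # hypothetical name
  namespace: ci-cd            # hypothetical namespace
  annotations:
    iam.gke.io/gcp-service-account: terraform-deployer@my-project.iam.gserviceaccount.com
```

For the binding to work, the Google service account also needs an IAM policy binding that grants roles/iam.workloadIdentityUser to the member serviceAccount:my-project.svc.id.goog[ci-cd/pipeline-ksa], and the Terraform deployment permissions are then granted to the Google service account rather than to a downloaded key.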
Question 2
You are the on-call Site Reliability Engineer for a microservice that is deployed to a Google Kubernetes Engine (GKE) Autopilot cluster. Your company runs an online store that publishes order messages to Pub/Sub, and a microservice receives these messages and updates stock information in the warehousing system. A sales event caused an increase in orders, and the stock information is not being updated quickly enough. This is causing a large number of orders to be accepted for products that are out of stock. You check the metrics for the microservice and compare them to typical levels.
You need to ensure that the warehouse system accurately reflects product inventory at the time orders are placed and minimize the impact on customers. What should you do?
Explanation:
The best option for ensuring that the warehouse system accurately reflects product inventory at the time orders are placed and minimizing the impact on customers is to increase the number of Pod replicas. Increasing the number of Pod replicas will increase the scalability and availability of your microservice, which will allow it to handle more Pub/Sub messages and update stock information faster. This way, you can reduce the backlog of undelivered messages and oldest unacknowledged message age, which are causing delays in updating product inventory. You can use Horizontal Pod Autoscaler or Cloud Monitoring metrics-based autoscaling to automatically adjust the number of Pod replicas based on load or custom metrics.
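A sketch of metrics-based autoscaling on the Pub/Sub backlog (the Deployment name stock-updater, the subscription ID orders-subscription, and the target values are hypothetical; exporting Cloud Monitoring metrics to the HPA requires a custom/external metrics adapter such as the Custom Metrics Stackdriver Adapter to be installed in the cluster):

```yaml
# Scale the consumer Deployment on the number of undelivered Pub/Sub messages.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: stock-updater
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stock-updater       # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: orders-subscription  # hypothetical
      target:
        type: AverageValue
        averageValue: "100"   # target backlog per replica (illustrative)
```

With this in place the replica count rises as the backlog grows during a sales event and falls back when the queue drains.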
Question 3
Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?
Explanation:
The best option for ensuring that the three environments are consistent and following Google-recommended practices is to use Cloud Build to render and deploy the network policies and the DaemonSet, and set up Config Sync to sync the configurations for the three environments. Cloud Build is a service that executes your builds on Google Cloud infrastructure. You can use Cloud Build to render and deploy your network policies and DaemonSet as code using tools like Kustomize, Helm, or kpt. Config Sync is a feature that enables you to manage the configurations of your GKE clusters from a single source of truth, such as a Git repository. You can use Config Sync to sync the configurations for your development, staging, and production environments and ensure that they are consistent.
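A minimal Config Sync configuration for one cluster might look like the following sketch (the repository URL and directory layout are hypothetical; each environment's clusters would point at its own directory in the repo):

```yaml
# RootSync resource telling Config Sync which Git path is this cluster's
# source of truth (network policies, logging DaemonSet, etc. live there).
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/gke-config   # hypothetical repo
    branch: main
    dir: environments/production                      # hypothetical layout
    auth: none                                        # use a real auth method for private repos
```

Cloud Build renders the manifests (for example with Kustomize overlays per environment) into that repository, and Config Sync continuously reconciles each cluster against it, so drift in any environment is corrected automatically.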
Question 4
You are using Terraform to manage infrastructure as code within a CI/CD pipeline. You notice that multiple copies of the entire infrastructure stack exist in your Google Cloud project, and a new copy is created each time a change to the existing infrastructure is made. You need to optimize your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time. You want to follow Google-recommended practices. What should you do?
Explanation:
The best option for optimizing your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time is to confirm that the pipeline is storing and retrieving the terraform.tfstate file from Cloud Storage with the Terraform gcs backend. The terraform.tfstate file is a file that Terraform uses to store the current state of your infrastructure. The Terraform gcs backend is a backend type that allows you to store the terraform.tfstate file in a Cloud Storage bucket. By using the Terraform gcs backend, you can ensure that your pipeline has access to the latest state of your infrastructure and avoid creating multiple copies of the entire infrastructure stack.
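A sketch of the backend configuration (the bucket name and prefix are hypothetical; the bucket must exist before terraform init is run):

```hcl
# Store terraform.tfstate in a shared Cloud Storage bucket so every
# pipeline run operates on the same state instead of creating a new stack.
terraform {
  backend "gcs" {
    bucket = "example-tf-state"   # hypothetical bucket name
    prefix = "infra/state"        # path prefix for this stack's state
  }
}
```

Because the gcs backend also supports state locking, concurrent pipeline runs cannot corrupt the shared state.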
Question 5
You are creating Cloud Logging sinks to export log entries from Cloud Logging to BigQuery for future analysis. Your organization has a Google Cloud folder named Dev that contains development projects and a folder named Prod that contains production projects. Log entries for development projects must be exported to dev_dataset, and log entries for production projects must be exported to prod_dataset. You need to minimize the number of log sinks created, and you want to ensure that the log sinks apply to future projects. What should you do?
Explanation:
The best option for minimizing the number of log sinks created and ensuring that the log sinks apply to future projects is to create an aggregated log sink in the Dev and Prod folders. An aggregated log sink is a log sink that collects logs from multiple sources, such as projects, folders, or organizations. By creating an aggregated log sink in each folder, you can export log entries for development projects to dev_dataset and log entries for production projects to prod_dataset. You can also use filters to specify which logs you want to export. Additionally, by creating an aggregated log sink at the folder level, you can ensure that the log sink applies to future projects that are created under that folder.
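As a sketch in Terraform (the folder ID and project name are hypothetical placeholders; an equivalent sink would be created on the Dev folder pointing at dev_dataset):

```hcl
# Aggregated sink on the Prod folder: include_children makes it apply to
# every current and future project under the folder.
resource "google_logging_folder_sink" "prod" {
  name             = "prod-sink"
  folder           = "folders/123456789"   # hypothetical Prod folder ID
  destination      = "bigquery.googleapis.com/projects/example-project/datasets/prod_dataset"
  include_children = true
}
```

After creating each sink, its writer identity (a service account exposed by the sink resource) must be granted permission to write to the target BigQuery dataset, for example the BigQuery Data Editor role on the dataset.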
Question 6
Your company runs services by using multiple globally distributed Google Kubernetes Engine (GKE) clusters. Your operations team has set up workload monitoring that uses Prometheus-based tooling for metrics, alerts, and generating dashboards. This setup does not provide a method to view metrics globally across all clusters. You need to implement a scalable solution to support global Prometheus querying and minimize management overhead. What should you do?
Explanation:
The best option for implementing a scalable solution to support global Prometheus querying and minimize management overhead is to use Google Cloud Managed Service for Prometheus. Google Cloud Managed Service for Prometheus is a fully managed service that allows you to collect, query, and visualize metrics from your GKE clusters using Prometheus-based tooling. You can use Google Cloud Managed Service for Prometheus to query metrics across multiple clusters and regions using a global view. You can also use Google Cloud Managed Service for Prometheus to integrate with other Google Cloud services, such as Cloud Monitoring, Cloud Logging, and BigQuery. By using Google Cloud Managed Service for Prometheus, you can avoid managing and scaling your own Prometheus servers and focus on your application performance.
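With managed collection enabled, scraping a workload is configured declaratively per cluster; a sketch (the app label and port name are hypothetical):

```yaml
# PodMonitoring resource: tells Managed Service for Prometheus which Pods
# to scrape in this cluster. Metrics from all clusters land in Cloud
# Monitoring, where they can be queried globally with PromQL.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: example-app-monitoring
spec:
  selector:
    matchLabels:
      app: example-app        # hypothetical Pod label
  endpoints:
  - port: metrics             # hypothetical container port name
    interval: 30s
```

Applying the same resource in every cluster gives a single global view without running or federating any Prometheus servers yourself.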
Question 7
You need to build a CI/CD pipeline for a containerized application in Google Cloud. Your development team uses a central Git repository for trunk-based development. You want to run all your tests in the pipeline for any new version of the application to improve quality. What should you do?
Explanation:
The best option for building a CI/CD pipeline for a containerized application in Google Cloud is to trigger Cloud Build to run unit tests when code is pushed; if all unit tests pass, build the application container and push it to a central registry; trigger Cloud Build to deploy the container to a testing environment and run integration and acceptance tests; and, if all tests pass, deploy the application to the production environment and run smoke tests. This approach follows CI/CD best practices: running tests at different stages of the pipeline, using a central registry for storing and managing containers, deploying through progressively more production-like environments, and using Cloud Build as a unified tool for building, testing, and deploying.
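The early stages of such a pipeline could be sketched in a cloudbuild.yaml like the following (the test image, repository path us-docker.pkg.dev/$PROJECT_ID/apps/my-app, cluster name, and commands are all hypothetical):

```yaml
# Cloud Build pipeline sketch: unit tests, then build/push, then deploy to test.
steps:
- id: unit-tests
  name: python:3.12                      # hypothetical test image for the app
  entrypoint: bash
  args: ["-c", "pip install -r requirements.txt && pytest"]
- id: build
  name: gcr.io/cloud-builders/docker
  args: ["build", "-t", "us-docker.pkg.dev/$PROJECT_ID/apps/my-app:$SHORT_SHA", "."]
- id: push
  name: gcr.io/cloud-builders/docker
  args: ["push", "us-docker.pkg.dev/$PROJECT_ID/apps/my-app:$SHORT_SHA"]
- id: deploy-to-test
  name: gcr.io/cloud-builders/kubectl
  args: ["set", "image", "deployment/my-app",
         "my-app=us-docker.pkg.dev/$PROJECT_ID/apps/my-app:$SHORT_SHA"]
  env:
  - CLOUDSDK_COMPUTE_REGION=us-central1  # hypothetical cluster location
  - CLOUDSDK_CONTAINER_CLUSTER=test-cluster
```

Later stages (integration/acceptance tests, the production deployment, and smoke tests) would be additional steps or a separate triggered build gated on this one succeeding.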
Question 8
Your company is developing applications that are deployed on Google Kubernetes Engine (GKE). Each team manages a different application. You need to create the development and production environments for each team while minimizing costs. Different teams should not be able to access other teams' environments. You want to follow Google-recommended practices. What should you do?
Explanation:
The best option for creating the development and production environments for each team while minimizing costs and ensuring isolation is to create a development and a production GKE cluster in separate projects, in each cluster create a Kubernetes namespace per team, and then configure Kubernetes role-based access control (RBAC) so that each team can only access its own namespace. This option allows you to use fewer clusters and projects than creating one project or cluster per team, which reduces costs and complexity. It also allows you to isolate each team's environment by using namespaces and RBAC, which prevents teams from accessing other teams' environments.
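The per-team isolation can be sketched with a single RoleBinding per namespace (the namespace team-a and the Google Group team-a@example.com are hypothetical; edit is a built-in ClusterRole granting read/write access to most namespaced resources):

```yaml
# Grants the team-a group edit rights only inside the team-a namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
- kind: Group
  name: team-a@example.com        # hypothetical Google Group of team members
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                      # built-in ClusterRole, scoped here to one namespace
  apiGroup: rbac.authorization.k8s.io
```

Because the RoleBinding is namespaced, members of team-a get no access to other teams' namespaces in the same cluster.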
Question 9
The new version of your containerized application has been tested and is ready to be deployed to production on Google Kubernetes Engine (GKE). You could not fully load-test the new version in your pre-production environment, and you need to ensure that the application does not have performance problems after deployment. Your deployment must be automated. What should you do?
Explanation:
The best option for deploying a new version of your containerized application to production on GKE and ensuring that the application does not have performance problems after deployment is to deploy the application through a continuous delivery pipeline by using canary deployments, use Cloud Monitoring to look for performance issues, and ramp up traffic as supported by the metrics. A canary deployment is a deployment strategy that involves releasing a new version of an application to a subset of users or servers and monitoring its performance and reliability. This way, you can test the new version in the production environment with real traffic and load, and gradually increase the traffic as the metrics indicate. You can use Cloud Monitoring to collect and analyze metrics from your application and GKE cluster, such as latency, error rate, CPU utilization, and memory usage. You can also use Cloud Monitoring to set up alerts and dashboards to track the performance of your application.
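One way to automate such a canary rollout is Google Cloud Deploy's canary strategy; a rough sketch (the pipeline, target, Service, and Deployment names, and the traffic percentages, are all hypothetical):

```yaml
# Cloud Deploy pipeline sketch: route 10%, then 25%, then 50% of traffic to
# the new version before full rollout, with metrics checked between phases.
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: app-pipeline
serialPipeline:
  stages:
  - targetId: prod                 # hypothetical GKE target
    strategy:
      canary:
        runtimeConfig:
          kubernetes:
            serviceNetworking:
              service: my-app-service   # hypothetical Service
              deployment: my-app        # hypothetical Deployment
        canaryDeployment:
          percentages: [10, 25, 50]     # traffic ramp per canary phase
          verify: false
```

Between phases, Cloud Monitoring alerts on latency, error rate, and resource usage decide whether to advance the rollout or roll back.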
Question 10
You are managing an application that runs in Compute Engine. The application uses a custom HTTP server to expose an API that is accessed by other applications through an internal TCP/UDP load balancer. A firewall rule allows access to the API port from 0.0.0.0/0. You need to configure Cloud Logging to log each IP address that accesses the API by using the fewest number of steps. What should you do?
Explanation:
The best option for configuring Cloud Logging to log each IP address that accesses the API by using the fewest number of steps is to enable logging on the firewall rule. A firewall rule is a rule that controls the traffic to and from your Compute Engine instances. You can enable logging on a firewall rule to capture information about the traffic that matches the rule, such as source and destination IP addresses, protocols, ports, and actions. You can use Cloud Logging to view and export the firewall logs to other destinations, such as BigQuery, for further analysis.
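In Terraform, enabling logging on the existing rule is a one-block change; a sketch (the rule name, network, and port are hypothetical):

```hcl
# Firewall rule with logging enabled: each matched connection is recorded
# in Cloud Logging, including the source IP address.
resource "google_compute_firewall" "allow_api" {
  name    = "allow-api"             # hypothetical rule name
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["8080"]             # hypothetical API port
  }
  source_ranges = ["0.0.0.0/0"]

  log_config {
    metadata = "INCLUDE_ALL_METADATA"   # include source/destination details
  }
}
```

The same effect is available in the console by editing the rule and turning on Logs, with no changes to the application or the load balancer.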