Google Professional Cloud DevOps Engineer Practice Test - Questions and Answers

You are creating a CI/CD pipeline to perform Terraform deployments of Google Cloud resources. Your CI/CD tooling runs in Google Kubernetes Engine (GKE) and uses an ephemeral Pod for each pipeline run. You must ensure that the pipelines that run in the Pods have the appropriate Identity and Access Management (IAM) permissions to perform the Terraform deployments. You want to follow Google-recommended practices for identity management. What should you do?

Choose 2 answers

A.
Create a new Kubernetes service account, and assign the service account to the Pods. Use Workload Identity to authenticate as the Google service account.
B.
Create a new JSON service account key for the Google service account, store the key as a Kubernetes Secret, inject the key into the Pods, and set the GOOGLE_APPLICATION_CREDENTIALS environment variable.
C.
Create a new Google service account, and assign the appropriate IAM permissions.
D.
Create a new JSON service account key for the Google service account, store the key in the secret management store for the CI/CD tool, and configure Terraform to use this key for authentication.
E.
Assign the appropriate IAM permissions to the Google service account associated with the Compute Engine VM instances that run the Pods.
Suggested answer: A, C

Explanation:

The best options are to create a new Google service account with the appropriate IAM permissions, and to create a new Kubernetes service account that is assigned to the Pods and uses Workload Identity to authenticate as the Google service account. A Kubernetes service account is an identity that represents an application or a process running in a Pod. A Google service account is an identity that represents a Google Cloud resource or service. Workload Identity is a feature that lets you bind Kubernetes service accounts to Google service accounts. By using Workload Identity, you avoid creating and managing JSON service account keys, which are less secure and require more maintenance. You then assign the appropriate IAM permissions to the Google service account that corresponds to the Kubernetes service account.
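
As a rough sketch of how that binding is wired up (the project ID, namespace, and account names below are hypothetical, and the cluster is assumed to have Workload Identity enabled):

    # Google service account that Terraform will run as
    gcloud iam service-accounts create terraform-deployer --project=my-project

    # Kubernetes service account used by the ephemeral pipeline Pods
    kubectl create serviceaccount pipeline-sa --namespace ci

    # Allow the Kubernetes service account to impersonate the Google one
    gcloud iam service-accounts add-iam-policy-binding \
        terraform-deployer@my-project.iam.gserviceaccount.com \
        --role=roles/iam.workloadIdentityUser \
        --member="serviceAccount:my-project.svc.id.goog[ci/pipeline-sa]"

    # Annotate the Kubernetes service account so GKE maps it to the Google one
    kubectl annotate serviceaccount pipeline-sa --namespace ci \
        iam.gke.io/gcp-service-account=terraform-deployer@my-project.iam.gserviceaccount.com

The pipeline Pods then set spec.serviceAccountName to pipeline-sa, and Terraform picks up the Google service account's credentials automatically, without any key file.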

You are the on-call Site Reliability Engineer for a microservice that is deployed to a Google Kubernetes Engine (GKE) Autopilot cluster. Your company runs an online store that publishes order messages to Pub/Sub, and a microservice receives these messages and updates stock information in the warehousing system. A sales event caused an increase in orders, and the stock information is not being updated quickly enough. This is causing a large number of orders to be accepted for products that are out of stock. You check the metrics for the microservice and compare them to typical levels.

You need to ensure that the warehouse system accurately reflects product inventory at the time orders are placed and minimize the impact on customers. What should you do?

A.
Decrease the acknowledgment deadline on the subscription
B.
Add a virtual queue to the online store that allows typical traffic levels
C.
Increase the number of Pod replicas
D.
Increase the Pod CPU and memory limits
Suggested answer: C

Explanation:

The best option for ensuring that the warehouse system accurately reflects product inventory at the time orders are placed, while minimizing the impact on customers, is to increase the number of Pod replicas. Increasing the number of Pod replicas increases the scalability and availability of your microservice, which allows it to handle more Pub/Sub messages and update stock information faster. This reduces the backlog of undelivered messages and the age of the oldest unacknowledged message, which are what is delaying the inventory updates. You can use the Horizontal Pod Autoscaler or Cloud Monitoring metrics-based autoscaling to automatically adjust the number of Pod replicas based on load or custom metrics.
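
As an illustration, the scaling can be driven by the Pub/Sub backlog itself with an external-metric HorizontalPodAutoscaler. This sketch assumes the Custom Metrics Stackdriver Adapter is installed in the cluster; the Deployment and subscription names are hypothetical:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: stock-updater
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: stock-updater                # hypothetical microservice Deployment
      minReplicas: 2
      maxReplicas: 20
      metrics:
      - type: External
        external:
          metric:
            name: pubsub.googleapis.com|subscription|num_undelivered_messages
            selector:
              matchLabels:
                resource.labels.subscription_id: orders   # hypothetical subscription
          target:
            type: AverageValue
            averageValue: "100"            # target backlog per replica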

Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?

A.
Use Google Cloud Deploy to deploy the network policies and the DaemonSet. Use Cloud Monitoring to trigger an alert if the network policies and DaemonSet drift from your source in the repository.
B.
Use Google Cloud Deploy to deploy the DaemonSet, and use Policy Controller to configure the network policies. Use Cloud Monitoring to detect drifts from the source in the repository, and Cloud Functions to correct the drifts.
C.
Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Config Sync to sync the configurations for the three environments.
D.
Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Policy Controller to enforce the configurations for the three environments.
Suggested answer: C

Explanation:

The best option for ensuring that the three environments are consistent and following Google-recommended practices is to use Cloud Build to render and deploy the network policies and the DaemonSet, and set up Config Sync to sync the configurations for the three environments. Cloud Build is a service that executes your builds on Google Cloud infrastructure. You can use Cloud Build to render and deploy your network policies and DaemonSet as code using tools like Kustomize, Helm, or kpt. Config Sync is a feature that enables you to manage the configurations of your GKE clusters from a single source of truth, such as a Git repository. You can use Config Sync to sync the configurations for your development, staging, and production environments and ensure that they are consistent.
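
As a minimal sketch, after the Config Sync component is enabled on a cluster, a RootSync object points that cluster at an environment-specific directory of the repository (the repository URL and directory layout below are assumptions):

    apiVersion: configsync.gke.io/v1beta1
    kind: RootSync
    metadata:
      name: root-sync
      namespace: config-management-system
    spec:
      sourceFormat: unstructured
      git:
        repo: https://github.com/example-org/gke-config   # hypothetical repo
        branch: main
        dir: environments/staging    # one directory per environment
        auth: token
        secretRef:
          name: git-creds

Config Sync then continuously reconciles the cluster against that directory, so manual changes to the network policies or the DaemonSet are reverted automatically.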

You are using Terraform to manage infrastructure as code within a CI/CD pipeline. You notice that multiple copies of the entire infrastructure stack exist in your Google Cloud project, and a new copy is created each time a change to the existing infrastructure is made. You need to optimize your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time. You want to follow Google-recommended practices. What should you do?

A.
Create a new pipeline to delete old infrastructure stacks when they are no longer needed
B.
Confirm that the pipeline is storing and retrieving the terraform.tfstate file from Cloud Storage with the Terraform gcs backend
C.
Verify that the pipeline is storing and retrieving the terraform.tfstate file from source control
D.
Update the pipeline to remove any existing infrastructure before you apply the latest configuration
Suggested answer: B

Explanation:

The best option for optimizing your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time is to confirm that the pipeline is storing and retrieving the terraform.tfstate file from Cloud Storage with the Terraform gcs backend. The terraform.tfstate file is a file that Terraform uses to store the current state of your infrastructure. The Terraform gcs backend is a backend type that allows you to store the terraform.tfstate file in a Cloud Storage bucket. By using the Terraform gcs backend, you can ensure that your pipeline has access to the latest state of your infrastructure and avoid creating multiple copies of the entire infrastructure stack.
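
A minimal backend block, assuming a pre-created Cloud Storage bucket (the bucket name and prefix here are placeholders):

    terraform {
      backend "gcs" {
        bucket = "example-tf-state"   # hypothetical, pre-created state bucket
        prefix = "env/prod"           # path under which the state is stored
      }
    }

Because every pipeline run now reads and locks the same remote state, Terraform plans changes against the existing stack instead of creating a new copy of it.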

You are creating Cloud Logging sinks to export log entries from Cloud Logging to BigQuery for future analysis. Your organization has a Google Cloud folder named Dev that contains development projects and a folder named Prod that contains production projects. Log entries for development projects must be exported to dev_dataset, and log entries for production projects must be exported to prod_dataset. You need to minimize the number of log sinks created, and you want to ensure that the log sinks apply to future projects. What should you do?

A.
Create a single aggregated log sink at the organization level.
B.
Create a log sink in each project
C.
Create two aggregated log sinks at the organization level, and filter by project ID
D.
Create an aggregated log sink in the Dev and Prod folders
Suggested answer: D

Explanation:

The best option for minimizing the number of log sinks created and ensuring that the log sinks apply to future projects is to create an aggregated log sink in the Dev and Prod folders. An aggregated log sink is a log sink that collects logs from multiple sources, such as projects, folders, or organizations. By creating an aggregated log sink in each folder, you can export log entries for development projects to dev_dataset and log entries for production projects to prod_dataset. You can also use filters to specify which logs you want to export. Additionally, by creating an aggregated log sink at the folder level, you can ensure that the log sink applies to future projects that are created under that folder.
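
A sketch of the two sinks (the folder IDs and the project that hosts the datasets are placeholders); the --include-children flag is what makes each sink apply to every current and future project under its folder:

    gcloud logging sinks create dev-sink \
        bigquery.googleapis.com/projects/my-project/datasets/dev_dataset \
        --folder=DEV_FOLDER_ID --include-children

    gcloud logging sinks create prod-sink \
        bigquery.googleapis.com/projects/my-project/datasets/prod_dataset \
        --folder=PROD_FOLDER_ID --include-children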

Your company runs services by using multiple globally distributed Google Kubernetes Engine (GKE) clusters. Your operations team has set up workload monitoring that uses Prometheus-based tooling for metrics, alerts, and generating dashboards. This setup does not provide a method to view metrics globally across all clusters. You need to implement a scalable solution to support global Prometheus querying and minimize management overhead. What should you do?

A.
Configure Prometheus cross-service federation for centralized data access
B.
Configure workload metrics within Cloud Operations for GKE
C.
Configure Prometheus hierarchical federation for centralized data access
D.
Configure Google Cloud Managed Service for Prometheus
Suggested answer: D

Explanation:

The best option for implementing a scalable solution to support global Prometheus querying and minimize management overhead is to use Google Cloud Managed Service for Prometheus. Google Cloud Managed Service for Prometheus is a fully managed service that allows you to collect, query, and visualize metrics from your GKE clusters using Prometheus-based tooling. You can use Google Cloud Managed Service for Prometheus to query metrics across multiple clusters and regions using a global view. You can also use Google Cloud Managed Service for Prometheus to integrate with other Google Cloud services, such as Cloud Monitoring, Cloud Logging, and BigQuery. By using Google Cloud Managed Service for Prometheus, you can avoid managing and scaling your own Prometheus servers and focus on your application performance.
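
For illustration, managed collection can be turned on for an existing cluster, and a PodMonitoring resource then selects which workloads to scrape (the cluster name, region, app label, and port below are hypothetical):

    gcloud container clusters update my-cluster \
        --region=us-central1 --enable-managed-prometheus

The PodMonitoring manifest, applied with kubectl apply -f:

    apiVersion: monitoring.googleapis.com/v1
    kind: PodMonitoring
    metadata:
      name: app-metrics
    spec:
      selector:
        matchLabels:
          app: my-app          # hypothetical workload label
      endpoints:
      - port: metrics          # named container port serving /metrics
        interval: 30s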

You need to build a CI/CD pipeline for a containerized application in Google Cloud. Your development team uses a central Git repository for trunk-based development. You want to run all your tests in the pipeline for any new versions of the application to improve the quality. What should you do?

A.
1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. 2. Trigger Cloud Build to build the application container. Deploy the application container to a testing environment, and run integration tests. 3. If the integration tests are successful, deploy the application container to your production environment, and run acceptance tests.
B.
1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. If all tests are successful, build a container. 2. Trigger Cloud Build to deploy the application container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, tag the code as production ready. Trigger Cloud Build to build and deploy the application container to the production environment.
C.
1. Trigger Cloud Build to build the application container and run unit tests with the container. 2. If unit tests are successful, deploy the application container to a testing environment, and run integration tests. 3. If the integration tests are successful, the pipeline deploys the application container to the production environment. After that, run acceptance tests.
D.
1. Trigger Cloud Build to run unit tests when the code is pushed. If all unit tests are successful, build and push the application container to a central registry. 2. Trigger Cloud Build to deploy the container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, the pipeline deploys the application to the production environment and runs smoke tests.
Suggested answer: D

Explanation:

The best option is to trigger Cloud Build to run unit tests when the code is pushed; if all unit tests pass, build and push the application container to a central registry; trigger Cloud Build to deploy the container to a testing environment and run integration tests and acceptance tests; and, if all tests pass, deploy the application to the production environment and run smoke tests. This option follows CI/CD best practices: tests run at every stage of the pipeline, containers are stored and managed in a central registry, deployments progress through distinct environments, and Cloud Build serves as a unified tool for building, testing, and deploying.
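
As a sketch of the first stage only, a trigger-based cloudbuild.yaml could run the unit tests and then build and push the image; the Artifact Registry path and the Go test image are assumptions about the application:

    steps:
    # Run unit tests first; the build stops here if they fail
    - name: golang:1.21
      entrypoint: go
      args: ['test', './...']
    # Build the application container
    - name: gcr.io/cloud-builders/docker
      args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/apps/app:$SHORT_SHA', '.']
    # Listing the image here pushes it to the central registry on success
    images:
    - us-central1-docker.pkg.dev/$PROJECT_ID/apps/app:$SHORT_SHA

Separate triggers or pipeline stages would then deploy this image to the testing and production environments and run the later test suites.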

Your company is developing applications that are deployed on Google Kubernetes Engine (GKE). Each team manages a different application. You need to create the development and production environments for each team while you minimize costs. Different teams should not be able to access other teams' environments. You want to follow Google-recommended practices. What should you do?

A.
Create one Google Cloud project per team. In each project, create a cluster for development and one for production. Grant the teams Identity and Access Management (IAM) access to their respective clusters.
B.
Create one Google Cloud project per team. In each project, create a cluster with a Kubernetes namespace for development and one for production. Grant the teams Identity and Access Management (IAM) access to their respective clusters.
C.
Create a development and a production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Identity-Aware Proxy so that each team can only access its own namespace.
D.
Create a development and a production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Kubernetes role-based access control (RBAC) so that each team can only access its own namespace.
Suggested answer: D

Explanation:

The best option for creating the development and production environments for each team while minimizing costs and ensuring isolation is to create a development and a production GKE cluster in separate projects, in each cluster create a Kubernetes namespace per team, and then configure Kubernetes role-based access control (RBAC) so that each team can only access its own namespace. This option allows you to use fewer clusters and projects than creating one project or cluster per team, which reduces costs and complexity. It also allows you to isolate each team's environment by using namespaces and RBAC, which prevents teams from accessing other teams' environments.
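
A sketch of the isolation for one team's namespace (the team name and Google group are hypothetical); the same pair of objects is repeated per team and per cluster:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: team-a-developer
      namespace: team-a
    rules:
    - apiGroups: ["", "apps", "batch"]   # core, apps, and batch API groups
      resources: ["*"]
      verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-a-developer-binding
      namespace: team-a
    subjects:
    - kind: Group
      name: team-a@example.com           # hypothetical Google group for the team
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: team-a-developer
      apiGroup: rbac.authorization.k8s.io

Because the Role and RoleBinding are namespaced, members of team-a@example.com get no access outside the team-a namespace.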

The new version of your containerized application has been tested and is ready to be deployed to production on Google Kubernetes Engine (GKE). You could not fully load-test the new version in your pre-production environment, and you need to ensure that the application does not have performance problems after deployment. Your deployment must be automated. What should you do?

A.
Deploy the application through a continuous delivery pipeline by using canary deployments. Use Cloud Monitoring to look for performance issues, and ramp up traffic as supported by the metrics.
B.
Deploy the application through a continuous delivery pipeline by using blue/green deployments. Migrate traffic to the new version of the application, and use Cloud Monitoring to look for performance issues.
C.
Deploy the application by using kubectl, and use Config Connector to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues.
D.
Deploy the application by using kubectl, and set the spec.updateStrategy.type field to RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the kubectl rollback command if there are any issues.
Suggested answer: A

Explanation:

The best option for deploying a new version of your containerized application to production on GKE and ensuring that the application does not have performance problems after deployment is to deploy the application through a continuous delivery pipeline by using canary deployments, use Cloud Monitoring to look for performance issues, and ramp up traffic as supported by the metrics. A canary deployment is a deployment strategy that involves releasing a new version of an application to a subset of users or servers and monitoring its performance and reliability. This way, you can test the new version in the production environment with real traffic and load, and gradually increase the traffic as the metrics indicate. You can use Cloud Monitoring to collect and analyze metrics from your application and GKE cluster, such as latency, error rate, CPU utilization, and memory usage. You can also use Cloud Monitoring to set up alerts and dashboards to track the performance of your application.
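
The option does not name a specific tool, but as one example, a Cloud Deploy pipeline stage can express this canary ramp declaratively (the pipeline, target, Service, and Deployment names below are hypothetical):

    apiVersion: deploy.cloud.google.com/v1
    kind: DeliveryPipeline
    metadata:
      name: app-pipeline
    serialPipeline:
      stages:
      - targetId: prod
        strategy:
          canary:
            runtimeConfig:
              kubernetes:
                serviceNetworking:
                  service: app-service        # Service that carries the traffic split
                  deployment: app-deployment
            canaryDeployment:
              percentages: [10, 25, 50]       # traffic is advanced step by step
              verify: false

Each percentage step is advanced only when you promote it, which is where the Cloud Monitoring metrics come in.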

You are managing an application that runs in Compute Engine. The application uses a custom HTTP server to expose an API that is accessed by other applications through an internal TCP/UDP load balancer. A firewall rule allows access to the API port from 0.0.0.0/0. You need to configure Cloud Logging to log each IP address that accesses the API by using the fewest number of steps. What should you do?

A.
Enable Packet Mirroring on the VPC
B.
Install the Ops Agent on the Compute Engine instances.
C.
Enable logging on the firewall rule
D.
Enable VPC Flow Logs on the subnet
Suggested answer: C

Explanation:

The best option for configuring Cloud Logging to log each IP address that accesses the API by using the fewest number of steps is to enable logging on the firewall rule. A firewall rule is a rule that controls the traffic to and from your Compute Engine instances. You can enable logging on a firewall rule to capture information about the traffic that matches the rule, such as source and destination IP addresses, protocols, ports, and actions. You can use Cloud Logging to view and export the firewall logs to other destinations, such as BigQuery, for further analysis.
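
A sketch, assuming the existing rule is named allow-api (the rule name is a placeholder):

    gcloud compute firewall-rules update allow-api \
        --enable-logging \
        --logging-metadata=include-all   # log full source/destination details

The resulting entries appear in Cloud Logging under the compute.googleapis.com/firewall log, including the source IP address of each connection to the API port.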
