
Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 12

Your company is using HTTPS requests to trigger a public Cloud Run-hosted service accessible at the https://booking-engine-abcdef.a.run.app URL. You need to give developers the ability to test the latest revisions of the service before the service is exposed to customers. What should you do?

A. Run the gcloud run deploy booking-engine --no-traffic --tag dev command. Use the https://dev---booking-engine-abcdef.a.run.app URL for testing.
B. Run the gcloud run services update-traffic booking-engine --to-revisions LATEST=1 command. Use the https://booking-engine-abcdef.a.run.app URL for testing.
C. Pass the curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" auth token. Use the https://booking-engine-abcdef.a.run.app URL to test privately.
D. Grant the roles/run.invoker role to the developers testing the booking-engine service. Use the https://booking-engine-abcdef.private.run.app URL for testing.
Suggested answer: A

Explanation:

The best option for giving developers the ability to test the latest revision before it is exposed to customers is to deploy the revision with the --no-traffic flag and a revision tag. The --no-traffic flag creates the new revision without routing any production traffic to it, and the --tag flag (for example, dev) assigns that revision a dedicated URL of the form https://dev---booking-engine-abcdef.a.run.app. Developers can test the new revision at the tag URL while customers continue to be served by the current revision at the main service URL. Once the revision is verified, you can migrate traffic to it gradually or all at once. Updating traffic to LATEST would expose the new revision to customers immediately, and because the service is public, identity tokens or the roles/run.invoker role would not isolate the new revision from users.
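
As a minimal sketch of this workflow (the image path and tag name are placeholders, not part of the question):

# Deploy the new revision without routing any production traffic to it,
# and tag it "dev" so it gets its own testable URL.
gcloud run deploy booking-engine \
  --image=us-docker.pkg.dev/my-project/apps/booking-engine:candidate \
  --no-traffic \
  --tag=dev

# Developers test at https://dev---booking-engine-abcdef.a.run.app
# Once the revision is verified, migrate production traffic to it.
gcloud run services update-traffic booking-engine --to-latest

The tag URL serves only the tagged revision, so testing at that URL never affects the traffic served at the main service URL.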

You are configuring connectivity across Google Kubernetes Engine (GKE) clusters in different VPCs. You notice that the nodes in Cluster A are unable to access the nodes in Cluster B. You suspect that the workload access issue is due to the network configuration. You need to troubleshoot the issue but do not have execute access to workloads and nodes. You want to identify the layer at which the network connectivity is broken. What should you do?

A. Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.
B. Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.
C. Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.
D. Enable VPC Flow Logs in both VPCs and monitor packet drops.
Suggested answer: B

Explanation:

The best option for troubleshooting the issue without having execute access to workloads and nodes is to use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B. Network Connectivity Center is a service that allows you to create, manage, and monitor network connectivity across Google Cloud, hybrid, and multi-cloud environments. You can use Network Connectivity Center to perform a Connectivity Test, which is a feature that allows you to test the reachability and latency between two endpoints, such as GKE clusters, VM instances, or IP addresses. By using Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B, you can identify the layer at which the network connectivity is broken, such as the firewall, routing, or load balancing.
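
As a sketch, the same test can be created from the command line; the test name, networks, and IP addresses below are placeholders:

# Create a connectivity test between a node IP in Cluster A and a node IP in Cluster B.
gcloud network-management connectivity-tests create gke-a-to-gke-b \
  --source-ip-address=10.10.0.5 \
  --source-network=projects/my-project/global/networks/vpc-a \
  --destination-ip-address=10.20.0.7 \
  --destination-network=projects/my-project/global/networks/vpc-b \
  --protocol=TCP \
  --destination-port=443

# The result includes a reachability verdict and the step (route, firewall rule,
# peering, and so on) at which the simulated trace stops.
gcloud network-management connectivity-tests describe gke-a-to-gke-b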

You manage an application that runs in Google Kubernetes Engine (GKE) and uses the blue/green deployment methodology. Extracts of the Kubernetes manifests are shown below.

The Deployment app-green was updated to use the new version of the application. During post-deployment monitoring, you notice that the majority of user requests are failing. You did not observe this behavior in the testing environment. You need to mitigate the incident impact on users and enable the developers to troubleshoot the issue. What should you do?

A. Update the Deployment app-blue to use the new version of the application.
B. Update the Deployment app-green to use the previous version of the application.
C. Change the selector on the Service app-svc to app: my-app.
D. Change the selector on the Service app-svc to app: my-app, version: blue.
Suggested answer: D

Explanation:

The best option for mitigating the incident impact on users and enabling the developers to troubleshoot the issue is to change the selector on the Service app-svc to app: my-app, version: blue. A Service is a resource that defines how to access a set of Pods. A selector is a field that specifies which Pods are selected by the Service. By changing the selector on the Service app-svc to app: my-app, version: blue, you can ensure that the Service only routes traffic to the Pods that have both labels app: my-app and version: blue. These Pods belong to the Deployment app-blue, which uses the previous version of the application. This way, you can mitigate the incident impact on users by switching back to the working version of the application. You can also enable the developers to troubleshoot the issue with the new version of the application in the Deployment app-green without affecting users.
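
As a sketch, the selector change can be applied with a single kubectl patch, assuming the label values given in the scenario:

# Point the Service back at the blue (previous) Pods.
kubectl patch service app-svc \
  -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'

# Confirm that the Service endpoints now belong to the app-blue Deployment.
kubectl get endpoints app-svc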

You are running a web application deployed to a Compute Engine managed instance group. Ops Agent is installed on all instances. You recently noticed suspicious activity from a specific IP address. You need to configure Cloud Monitoring to view the number of requests from that specific IP address with minimal operational overhead. What should you do?

A. Configure the Ops Agent with a logging receiver. Create a logs-based metric.
B. Create a script to scrape the web server log. Export the IP address request metrics to the Cloud Monitoring API.
C. Update the application to export the IP address request metrics to the Cloud Monitoring API.
D. Configure the Ops Agent with a metrics receiver.
Suggested answer: A

Explanation:

The best option for configuring Cloud Monitoring to view the number of requests from a specific IP address with minimal operational overhead is to configure the Ops Agent with a logging receiver and create a logs-based metric. The Ops Agent is an agent that collects system metrics and logs from your VM instances and sends them to Cloud Monitoring and Cloud Logging. A logging receiver is a configuration that specifies which logs are collected by the Ops Agent and how they are processed. You can use a logging receiver to collect web server logs from your VM instances and send them to Cloud Logging. A logs-based metric is a metric that is extracted from log entries in Cloud Logging. You can use a logs-based metric to count the number of requests from a specific IP address by using a filter expression. You can then use Cloud Monitoring to view and analyze the logs-based metric.
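
As a sketch, assuming an Nginx-style access log path and an example source address (the receiver name, log path, and filter are illustrative only):

# 1. Ops Agent logging receiver that tails the web server access log.
sudo tee /etc/google-cloud-ops-agent/config.yaml <<'EOF'
logging:
  receivers:
    web_access:
      type: files
      include_paths:
        - /var/log/nginx/access.log
  service:
    pipelines:
      web:
        receivers: [web_access]
EOF
sudo systemctl restart google-cloud-ops-agent

# 2. Logs-based metric counting requests from the suspicious IP address.
gcloud logging metrics create suspicious_ip_requests \
  --description="Requests from 203.0.113.7" \
  --log-filter='logName:"web_access" AND textPayload:"203.0.113.7"'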

As part of your company's initiative to shift left on security, the InfoSec team is asking all teams to implement guard rails on all Google Kubernetes Engine (GKE) clusters to only allow the deployment of trusted and approved images. You need to determine how to satisfy the InfoSec team's goal of shifting left on security. What should you do?

A. Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods.
B. Configure Identity and Access Management (IAM) policies to create a least-privilege model on your GKE clusters.
C. Use Binary Authorization to attest images during your CI/CD pipeline.
D. Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images.
Suggested answer: C

Explanation:

The best option for implementing guard rails on all GKE clusters to only allow the deployment of trusted and approved images is to use Binary Authorization to attest images during your CI/CD pipeline. Binary Authorization is a feature that allows you to enforce signature-based validation when deploying container images. You can use Binary Authorization to create policies that specify which images are allowed or denied in your GKE clusters. You can also use Binary Authorization to attest images during your CI/CD pipeline by using tools such as Container Analysis or third-party integrations. An attestation is a digital signature that certifies that an image meets certain criteria, such as passing vulnerability scans or code reviews. By using Binary Authorization to attest images during your CI/CD pipeline, you can ensure that only trusted and approved images are deployed to your GKE clusters.
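
As a sketch of the attestation step a pipeline might run after building an image (the attestor, KMS key, and image digest below are placeholders, and the cluster's Binary Authorization policy must separately be configured to require attestations from that attestor):

# Sign the digest of the image that was just built and create an attestation for it.
gcloud beta container binauthz attestations sign-and-create \
  --artifact-url="us-docker.pkg.dev/my-project/apps/api@sha256:<digest>" \
  --attestor="ci-attestor" \
  --attestor-project="my-project" \
  --keyversion-project="my-project" \
  --keyversion-location="global" \
  --keyversion-keyring="binauthz" \
  --keyversion-key="ci-signer" \
  --keyversion="1"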

You have an application that runs in Google Kubernetes Engine (GKE). The application consists of several microservices that are deployed to GKE by using Deployments and Services. One of the microservices is experiencing an issue where a Pod returns 403 errors after the Pod has been running for more than five hours. Your development team is working on a solution, but the issue will not be resolved for a month. You need to ensure continued operations until the microservice is fixed. You want to follow Google-recommended practices and use the fewest number of steps. What should you do?

A. Create a cron job to terminate any Pods that have been running for more than five hours.
B. Add an HTTP liveness probe to the microservice's Deployment.
C. Monitor the Pods and terminate any Pods that have been running for more than five hours.
D. Configure an alert to notify you whenever a Pod returns 403 errors.
Suggested answer: B

Explanation:

The best option for ensuring continued operations until the microservice is fixed is to add a HTTP liveness probe to the microservice's deployment. A HTTP liveness probe is a type of probe that checks if a Pod is alive by sending an HTTP request and expecting a success response code. If the probe fails, Kubernetes will restart the Pod. You can add a HTTP liveness probe to your microservice's deployment by using a livenessProbe field in your Pod spec. This way, you can ensure that any Pod that returns 403 errors after running for more than five hours will be restarted automatically and resume normal operations.
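
As a sketch, assuming the Deployment and container are named my-microservice and that the probed path returns 403 once the Pod enters the bad state (the path, port, and thresholds are placeholders):

# Strategic-merge patch that adds an HTTP liveness probe to the existing container
# (requires a kubectl version that supports --patch-file).
cat <<'EOF' > liveness-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: my-microservice
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 30
          failureThreshold: 3
EOF
kubectl patch deployment my-microservice --patch-file liveness-patch.yaml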

You want to share a Cloud Monitoring custom dashboard with a partner team. What should you do?

A. Provide the partner team with the dashboard URL to enable the partner team to create a copy of the dashboard.
B. Export the metrics to BigQuery. Use Looker Studio to create a dashboard, and share the dashboard with the partner team.
C. Copy the Monitoring Query Language (MQL) query from the dashboard, and send the MQL query to the partner team.
D. Download the JSON definition of the dashboard, and send the JSON file to the partner team.
Suggested answer: A

Explanation:

The best option for sharing a Cloud Monitoring custom dashboard with a partner team is to provide the partner team with the dashboard URL to enable the partner team to create a copy of the dashboard. A Cloud Monitoring custom dashboard is a dashboard that allows you to create and customize charts and widgets to display metrics, logs, and traces from your Google Cloud resources and applications. You can share a custom dashboard with a partner team by providing them with the dashboard URL, which is a link that allows them to view the dashboard in their browser. The partner team can then create a copy of the dashboard in their own project by using the Copy Dashboard option. This way, they can access and modify the dashboard without affecting the original one.
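
As a small sketch of locating the dashboard to share (the project is assumed to be already selected with gcloud config, and the console URL shape is approximate):

# List custom dashboards in the current project to find the one to share.
gcloud monitoring dashboards list --format="table(name,displayName)"

# The link sent to the partner team is the dashboard's console URL, roughly:
#   https://console.cloud.google.com/monitoring/dashboards/builder/DASHBOARD_ID?project=PROJECT_ID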

You are building an application that runs on Cloud Run. The application needs to access a third-party API by using an API key. You need to determine a secure way to store and use the API key in your application by following Google-recommended practices. What should you do?

A. Save the API key in Secret Manager as a secret. Reference the secret as an environment variable in the Cloud Run application.
B. Save the API key in Secret Manager as a secret key. Mount the secret key under the /sys/api_key directory, and decrypt the key in the Cloud Run application.
C. Save the API key in Cloud Key Management Service (Cloud KMS) as a key. Reference the key as an environment variable in the Cloud Run application.
D. Encrypt the API key by using Cloud Key Management Service (Cloud KMS) and pass the key to Cloud Run as an environment variable. Decrypt and use the key in Cloud Run.
Suggested answer: A

Explanation:

The best option for storing and using the API key in your application by following Google-recommended practices is to save the API key in Secret Manager as a secret and reference the secret as an environment variable in the Cloud Run application. Secret Manager is a service that allows you to store and manage sensitive data, such as API keys, passwords, and certificates, in Google Cloud. A secret is a resource that represents a logical secret, such as an API key. You can save the API key in Secret Manager as a secret, grant the service's identity the Secret Manager Secret Accessor role, and expose the secret to the Cloud Run service as an environment variable (for example, with the --set-secrets flag when deploying or updating the service). This way, you can securely store and use the API key in your application without exposing it in your code or configuration files.
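
As a sketch of the full flow with gcloud (the secret name, service account, and image path are placeholders):

# Store the API key as a secret; the value is piped from stdin here for illustration.
echo -n "example-api-key-value" | \
  gcloud secrets create third-party-api-key --data-file=-

# Allow the Cloud Run service's identity to read the secret.
gcloud secrets add-iam-policy-binding third-party-api-key \
  --member="serviceAccount:booking-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# Expose the secret to the service as the API_KEY environment variable.
gcloud run deploy booking-engine \
  --image=us-docker.pkg.dev/my-project/apps/booking-engine:latest \
  --set-secrets=API_KEY=third-party-api-key:latest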

You are currently planning how to display Cloud Monitoring metrics for your organization's Google Cloud projects. Your organization has three folders and six projects:

You want to configure Cloud Monitoring dashboards to only display metrics from the projects within one folder. You need to ensure that the dashboards do not display metrics from projects in the other folders. You want to follow Google-recommended practices. What should you do?

A. Create a single new scoping project.
B. Create new scoping projects for each folder.
C. Use the current app-one-prod project as the scoping project.
D. Use the current app-one-dev, app-one-staging, and app-one-prod projects as the scoping project for each folder.
Suggested answer: B

Explanation:

The best option for configuring Cloud Monitoring dashboards to only display metrics from the projects within one folder is to create new scoping projects for each folder. A scoping project hosts a metrics scope, which defines the set of projects whose metrics that project can chart and monitor. By creating one scoping project per folder and adding only that folder's projects to its metrics scope (through the Monitoring Settings page or the Monitoring API), each scoping project sees metrics from a single folder and nothing else. You can then build the Cloud Monitoring dashboards in the matching scoping project so that they only display metrics from the projects within that folder.
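
As a sketch, assuming the beta gcloud surface for metrics scopes is available (if it is not, the same change can be made in the console under Monitoring > Settings or through the Monitoring metricsScopes API); the project IDs are placeholders:

# Run once for each project in the folder, against that folder's scoping project.
# Note: this beta command is an assumption based on current documentation.
gcloud beta monitoring metrics-scopes create projects/app-one-dev \
  --project=folder-one-scoping-project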

Your company's security team needs to have read-only access to Data Access audit logs in the _Required bucket. You want to provide your security team with the necessary permissions following the principle of least privilege and Google-recommended practices. What should you do?

A. Assign the roles/logging.viewer role to each member of the security team.
B. Assign the roles/logging.viewer role to a group with all the security team members.
C. Assign the roles/logging.privateLogViewer role to each member of the security team.
D. Assign the roles/logging.privateLogViewer role to a group with all the security team members.
Suggested answer: D

Explanation:

The best option for providing your security team with the necessary permissions following the principle of least privilege and Google-recommended practices is to assign the roles/logging.privateLogViewer role to a group with all the security team members. The roles/logging.privateLogViewer role is a predefined role that grants read-only access to Data Access audit logs and other private logs in Cloud Logging. A group is a collection of users that can be assigned roles and permissions as a single unit. You can assign the roles/logging.privateLogViewer role to a group with all the security team members by using IAM policies. This way, you can provide your security team with the minimum level of access they need to view Data Access audit logs in the _Required bucket.
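
As a sketch of the grant at the project level (the project ID and group address are placeholders):

# Grant read access to Data Access (private) logs to the security team's group.
gcloud projects add-iam-policy-binding my-project \
  --member="group:security-team@example.com" \
  --role="roles/logging.privateLogViewer"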
