
Google Professional Cloud DevOps Engineer Practice Test - Questions & Answers, Page 13

Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?

A. Store public and private charts in OCI format by using Artifact Registry.
B. Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider.
C. Store public and private charts in a Git repository. Configure Cloud Build to synchronize the contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage.googleapis.com/[helmchart] as the Helm repository.
D. Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with a Cloud Storage bucket as the storage backend.
Suggested answer: A

Explanation:

The best option for managing all charts uniformly, with native access control and VPC Service Controls, is to store public and private charts in OCI format by using Artifact Registry. Artifact Registry is a Google Cloud service for storing and managing container images and other artifacts, and it supports the OCI format, an open standard that covers Helm charts as well as container images. By copying the public charts you depend on into Artifact Registry alongside your private charts, you remove the direct dependency on a public Helm repository and manage everything in one place. You can then secure the charts with Artifact Registry's native IAM-based access control and place the registry inside a VPC Service Controls perimeter.
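
For illustration, here is a minimal sketch of pushing a packaged chart to Artifact Registry as an OCI artifact by wrapping the Helm CLI from Python. It assumes Helm 3.8+, Application Default Credentials, and a Docker-format repository named charts in us-central1 of a project called my-project (all hypothetical names); public charts you depend on can be pulled and re-pushed the same way so that every chart is served from the registry.

```python
"""Sketch only: push a locally packaged Helm chart to Artifact Registry (OCI).
Assumes Helm 3.8+ and Application Default Credentials; the registry path and
chart archive name below are hypothetical."""
import subprocess

import google.auth
import google.auth.transport.requests

REGISTRY = "us-central1-docker.pkg.dev"
REPO_PATH = "my-project/charts"        # hypothetical project/repository
CHART_ARCHIVE = "mychart-0.1.0.tgz"    # produced earlier by `helm package`

# Obtain an OAuth2 access token for the active credentials.
credentials, _ = google.auth.default()
credentials.refresh(google.auth.transport.requests.Request())

# Log the Helm OCI client in to Artifact Registry with the token.
subprocess.run(
    ["helm", "registry", "login", "-u", "oauth2accesstoken",
     "--password-stdin", REGISTRY],
    input=credentials.token.encode(),
    check=True,
)

# Push the chart; it is stored as an OCI artifact next to your private charts.
subprocess.run(
    ["helm", "push", CHART_ARCHIVE, f"oci://{REGISTRY}/{REPO_PATH}"],
    check=True,
)
```

Clients can then pull with helm pull oci://us-central1-docker.pkg.dev/my-project/charts/mychart, while IAM on the repository and a VPC Service Controls perimeter govern who and what can reach it.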

You use Terraform to manage an application deployed to a Google Cloud environment. The application runs on instances deployed by a managed instance group. The Terraform code is deployed by using a CI/CD pipeline. When you change the machine type on the instance template used by the managed instance group, the pipeline fails at the terraform apply stage with the following error message:

You need to update the instance template and minimize disruption to the application and the number of pipeline runs. What should you do?

A. Delete the managed instance group and recreate it after updating the instance template.
B. Add a new instance template, update the managed instance group to use the new instance template, and delete the old instance template.
C. Remove the managed instance group from the Terraform state file, update the instance template, and reimport the managed instance group.
D. Set the create_before_destroy meta-argument to true in the lifecycle block on the instance template.
Suggested answer: D

Explanation:

The best option for updating the instance template and minimizing disruption to the application and the number of pipeline runs is to set the create_before_destroy meta-argument to true in the lifecycle block on the instance template. The create_before_destroy meta-argument is a Terraform feature that specifies that a new resource should be created before destroying an existing one during an update. This way, you can avoid downtime and errors when updating a resource that is in use by another resource, such as an instance template that is used by a managed instance group. By setting the create_before_destroy meta-argument to true in the lifecycle block on the instance template, you can ensure that Terraform creates a new instance template with the updated machine type, updates the managed instance group to use the new instance template, and then deletes the old instance template.
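
The following is not Terraform itself, but a rough Python sketch (using the google-cloud-compute client) of the create-then-switch-then-delete ordering that create_before_destroy produces for you automatically; the project, zone, template names, MIG name, and machine type are hypothetical.

```python
"""Illustrative sketch of the ordering create_before_destroy provides: create
the replacement instance template, repoint the MIG, then delete the old one.
All resource names are hypothetical."""
from google.cloud import compute_v1

PROJECT = "my-project"
ZONE = "us-central1-a"
MIG_NAME = "app-mig"

templates = compute_v1.InstanceTemplatesClient()
migs = compute_v1.InstanceGroupManagersClient()

# 1. Create the new template: a copy of the old one with a bigger machine type.
old = templates.get(project=PROJECT, instance_template="app-template-v1")
new = compute_v1.InstanceTemplate()
new.name = "app-template-v2"
new.properties = old.properties
new.properties.machine_type = "e2-standard-4"
templates.insert(project=PROJECT, instance_template_resource=new).result()

# 2. Switch the managed instance group to the new template.
migs.set_instance_template(
    project=PROJECT,
    zone=ZONE,
    instance_group_manager=MIG_NAME,
    instance_group_managers_set_instance_template_request_resource=(
        compute_v1.InstanceGroupManagersSetInstanceTemplateRequest(
            instance_template=f"global/instanceTemplates/{new.name}"
        )
    ),
).result()

# 3. Only now is it safe to delete the old template.
templates.delete(project=PROJECT, instance_template="app-template-v1").result()
```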

You are performing a semi-annual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month for the next six months. Your service is fully containerized and runs on a Google Kubernetes Engine (GKE) Standard cluster across three zones with cluster autoscaling enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure, while you avoid unnecessary costs. How should you prepare to handle the predicted growth?

A. Verify the maximum node pool size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected resource needs.
B. Because you deployed the service on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically regardless of growth rate.
C. Because you are only using 30% of deployed CPU capacity, there is significant headroom and you do not need to add any additional capacity for this rate of growth.
D. Proactively add 80% more node capacity to account for six months of 10% growth rate, and then perform a load test to ensure that you have enough capacity.
Suggested answer: A

Explanation:

The best option for preparing to handle the predicted growth is to verify the maximum node pool size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected resource needs. The maximum node pool size is a parameter that specifies the maximum number of nodes that can be added to a node pool by the cluster autoscaler. You should verify that the maximum node pool size is sufficient to accommodate your expected growth rate and avoid hitting any quota limits. The Horizontal Pod Autoscaler is a feature that automatically adjusts the number of Pods in a deployment or replica set based on observed CPU utilization or custom metrics. You should enable a Horizontal Pod Autoscaler for your application to ensure that it runs enough Pods to handle the load. A load test is a test that simulates high user traffic and measures the performance and reliability of your application. You should perform a load test to verify your expected resource needs and identify any bottlenecks or issues.
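
As a minimal sketch, the Horizontal Pod Autoscaler could be created with the official Kubernetes Python client as shown below; the Deployment name, namespace, and replica bounds are hypothetical and should come out of your load test and the node pool limits you verified.

```python
"""Sketch: enable a Horizontal Pod Autoscaler for an existing Deployment with
the Kubernetes Python client. Names and bounds below are hypothetical."""
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside GKE

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="flagship-service"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="flagship-service"
        ),
        min_replicas=3,                      # at least one replica per zone
        max_replicas=30,                     # confirm the node pool max can host this
        target_cpu_utilization_percentage=60,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```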

Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?

A. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into a BigQuery dataset.
B. Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
C. Use Cloud Logging to configure an export sink at each project level to export all logs into a BigQuery dataset.
D. Use Cloud Logging to configure an export sink at each project level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
Suggested answer: B

Explanation:

The best option for storing all organization logs for seven years and avoiding any future loss of log capture or stored logs due to misconfiguration or human error is to use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock. Cloud Logging is a service that allows you to collect and manage logs from your Google Cloud resources and applications. An aggregated sink is a sink that collects logs from multiple sources, such as projects, folders, or organizations. You can use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage, which is a service that allows you to store and access data in Google Cloud. A retention policy is a policy that specifies how long objects in a bucket are retained before they are deleted. Bucket Lock is a feature that allows you to lock a retention policy on a bucket and prevent it from being reduced or removed. You can use Cloud Storage with a seven-year retention policy and Bucket Lock to ensure that your logs are stored for seven years and protected from accidental or malicious deletion.
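
A minimal sketch of the retention half of this setup, using the google-cloud-storage client, is shown below; the bucket name is hypothetical, and the organization-level aggregated sink (created with include_children enabled) would use this bucket as its destination.

```python
"""Sketch: lock a seven-year retention policy on the Cloud Storage bucket that
the organization-level aggregated sink writes to. The bucket name is
hypothetical; the sink itself is configured separately in Cloud Logging."""
from google.cloud import storage

SEVEN_YEARS_SECONDS = 7 * 365 * 24 * 60 * 60

client = storage.Client()
bucket = client.get_bucket("org-audit-logs-archive")  # hypothetical bucket

# Apply the retention policy, then lock it so it cannot be reduced or removed.
bucket.retention_period = SEVEN_YEARS_SECONDS
bucket.patch()
bucket.lock_retention_policy()  # irreversible: protects against human error
```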

You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?

A. Run the kubectl rollout undo command.
B. Delete the new container image, and delete the running Pods.
C. Update the Kubernetes Service to point to the previous Kubernetes Deployment.
D. Scale the new Kubernetes Deployment to zero.
Suggested answer: C

Explanation:

The best option for implementing the rollback actions is to update the Kubernetes Service to point to the previous Kubernetes Deployment. A Kubernetes Service is a resource that defines how to access a set of Pods. A Kubernetes Deployment is a resource that manages the creation and update of Pods. By using the blue/green deployment methodology, you can create two Deployments, one for the current version (blue) and one for the new version (green), and use a Service to switch traffic between them. If you need to rollback, you can update the Service to point to the previous Deployment (blue) and stop sending traffic to the new Deployment (green).
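
As a minimal sketch, the rollback can be a single patch of the Service with the Kubernetes Python client; the Service name, namespace, and version labels are hypothetical.

```python
"""Sketch: repoint the Service selector from the green Deployment's Pods back
to the blue Deployment's Pods. Assumes the Deployments label their Pods with a
"version" label; all names below are hypothetical."""
from kubernetes import client, config

config.load_kube_config()

rollback_patch = {
    "spec": {
        "selector": {"app": "myapp", "version": "blue"}  # was "green"
    }
}

client.CoreV1Api().patch_namespaced_service(
    name="myapp", namespace="default", body=rollback_patch
)
```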

You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?

A. Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.
B. Create a Pub/Sub topic, subscription, and logging sink. Configure the logging sink to send all logs into the topic. Give your client access to the topic to retrieve the logs.
C. Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.
D. Create a logs bucket and logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.
Suggested answer: D

Explanation:

The best option for retaining all logs for one year while minimizing code changes is to create a logs bucket and logging sink, set the retention on the logs bucket to 365 days, configure the logging sink to send logs to the bucket, and give your client access to the bucket to retrieve the logs. Here, the logs bucket is a Cloud Logging log bucket: a managed storage container for log entries whose retention period you can configure. A logging sink is a resource that defines where log entries are routed, such as a log bucket, a BigQuery dataset, a Cloud Storage bucket, or a Pub/Sub topic. Because Cloud Run and Cloud Functions already write to Cloud Logging, the sink captures their logs without any application changes. Setting the log bucket's retention to 365 days ensures the logs remain available for one year, and you can grant your client IAM access to read the logs (for example, through a log view on the bucket) so they can import them into their own logging service.
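
A minimal sketch of this setup with the Cloud Logging configuration client is shown below; the project ID, bucket ID, and sink name are hypothetical, and in practice you would likely add a filter so the sink only routes the client's application logs.

```python
"""Sketch: create a Cloud Logging log bucket with 365-day retention and a sink
that routes the project's logs into it. Resource names are hypothetical."""
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client

PROJECT = "projects/my-project"
LOCATION = f"{PROJECT}/locations/global"
BUCKET_ID = "client-logs"

config_client = ConfigServiceV2Client()

# 1. Log bucket with one-year retention.
config_client.create_bucket(
    request={
        "parent": LOCATION,
        "bucket_id": BUCKET_ID,
        "bucket": {"retention_days": 365},
    }
)

# 2. Sink that routes log entries into the bucket (no application changes).
config_client.create_sink(
    request={
        "parent": PROJECT,
        "sink": {
            "name": "client-logs-sink",
            "destination": f"logging.googleapis.com/{LOCATION}/buckets/{BUCKET_ID}",
        },
    }
)
```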

Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs while ensuring that the data is processed as quickly as possible. What should you do?

A. Provide a secure file transfer protocol (SFTP) server on a Compute Engine instance so that third parties can upload batches of data, and provide appropriate credentials to the server. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
B. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use a standard Google Kubernetes Engine (GKE) cluster and maintain two services: one that processes the batches of data and one that monitors Cloud Storage for new batches of data. Stop the processing service when there are no batches of data to process.
C. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
D. Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use Cloud Monitoring to detect new batches of data in the bucket and trigger a Cloud Function that processes the data. Set the Cloud Function to use the largest CPU possible to minimize the runtime of the processing.
Suggested answer: C

Explanation:

The best option for ensuring that third parties are able to upload their data securely and minimizing costs while ensuring that the data is processed as quickly as possible is to provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket; create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger; write code so that the function can scale up a Compute Engine autoscaling managed instance group; use an image pre-loaded with the data processing software that terminates the instances when processing completes. A Cloud Storage bucket is a resource that allows you to store and access data in Google Cloud. You can provide a Cloud Storage bucket so that third parties can upload batches of data securely and conveniently. You can also provide appropriate IAM access to the bucket by using roles and policies to control who can read or write data to the bucket. A Cloud Function is a serverless function that executes code in response to an event, such as a change in a Cloud Storage bucket. A google.storage.object.finalize trigger is a type of trigger that fires when a new object is created or an existing object is overwritten in a Cloud Storage bucket. You can create a Cloud Function with a google.storage.object.finalize trigger so that the function runs whenever a new batch of data is uploaded to the bucket. You can write code so that the function can scale up a Compute Engine autoscaling managed instance group, which is a group of VM instances that automatically adjusts its size based on load or custom metrics. You can use an image pre-loaded with the data processing software that terminates the instances when processing completes, which means that the instances only run when there is data to process and stop when they are done. This way, you can minimize costs while ensuring that the data is processed as quickly as possible.
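
A minimal sketch of such a function (a 1st-generation Python background Cloud Function bound to a google.storage.object.finalize trigger) is shown below; the project, zone, managed instance group name, and scaling cap are hypothetical, and the real scaling logic would depend on how batch size maps to CPU needs.

```python
"""Sketch: Cloud Function fired when a new batch lands in the upload bucket.
It scales up a zonal managed instance group so that pre-loaded worker VMs pick
up the batch. Project, zone, and MIG names are hypothetical."""
from google.cloud import compute_v1

PROJECT = "my-project"
ZONE = "us-central1-a"
MIG_NAME = "batch-workers"
MAX_WORKERS = 10


def process_batch(event, context):
    """Triggered by google.storage.object.finalize on the upload bucket."""
    print(f"New batch uploaded: gs://{event['bucket']}/{event['name']}")

    migs = compute_v1.InstanceGroupManagersClient()
    mig = migs.get(project=PROJECT, zone=ZONE, instance_group_manager=MIG_NAME)

    # Add one worker per new batch, up to a cap; the workers terminate
    # themselves (via the pre-loaded image) when processing completes.
    new_size = min(mig.target_size + 1, MAX_WORKERS)
    migs.resize(
        project=PROJECT,
        zone=ZONE,
        instance_group_manager=MIG_NAME,
        size=new_size,
    )
```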

You are reviewing your deployment pipeline in Google Cloud Deploy. You must reduce toil in the pipeline, and you want to minimize the amount of time it takes to complete an end-to-end deployment. What should you do?

Choose 2 answers

A. Create a trigger to notify the required team to complete the next step when manual intervention is required.
B. Divide the automation steps into smaller tasks.
C. Use a script to automate the creation of the deployment pipeline in Google Cloud Deploy.
D. Add more engineers to finish the manual steps.
E. Automate promotion approvals from the development environment to the test environment.
Suggested answer: A, E

Explanation:

The best options for reducing toil in the pipeline and minimizing the amount of time it takes to complete an end-to-end deployment are to create a trigger to notify the required team to complete the next step when manual intervention is required and to automate promotion approvals from the development environment to the test environment. A trigger is a resource that initiates a deployment when an event occurs, such as a code change, a schedule, or a manual request. You can create a trigger to notify the required team to complete the next step when manual intervention is required by using Cloud Build or Cloud Functions. This way, you can reduce the waiting time and human errors in the pipeline. A promotion approval is a process that allows you to approve or reject a deployment from one environment to another, such as from development to test. You can automate promotion approvals from the development environment to the test environment by using Google Cloud Deploy or Cloud Build. This way, you can speed up the deployment process and avoid manual steps.
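
One possible way to wire up the automated approval is a small Pub/Sub-triggered Cloud Function that listens to Cloud Deploy approval notifications and approves rollouts into the test target; this is a sketch only, and the message attribute names ("Action", "TargetId", "Rollout") and the target ID "test" are assumptions that you would verify against your own notification payloads.

```python
"""Sketch: auto-approve Cloud Deploy rollouts headed for the test target.
Subscribed to Cloud Deploy's approvals notification topic; attribute names and
the target ID below are assumptions to verify in your environment."""
from google.cloud import deploy_v1

deploy_client = deploy_v1.CloudDeployClient()


def auto_approve(event, context):
    attributes = event.get("attributes") or {}

    # Only act on rollouts that are waiting for approval into the test target.
    if attributes.get("Action") != "Required":
        return
    if attributes.get("TargetId") != "test":
        return

    rollout_name = attributes.get("Rollout")  # full resource name of the rollout
    if not rollout_name:
        return

    deploy_client.approve_rollout(
        request={"name": rollout_name, "approved": True}
    )
    print(f"Auto-approved rollout {rollout_name}")
```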

You work for a global organization and are running a monolithic application on Compute Engine. You need to select the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps. You want to use historical system metrics to identify the machine type for the application to use. You want to follow Google-recommended practices. What should you do?

A. Use the Recommender API and apply the suggested recommendations.
B. Create an Agent Policy to automatically install Ops Agent in all VMs.
C. Install the Ops Agent in a fleet of VMs by using the gcloud CLI.
D. Review the Cloud Monitoring dashboard for the VM and choose the machine type with the lowest CPU utilization.
Suggested answer: A

Explanation:

The best option for selecting the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps is to use the Recommender API and apply the suggested recommendations. The Recommender API is a service that provides recommendations for optimizing your Google Cloud resources, such as Compute Engine instances, disks, and firewalls. You can use the Recommender API to get recommendations for changing the machine type of your Compute Engine instances based on historical system metrics, such as CPU utilization. You can also apply the suggested recommendations by using the Recommender API or Cloud Console. This way, you can optimize CPU utilization by using the most suitable machine type for your application with minimal effort.
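
As a minimal sketch, the machine-type recommendations can be retrieved programmatically with the Recommender client library; the project ID and zone below are hypothetical.

```python
"""Sketch: list machine-type recommendations for Compute Engine instances with
the Recommender client. Project ID and zone are hypothetical."""
from google.cloud import recommender_v1

PROJECT = "my-project"
ZONE = "us-central1-a"

client = recommender_v1.RecommenderClient()
parent = (
    f"projects/{PROJECT}/locations/{ZONE}"
    "/recommenders/google.compute.instance.MachineTypeRecommender"
)

for recommendation in client.list_recommendations(parent=parent):
    # Each recommendation describes a suggested machine type change based on
    # historical utilization, which you can then apply to the instance.
    print(recommendation.name)
    print(recommendation.description)
```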


You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do?

A. Add the Logs Writer role to the service account.
B. Enable Private Google Access on the subnet that the instance is in.
C. Update the instance to use the default Compute Engine service account.
D. Export the service account key and configure the agents to use the key.
Suggested answer: A

Explanation:

The correct answer is A: add the Logs Writer role to the service account.

To use Cloud Logging, the service account attached to the Compute Engine instance must have permission to write log entries. The Logs Writer role (roles/logging.logWriter) provides this permission, and you can grant it to the user-managed service account at the project, folder, or organization level [1].

Private Google Access is not required here: it exists so that instances without external IP addresses can reach Google APIs and services [2], and this instance has a public IP address. Switching to the default Compute Engine service account would work, because it has permission to write logs, but using it for user applications is not a recommended practice [3]. Exporting the service account key and configuring the agents to use it is not a secure way to authenticate, because it exposes the key to potential compromise [4].
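
A minimal sketch of granting the role at the project level through the Resource Manager API is shown below; the project ID and service account email are hypothetical, and the same binding can equally be added in the console or with gcloud.

```python
"""Sketch: grant roles/logging.logWriter to the user-managed service account
at the project level. Project ID and service account email are hypothetical."""
import google.auth
from googleapiclient import discovery

PROJECT_ID = "my-project"
MEMBER = "serviceAccount:app-sa@my-project.iam.gserviceaccount.com"
ROLE = "roles/logging.logWriter"

credentials, _ = google.auth.default()
crm = discovery.build("cloudresourcemanager", "v1", credentials=credentials)

# Read-modify-write the project IAM policy to add the binding.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
bindings = policy.setdefault("bindings", [])
binding = next((b for b in bindings if b["role"] == ROLE), None)
if binding is None:
    binding = {"role": ROLE, "members": []}
    bindings.append(binding)
if MEMBER not in binding["members"]:
    binding["members"].append(MEMBER)

crm.projects().setIamPolicy(
    resource=PROJECT_ID, body={"policy": policy}
).execute()
```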

[1] Access control with IAM | Cloud Logging | Google Cloud
[2] Private Google Access overview | VPC | Google Cloud
[3] Service accounts | Compute Engine Documentation | Google Cloud
[4] Best practices for securing service accounts | IAM Documentation | Google Cloud
