Google Professional Cloud DevOps Engineer Practice Test - Questions Answers, Page 13
List of questions
Question 121

Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?
Explanation:
The best option for managing all charts uniformly, with native access control and VPC Service Controls, is to store both the public and private charts in OCI format in Artifact Registry. Artifact Registry is a managed Google Cloud service for storing container images and other artifacts, and it supports the OCI format, an open standard that covers Helm charts as well as container images. By hosting the public charts you depend on in Artifact Registry alongside your private charts, you remove the dependency on a public Helm repository and manage every chart in one place. You can then use Artifact Registry's native access controls, such as IAM policies and VPC Service Controls, to secure the charts and control who can access them.
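As a rough illustration, the commands below sketch how a chart could be pushed to and pulled from an Artifact Registry repository in OCI format (Helm 3.8 or later); the region, project ID, repository, chart name, and version are placeholder assumptions, not values from the question.

```sh
# Authenticate Helm to Artifact Registry with an OAuth access token.
gcloud auth print-access-token | \
  helm registry login -u oauth2accesstoken --password-stdin https://us-central1-docker.pkg.dev

# Package a local chart, then push it to the private repository in OCI format.
helm package my-chart/
helm push my-chart-0.1.0.tgz oci://us-central1-docker.pkg.dev/my-project/my-helm-repo

# Consumers pull the chart from the same repository instead of a public one.
helm pull oci://us-central1-docker.pkg.dev/my-project/my-helm-repo/my-chart --version 0.1.0
```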
Question 122

You use Terraform to manage an application deployed to a Google Cloud environment. The application runs on instances deployed by a managed instance group. The Terraform code is deployed by using a CI/CD pipeline. When you change the machine type on the instance template used by the managed instance group, the pipeline fails at the terraform apply stage with the following error message:
You need to update the instance template and minimize disruption to the application and the number of pipeline runs. What should you do?
Explanation:
The best option for updating the instance template while minimizing disruption to the application and the number of pipeline runs is to set the create_before_destroy meta-argument to true in a lifecycle block on the instance template. create_before_destroy tells Terraform to create the replacement resource before destroying the existing one during an update. This avoids downtime and errors when updating a resource that is still referenced by another resource, such as an instance template in use by a managed instance group. With create_before_destroy set to true, Terraform creates a new instance template with the updated machine type, updates the managed instance group to reference it, and only then deletes the old template, all within a single pipeline run.
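As a minimal sketch (the resource name, image, and machine type are illustrative, not taken from the question), the lifecycle block sits on the instance template resource:

```hcl
resource "google_compute_instance_template" "app" {
  # name_prefix lets Terraform generate a fresh name for the replacement
  # template so it does not collide with the template still in use.
  name_prefix  = "app-template-"
  machine_type = "e2-standard-4"

  disk {
    source_image = "debian-cloud/debian-12"
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    # Create the replacement template before destroying the old one, so the
    # managed instance group never references a deleted template mid-apply.
    create_before_destroy = true
  }
}
```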
Question 123

You are performing a semi-annual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month for the next six months. Your service is fully containerized and runs on a Google Kubernetes Engine (GKE) Standard cluster across three zones with cluster autoscaling enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure, while you avoid unnecessary costs. How should you prepare to handle the predicted growth?
Explanation:
The best option for preparing to handle the predicted growth is to verify the maximum node pool size, enable a Horizontal Pod Autoscaler, and then perform a load test to verify your expected resource needs. The maximum node pool size is a parameter that specifies the maximum number of nodes that can be added to a node pool by the cluster autoscaler. You should verify that the maximum node pool size is sufficient to accommodate your expected growth rate and avoid hitting any quota limits. The Horizontal Pod Autoscaler is a feature that automatically adjusts the number of Pods in a deployment or replica set based on observed CPU utilization or custom metrics. You should enable a Horizontal Pod Autoscaler for your application to ensure that it runs enough Pods to handle the load. A load test is a test that simulates high user traffic and measures the performance and reliability of your application. You should perform a load test to verify your expected resource needs and identify any bottlenecks or issues.
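For illustration only (the cluster, node pool, Deployment name, and autoscaling thresholds are assumptions, not values from the question), the checks and the autoscaler setup could look like this:

```sh
# Check the maximum node count currently configured for the cluster autoscaler.
gcloud container node-pools describe default-pool \
  --cluster=flagship-cluster --region=us-central1 \
  --format="value(autoscaling.maxNodeCount)"

# Enable a Horizontal Pod Autoscaler on the application Deployment.
kubectl autoscale deployment flagship-service --cpu-percent=70 --min=3 --max=30

# Confirm the HPA is tracking CPU utilization before running the load test.
kubectl get hpa flagship-service
```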
Question 124

Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?
Question 125

You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?
Question 126

You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?
Question 127

Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs while ensuring that the data is processed as quickly as possible. What should you do?
Question 128

You are reviewing your deployment pipeline in Google Cloud Deploy. You must reduce toil in the pipeline, and you want to minimize the amount of time it takes to complete an end-to-end deployment. What should you do?
Choose 2 answers
Question 129

You work for a global organization and are running a monolithic application on Compute Engine. You need to select the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps. You want to use historical system metrics to identify the machine type for the application to use. You want to follow Google-recommended practices. What should you do?
Question 130

You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do?