Google Professional Cloud Developer Practice Test - Questions Answers, Page 23

You have a container deployed on Google Kubernetes Engine. The container can sometimes be slow to launch, so you have implemented a liveness probe. You notice that the liveness probe occasionally fails on launch. What should you do?

A. Add a startup probe.
B. Increase the initial delay for the liveness probe.
C. Increase the CPU limit for the container.
D. Add a readiness probe.
Suggested answer: B

Explanation:

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
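
For reference, option B corresponds to raising initialDelaySeconds on the liveness probe. A minimal sketch of such a Pod spec, with placeholder names, image, and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-app                # hypothetical Pod name
spec:
  containers:
  - name: app
    image: us-docker.pkg.dev/example-project/images/app:latest   # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz                # assumed health endpoint
        port: 8080
      initialDelaySeconds: 60         # give the slow-launching container time before the first check
      periodSeconds: 10
      failureThreshold: 3
```

The same Kubernetes page also documents startupProbe (option A), which holds off liveness checking until the container has finished starting.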

You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm's effect on sales in a randomized way. How should you test this feature?

A. Split traffic between versions using weights.
B. Enable the new recommendation feature flag on a single instance.
C. Mirror traffic to the new version of your application.
D. Use HTTP header-based routing.
Suggested answer: A

Explanation:

https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_actions_weight-based_traffic_splitting

Deploying a new version of an existing production service generally incurs some risk. Even if your tests pass in staging, you probably don't want to subject 100% of your users to the new version immediately. With traffic management, you can define percentage-based traffic splits across multiple backend services.

For example, you can send 95% of the traffic to the previous version of your service and 5% to the new version of your service. After you've validated that the new production version works as expected, you can gradually shift the percentages until 100% of the traffic reaches the new version of your service. Traffic splitting is typically used for deploying new versions, A/B testing, service migration, and similar processes.

https://cloud.google.com/traffic-director/docs/advanced-traffic-management#weight-based_traffic_splitting_for_safer_deployments

https://cloud.google.com/architecture/implementing-deployment-and-testing-strategies-on-gke#split_the_traffic_2

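As a rough illustration of the weight-based split described above (not the question's actual configuration), a global external HTTP(S) load balancer URL map imported with gcloud compute url-maps import can split traffic 95/5 between two backend services; all names below are placeholders:

```yaml
name: ecommerce-url-map
defaultService: projects/example-project/global/backendServices/recommender-v1
hostRules:
- hosts:
  - '*'
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultService: projects/example-project/global/backendServices/recommender-v1
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
    routeAction:
      weightedBackendServices:
      - backendService: projects/example-project/global/backendServices/recommender-v1
        weight: 95                    # current recommendation algorithm
      - backendService: projects/example-project/global/backendServices/recommender-v2
        weight: 5                     # new algorithm under A/B test
```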

You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do?

A. Perform a rolling update with a PodDisruptionBudget of 80%.
B. Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.
C. Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%.
D. Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.
Suggested answer: A

Explanation:

https://cloud.google.com/blog/products/containers-kubernetes/ensuring-reliability-and-uptime-for-your-gke-cluster

Setting PodDisruptionBudget ensures that your workloads have a sufficient number of replicas, even during maintenance. Using the PDB, you can define a number (or percentage) of pods that can be terminated, even if terminating them brings the current replica count below the desired value. With PDB configured, Kubernetes will drain a node following the configured disruption schedule. New pods will be deployed on other available nodes. This approach ensures Kubernetes schedules workloads in an optimal way while controlling the disruption based on the PDB configuration.

https://blog.knoldus.com/how-to-avoid-outages-in-your-kubernetes-cluster-using-pdb/
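
A percentage-based PodDisruptionBudget of the kind option A refers to might look like the sketch below; the selector label is a placeholder for whatever labels the Deployment's Pods actually use:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: "80%"        # keep at least 80% of the replicas up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app            # hypothetical Pod label from the Deployment
```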

Before promoting your new application code to production, you want to conduct testing across a variety of different users. Although this plan is risky, you want to test the new version of the application with production users and you want to control which users are forwarded to the new version of the application based on their operating system. If bugs are discovered in the new version, you want to roll back the newly deployed version of the application as quickly as possible.

What should you do?

A. Deploy your application on Cloud Run. Use traffic splitting to direct a subset of user traffic to the new version based on the revision tag.
B. Deploy your application on Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct a subset of user traffic to the new version based on the user-agent header.
C. Deploy your application on App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address.
D. Deploy your application on Compute Engine. Use Traffic Director to direct a subset of user traffic to the new version based on predefined weights.
Suggested answer: B
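
Anthos Service Mesh exposes the Istio traffic-management APIs, so the header-based split in option B can be sketched with a VirtualService like the one below; the host, subsets, and user-agent regex are illustrative assumptions, not values from the question:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-routing
spec:
  hosts:
  - app.prod.svc.cluster.local          # hypothetical in-mesh service
  http:
  - match:
    - headers:
        user-agent:
          regex: '.*Android.*'          # route users on a chosen operating system to the new version
    route:
    - destination:
        host: app.prod.svc.cluster.local
        subset: v2                      # new version, defined in a DestinationRule
  - route:
    - destination:
        host: app.prod.svc.cluster.local
        subset: v1                      # everyone else stays on the current version
```

Rolling back is then a matter of deleting the match rule (or pointing it back at v1), which takes effect without redeploying the application.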

Your team is writing a backend application to implement the business logic for an interactive voice response (IVR) system that will support a payroll application. The IVR system has the following technical characteristics:

* Each customer phone call is associated with a unique IVR session.

* The IVR system creates a separate persistent gRPC connection to the backend for each session.

* If the connection is interrupted, the IVR system establishes a new connection, causing a slight latency for that call.

You need to determine which compute environment should be used to deploy the backend application. Using current call data, you determine that:

* Call duration ranges from 1 to 30 minutes.

* Calls are typically made during business hours.

* There are significant spikes of calls around certain known dates (e.g., pay days), or when large payroll changes occur.

You want to minimize cost, effort, and operational overhead. Where should you deploy the backend application?

A. Compute Engine
B. Google Kubernetes Engine cluster in Standard mode
C. Cloud Functions
D. Cloud Run
Suggested answer: D

Explanation:

The Cloud Run documentation covers using gRPC to connect a Cloud Run service with other services, for example, to provide simple, high-performance communication between internal microservices. You can use all gRPC types, streaming or unary, with Cloud Run.

Possible use cases include:

Communication between internal microservices.

High loads of data (gRPC uses protocol buffers, which are up to seven times faster than REST calls).

Only a simple service definition is needed, and you don't want to write a full client library.

Use streaming gRPCs in your gRPC server to build more responsive applications and APIs.

https://cloud.google.com/run/docs/tutorials/secure-services#:~:text=The%20backend%20service%20is%20private,Google%20Cloud%20except%20where%20necessary.
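
If the backend were deployed to Cloud Run, streaming gRPC needs end-to-end HTTP/2, which the service YAML (applied with gcloud run services replace) enables by naming the container port h2c. A sketch with placeholder names and an assumed request timeout sized for 30-minute calls:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ivr-backend                     # hypothetical service name
spec:
  template:
    spec:
      timeoutSeconds: 1800              # allow sessions up to the 30-minute maximum call length
      containers:
      - image: us-docker.pkg.dev/example-project/images/ivr-backend:latest   # placeholder image
        ports:
        - name: h2c                     # h2c = end-to-end HTTP/2, required for streaming gRPC
          containerPort: 8080
```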

You are developing an application hosted on Google Cloud that uses a MySQL relational database schema. The application will have a large volume of reads and writes to the database and will require backups and ongoing capacity planning. Your team does not have time to fully manage the database but can take on small administrative tasks. How should you host the database?

A. Configure Cloud SQL to host the database, and import the schema into Cloud SQL.
B. Deploy MySQL from the Google Cloud Marketplace to the database using a client, and import the schema.
C. Configure Bigtable to host the database, and import the data into Bigtable.
D. Configure Cloud Spanner to host the database, and import the schema into Cloud Spanner.
E. Configure Firestore to host the database, and import the data into Firestore.
Suggested answer: A

Explanation:

https://cloud.google.com/spanner/docs/migrating-mysql-to-spanner#migration-process

Cloud SQL: Cloud SQL is a web service that allows you to create, configure, and use relational databases that live in Google's cloud. It is a fully-managed service that maintains, manages, and administers your databases, allowing you to focus on your applications and services.

https://cloud.google.com/sql/docs/mysql Cloud SQL for MySQL is a fully-managed database service that helps you set up, maintain, manage, and administer your MySQL relational databases on Google Cloud Platform.

You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud run deploy. What should you do next?

A. Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event is published to the topic.
B. Create a build trigger that runs the build file in response to code being pushed to the development branch of the repository.
C. Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL.
D. Create a Cron job that runs the following command every 24 hours: gcloud builds submit.
Suggested answer: B

Explanation:

https://cloud.google.com/build/docs/triggers

Cloud Build uses build triggers to enable CI/CD automation. You can configure triggers to listen for incoming events, such as when a new commit is pushed to a repository or when a pull request is initiated, and then automatically execute a build when new events come in. You can also configure triggers to build code on any changes to your source repository or only on changes that match certain criteria.
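
A build file of the kind the question describes might look roughly like this; the Artifact Registry path, service name, and region are placeholders. The trigger from option B simply runs it on every push to the development branch:

```yaml
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/apps/web-app:$SHORT_SHA', '.']
# Push the image to Artifact Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-docker.pkg.dev/$PROJECT_ID/apps/web-app:$SHORT_SHA']
# Deploy the new image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'web-app',
         '--image', 'us-docker.pkg.dev/$PROJECT_ID/apps/web-app:$SHORT_SHA',
         '--region', 'us-central1']
images:
- 'us-docker.pkg.dev/$PROJECT_ID/apps/web-app:$SHORT_SHA'
```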

Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the legacy service in a way that is resilient and requires the fewest number of steps. You also want to be able to run probe-based health checks on the legacy service on a separate port. How should you set up the connection?

A. Use Traffic Director with a sidecar proxy to connect the application to the service.
B. Use a proxyless Traffic Director configuration to connect the application to the service.
C. Configure the legacy service's firewall to allow health checks originating from the proxy.
D. Configure the legacy service's firewall to allow health checks originating from the application.
E. Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.
Suggested answer: A, C

Explanation:

https://cloud.google.com/traffic-director/docs/advanced-setup#routing-rule-maps https://cloud.google.com/traffic-director/docs/advanced-setup

A. Using Traffic Director with a sidecar proxy can provide resilience for your application by allowing for failover to the secondary region in the event of an outage. The sidecar proxy can route traffic to the legacy service in either of the two GKE clusters, ensuring high availability.

C. Configuring the legacy service's firewall to allow health checks originating from the proxy allows the proxy to periodically check the health of the legacy service and ensure that it is functioning properly. This helps to ensure that traffic is only routed to healthy instances of the legacy service, further improving the resilience of the setup.

You work for a financial services company that has a container-first approach. Your team develops microservices applications. You have a Cloud Build pipeline that creates a container image, runs regression tests, and publishes the image to Artifact Registry. You need to ensure that only containers that have passed the regression tests are deployed to Google Kubernetes Engine (GKE) clusters. You have already enabled Binary Authorization on the GKE clusters. What should you do next?

A. Deploy Voucher Server and Voucher Client components. After a container image has passed the regression tests, run Voucher Client as a step in the Cloud Build pipeline.
B. Set the Pod Security Standard level to Restricted for the relevant namespaces. Digitally sign the container images that have passed the regression tests as a step in the Cloud Build pipeline.
C. Create an attestor and a policy. Create an attestation for the container images that have passed the regression tests as a step in the Cloud Build pipeline.
D. Create an attestor and a policy. Run a vulnerability scan to create an attestation for the container image as a step in the Cloud Build pipeline.
Suggested answer: C
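
For context, a Binary Authorization policy that requires an attestation (option C) is managed as YAML via gcloud container binauthz policy import; the project and attestor names below are placeholders:

```yaml
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION            # only attested images may be deployed
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/example-project/attestors/regression-tests-passed   # hypothetical attestor
```

After the regression tests pass, the Cloud Build pipeline creates an attestation for the image digest, and only images carrying that attestation are admitted to the GKE clusters.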

You have an ecommerce application hosted in Google Kubernetes Engine (GKE) that receives external requests and forwards them to third-party APIs external to Google Cloud. The third-party APIs are responsible for credit card processing, shipping, and inventory management using the process shown in the diagram.

Your customers are reporting that the ecommerce application is running slowly at unpredictable times. The application doesn't report any metrics. You need to determine the cause of the inconsistent performance. What should you do?

A. Install the Ops Agent inside your container and configure it to gather application metrics.
B. Install the OpenTelemetry library for your respective language, and instrument your application.
C. Modify your application to read and forward the X-Cloud-Trace-Context header when it calls the downstream services.
D. Enable Managed Service for Prometheus on the GKE cluster to gather application metrics.
Suggested answer: B