Google Professional Cloud Developer Practice Test - Questions Answers, Page 13

You want to create "fully baked" or "golden" Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?

A. Embed the appropriate database connection string in the image. Create a different image for each environment.
B. When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.
C. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, read the "DATABASE" environment variable, and use the value to connect to the appropriate database.
D. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, query the metadata server for the "DATABASE" value, and use the value to connect to the appropriate database.

Suggested answer: D
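The metadata-server lookup in option D can be sketched as follows. The helper names are illustrative, but the `metadata.google.internal` endpoint and the `Metadata-Flavor: Google` header are the standard Compute Engine metadata interface; the request itself only succeeds when run on an instance.

```python
import urllib.request

# Well-known, fixed address of the Compute Engine metadata server.
METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"

def metadata_url(key: str) -> str:
    """Build the URL for a custom instance metadata attribute."""
    return f"{METADATA_BASE}/instance/attributes/{key}"

def get_metadata(key: str) -> str:
    """Fetch a custom metadata value; only works on a Compute Engine instance."""
    req = urllib.request.Request(
        metadata_url(key),
        headers={"Metadata-Flavor": "Google"},  # required by the metadata server
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# On an instance created with, e.g., --metadata DATABASE=<connection string>:
# connection_string = get_metadata("DATABASE")
```

Because the connection string lives in instance metadata rather than in the image, the same golden image can be used unchanged across test, staging, and production.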

You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read and write to a Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to retrieve Spanner credentials?

A. Configure the appropriate service accounts, and use Workload Identity to run the pods.
B. Store the application credentials as Kubernetes Secrets, and expose them as environment variables.
C. Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database.
D. Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.

Suggested answer: A

Explanation:

https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
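With Workload Identity, the application code keeps using Application Default Credentials unchanged; only the deployment configuration links a Kubernetes service account to a Google service account. A minimal sketch, with hypothetical names (`app-ksa`, `app-gsa`, `my-project`):

```yaml
# Kubernetes ServiceAccount annotated to impersonate a Google service account.
# All names here are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: app-gsa@my-project.iam.gserviceaccount.com
```

The Google service account additionally needs a `roles/iam.workloadIdentityUser` binding for that Kubernetes service account plus an appropriate Spanner role (for example `roles/spanner.databaseUser`), and the pod spec must reference `app-ksa` via `serviceAccountName`. No long-lived keys are stored in the cluster.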

You are deploying your application on a Compute Engine instance that communicates with Cloud SQL. You will use Cloud SQL Proxy to allow your application to communicate to the database using the service account associated with the application's instance. You want to follow the Google-recommended best practice of providing minimum access for the role assigned to the service account. What should you do?

A. Assign the Project Editor role.
B. Assign the Project Owner role.
C. Assign the Cloud SQL Client role.
D. Assign the Cloud SQL Editor role.

Suggested answer: C

Your team develops stateless services that run on Google Kubernetes Engine (GKE). You need to deploy a new service that will only be accessed by other services running in the GKE cluster. The service will need to scale as quickly as possible to respond to changing load. What should you do?

A. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.
B. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a NodePort Service.
C. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.
D. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a NodePort Service.

Suggested answer: C

Explanation:

https://cloud.google.com/kubernetes-engine/docs/concepts/service
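The combination in answer C can be sketched as two Kubernetes objects: a ClusterIP Service for in-cluster-only access and a HorizontalPodAutoscaler for load-driven scaling. Object names, ports, and thresholds are illustrative, and a Deployment named `backend` is assumed to exist:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP          # reachable only from inside the cluster
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

A Vertical Pod Autoscaler resizes individual pods (often by restarting them), which reacts to load changes more slowly than adding replicas horizontally.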

You recently migrated a monolithic application to Google Cloud by breaking it down into microservices. One of the microservices is deployed using Cloud Functions. As you modernize the application, you make a change to the API of the service that is backward-incompatible. You need to support both existing callers who use the original API and new callers who use the new API. What should you do?

A. Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use a load balancer to distribute calls between the versions.
B. Leave the original Cloud Function as-is and deploy a second Cloud Function that includes only the changed API. Calls are automatically routed to the correct function.
C. Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use Cloud Endpoints to provide an API gateway that exposes a versioned API.
D. Re-deploy the Cloud Function after making code changes to support the new API. Requests for both versions of the API are fulfilled based on a version identifier included in the call.

Suggested answer: D
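The version-identifier routing in answer D can be sketched as a single function body that dispatches on a version segment in the request path. Handler names and payload shapes here are hypothetical, not part of any Cloud Functions API:

```python
# One function serving both API versions, dispatching on a version identifier
# taken from the request path. All names and payload fields are illustrative.

def handle_v1(payload: dict) -> dict:
    """Original behavior, kept for existing callers."""
    return {"version": 1, "result": payload.get("name")}

def handle_v2(payload: dict) -> dict:
    """New, backward-incompatible behavior."""
    return {"version": 2, "result": {"name": payload.get("name"),
                                     "id": payload.get("id")}}

ROUTES = {"v1": handle_v1, "v2": handle_v2}

def dispatch(path: str, payload: dict) -> dict:
    """Route '/v1/...' or '/v2/...' to the matching handler; default to v1."""
    segments = path.strip("/").split("/")
    version = segments[0] if segments and segments[0] else "v1"
    handler = ROUTES.get(version, handle_v1)  # unknown versions fall back to v1
    return handler(payload)
```

Existing callers hitting `/v1/...` (or an unversioned path) keep getting the old behavior, while new callers opt into `/v2/...`.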

Your company just experienced a Google Kubernetes Engine (GKE) API outage due to a zone failure. You want to deploy a highly available GKE architecture that minimizes service interruption to users in the event of a future zone failure. What should you do?

A. Deploy Zonal clusters
B. Deploy Regional clusters
C. Deploy Multi-Zone clusters
D. Deploy GKE on-premises clusters

Suggested answer: B

Explanation:

https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters#regional_clusters

A regional cluster has multiple replicas of the control plane, running in multiple zones within a given region. Nodes in a regional cluster can run in multiple zones or a single zone depending on the configured node locations. By default, GKE replicates each node pool across three zones of the control plane's region. When you create a cluster or when you add a new node pool, you can change the default configuration by specifying the zone(s) in which the cluster's nodes run. All zones must be within the same region as the control plane.
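As a sketch, a regional cluster is created by passing a region rather than a zone to `gcloud`; the cluster name and region below are illustrative:

```shell
# Create a regional GKE cluster: replicated control plane and, by default,
# nodes in three zones of the region. Name and region are illustrative.
gcloud container clusters create my-cluster \
    --region us-central1 \
    --num-nodes 1   # nodes per zone, so 3 nodes total by default
```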

Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and most simple solution. What should you do?

A. Process the messages with a Dataproc job, and write the output to storage.
B. Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.
C. Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.
D. Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.

Suggested answer: B

Explanation:

https://cloud.google.com/dataflow/docs/concepts/streaming-with-cloud-pubsub
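Pub/Sub delivers each message at least once, so redeliveries are possible; Dataflow's Pub/Sub integration removes those duplicates by message ID inside the pipeline. The idea behind that exactly-once behavior, sketched outside Beam with illustrative names (a real pipeline would use Apache Beam's `ReadFromPubSub` instead):

```python
# Minimal sketch of message-ID deduplication, the idea behind Dataflow's
# exactly-once processing of Pub/Sub messages. Names are illustrative.

def process_once(messages, store, seen_ids):
    """Apply each message to the store at most once, keyed by message ID."""
    for msg in messages:
        msg_id = msg["id"]
        if msg_id in seen_ids:   # redelivery: skip the duplicate
            continue
        seen_ids.add(msg_id)
        store.append(msg["data"])

store, seen = [], set()
# "m1" arrives twice, as Pub/Sub's at-least-once contract allows.
process_once([{"id": "m1", "data": "a"},
              {"id": "m2", "data": "b"},
              {"id": "m1", "data": "a"}], store, seen)
# store now holds each message's data exactly once
```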

You are running a containerized application on Google Kubernetes Engine. Your container images are stored in Container Registry. Your team uses CI/CD practices. You need to prevent the deployment of containers with known critical vulnerabilities. What should you do?

A.
* Use Web Security Scanner to automatically crawl your application
* Review your application logs for scan results, and provide an attestation that the container is free of known critical vulnerabilities
* Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed
B.
* Use Web Security Scanner to automatically crawl your application
* Review the scan results in the scan details page in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities
* Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed
C.
* Enable the Container Scanning API to perform vulnerability scanning
* Review vulnerability reporting in Container Registry in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities
* Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed
D.
* Enable the Container Scanning API to perform vulnerability scanning
* Programmatically review vulnerability reporting through the Container Scanning API, and provide an attestation that the container is free of known critical vulnerabilities
* Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed

Suggested answer: D

Explanation:

https://cloud.google.com/binary-authorization/docs/creating-attestations-kritis

https://cloud.google.com/container-analysis/docs/os-overview
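As a sketch of the programmatic review step in option D (the image name is illustrative, and the Container Scanning API must be enabled on the project):

```shell
# List vulnerability findings for an image stored in Container Registry.
gcloud beta container images describe \
    gcr.io/my-project/my-image:latest \
    --show-package-vulnerability
```

A CI/CD step can inspect this output for critical findings and only create the Binary Authorization attestation when none are present.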

You have an on-premises application that authenticates to the Cloud Storage API using a user-managed service account with a user-managed key. The application connects to Cloud Storage using Private Google Access over a Dedicated Interconnect link. You discover that requests from the application to access objects in the Cloud Storage bucket are failing with a 403 Permission Denied error code. What is the likely cause of this issue?

A. The folder structure inside the bucket and object paths have changed.
B. The permissions of the service account's predefined role have changed.
C. The service account key has been rotated but not updated on the application server.
D. The Interconnect link from the on-premises data center to Google Cloud is experiencing a temporary outage.

Suggested answer: C

You are using the Cloud Client Library to upload an image in your application to Cloud Storage. Users of the application report that occasionally the upload does not complete and the client library reports an HTTP 504 Gateway Timeout error. You want to make the application more resilient to errors. What changes to the application should you make?

A. Write an exponential backoff process around the client library call.
B. Write a one-second wait time backoff process around the client library call.
C. Design a retry button in the application and ask users to click if the error occurs.
D. Create a queue for the object and inform the users that the application will try again in 10 minutes.

Suggested answer: A
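A minimal sketch of the exponential-backoff pattern in answer A. The wrapper name, limits, and broad `except` are illustrative choices rather than the client library's own API (many Cloud Client Libraries also ship configurable built-in retries):

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Invoke call(); on failure, wait base_delay * 2**attempt plus jitter and retry."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise                           # out of retries: surface the error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay))  # jitter avoids retry bursts

# Usage sketch around a hypothetical upload call:
# with_backoff(lambda: bucket.blob("image.png").upload_from_filename("image.png"))
```

Doubling the wait between attempts gives transient failures such as HTTP 504 time to clear, while the cap and attempt limit keep a persistent outage from stalling the application indefinitely.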