
Google Professional Cloud Developer Practice Test - Questions Answers, Page 15


You manage an application that runs in a Compute Engine instance. You also have multiple backend services executing in stand-alone Docker containers running in Compute Engine instances. The Compute Engine instances supporting the backend services are scaled by managed instance groups in multiple regions. You want your calling application to be loosely coupled. You need to be able to invoke distinct service implementations that are chosen based on the value of an HTTP header found in the request. Which Google Cloud feature should you use to invoke the backend services?

A. Traffic Director
B. Service Directory
C. Anthos Service Mesh
D. Internal HTTP(S) Load Balancing
Suggested answer: D
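The header-based routing in answer D is configured through route rules in the load balancer's URL map. A minimal sketch, assuming hypothetical backend service names, region, and header name:

```yaml
# Hypothetical URL map fragment for a regional internal HTTP(S) load balancer.
# Requests carrying "X-Service-Version: v2" go to backend-v2; everything else
# falls through to backend-default.
defaultService: regions/us-central1/backendServices/backend-default
hostRules:
- hosts: ['*']
  pathMatcher: matcher-1
pathMatchers:
- name: matcher-1
  defaultService: regions/us-central1/backendServices/backend-default
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
      headerMatches:
      - headerName: X-Service-Version
        exactMatch: v2
    service: regions/us-central1/backendServices/backend-v2
```

Because routing happens in the URL map rather than in the calling application, the caller stays loosely coupled from the individual service implementations.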

Your team is developing an ecommerce platform for your company. Users will log in to the website and add items to their shopping cart. Users will be automatically logged out after 30 minutes of inactivity. When users log back in, their shopping cart should be saved. How should you store users' session and shopping cart information while following Google-recommended best practices?

A. Store the session information in Pub/Sub, and store the shopping cart information in Cloud SQL.
B. Store the shopping cart information in a file on Cloud Storage where the filename is the SESSION ID.
C. Store the session and shopping cart information in a MySQL database running on multiple Compute Engine instances.
D. Store the session information in Memorystore for Redis or Memorystore for Memcached, and store the shopping cart information in Firestore.
Suggested answer: D
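A sketch of answer D's split, assuming the redis-py and google-cloud-firestore client libraries; the helper names, key scheme, and collection name are illustrative, not a prescribed API:

```python
# Session data lives in Memorystore for Redis with a 30-minute TTL, so the
# session expires on its own after the inactivity window. The cart lives in
# Firestore keyed by user ID, so it survives logout and is there on re-login.
import json

SESSION_TTL_SECONDS = 30 * 60  # auto-logout after 30 minutes of inactivity

def save_session(redis_client, session_id, data):
    # SETEX writes the value and its TTL atomically; the key disappears
    # once the user has been inactive for 30 minutes.
    redis_client.setex(f"session:{session_id}",
                       SESSION_TTL_SECONDS,
                       json.dumps(data))

def save_cart(firestore_client, user_id, items):
    # Keyed by user ID, not session ID, so the cart outlives the session.
    firestore_client.collection("carts").document(user_id).set({"items": items})
```

Storing the cart under the user ID rather than the session ID is what makes it reappear after the session has expired.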

You are designing a resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster. You need to ensure that all applications can access the resources needed to run. What should you do? (Choose two.)

A. Specify the resource limits and requests in the object specifications.
B. Create a namespace for each team, and attach resource quotas to each namespace.
C. Create a LimitRange to specify the default compute resource requirements for each namespace.
D. Create a Kubernetes service account (KSA) for each application, and assign each KSA to the namespace.
E. Use the Anthos Policy Controller to enforce label annotations on all namespaces. Use taints and tolerations to allow resource sharing for namespaces.
Suggested answer: B, C

Explanation:

https://kubernetes.io/docs/concepts/policy/resource-quotas/

https://kubernetes.io/docs/concepts/policy/limit-range/

https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits
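A minimal sketch of answers B and C together, assuming a hypothetical team-a namespace and illustrative resource figures:

```yaml
# Caps the total compute a team's namespace can request (answer B)...
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# ...and supplies defaults for containers that omit requests/limits (answer C),
# so every workload counts against the quota even if its spec says nothing.
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
spec:
  limits:
  - type: Container
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 250m
      memory: 256Mi
```

The LimitRange matters because a namespace with a quota rejects Pods that do not declare requests and limits; the defaults keep such Pods schedulable.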

You are developing a new application that has the following design requirements:

Creation and changes to the application infrastructure are versioned and auditable.

The application and deployment infrastructure uses Google-managed services as much as possible.

The application runs on a serverless compute platform.

How should you design the application's architecture?

A. 1. Store the application and infrastructure source code in a Git repository. 2. Use Cloud Build to deploy the application infrastructure with Terraform. 3. Deploy the application to a Cloud Function as a pipeline step.
B. 1. Deploy Jenkins from the Google Cloud Marketplace, and define a continuous integration pipeline in Jenkins. 2. Configure a pipeline step to pull the application source code from a Git repository. 3. Deploy the application source code to App Engine as a pipeline step.
C. 1. Create a continuous integration pipeline on Cloud Build, and configure the pipeline to deploy the application infrastructure using Deployment Manager templates. 2. Configure a pipeline step to create a container with the latest application source code. 3. Deploy the container to a Compute Engine instance as a pipeline step.
D. 1. Deploy the application infrastructure using gcloud commands. 2. Use Cloud Build to define a continuous integration pipeline for changes to the application source code. 3. Configure a pipeline step to pull the application source code from a Git repository, and create a containerized application. 4. Deploy the new container on Cloud Run as a pipeline step.
Suggested answer: D
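The Cloud Build pipeline in answer D could be expressed as a cloudbuild.yaml along these lines; image name, service name, and region are hypothetical:

```yaml
# Build the container from the checked-out source, push it, and deploy the
# new revision to Cloud Run — each commit produces a versioned, auditable image.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'my-app',
         '--image', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA',
         '--region', 'us-central1']
images:
- 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```

Tagging images with `$COMMIT_SHA` ties every deployed revision back to a specific, auditable commit in the Git repository.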

You are creating and running containers across different projects in Google Cloud. The application you are developing needs to access Google Cloud services from within Google Kubernetes Engine (GKE).

What should you do?

A. Assign a Google service account to the GKE nodes.
B. Use a Google service account to run the Pod with Workload Identity.
C. Store the Google service account credentials as a Kubernetes Secret.
D. Use a Google service account with GKE role-based access control (RBAC).
Suggested answer: B

Explanation:

https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
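Wiring a Kubernetes service account to a Google service account with Workload Identity looks roughly like this; the project, GSA, and KSA names are hypothetical, and the cluster must have Workload Identity enabled:

```shell
# Allow the KSA to impersonate the GSA...
gcloud iam service-accounts add-iam-policy-binding \
  app-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/app-ksa]"

# ...and annotate the KSA so Pods running as it get the GSA's identity.
kubectl annotate serviceaccount app-ksa --namespace default \
  iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com
```

Pods that set `serviceAccountName: app-ksa` then obtain Google Cloud credentials automatically, with no exported keys stored in Secrets.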

You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do?

A. Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script.
B. Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an ENTRYPOINT script.
C. Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and start the service using an ENTRYPOINT script.
D. Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and start the service using an ENTRYPOINT script.
Suggested answer: B
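A sketch of answer B, with hypothetical names, mount path, and sizes:

```yaml
# Claim storage for the configuration files...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-config
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
---
# ...and mount it into the container. The image's ENTRYPOINT script checks
# that /etc/legacy is populated before starting the server, so the Pod never
# serves traffic without its configuration.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: app
    image: gcr.io/my-project/legacy-app:latest
    volumeMounts:
    - name: config
      mountPath: /etc/legacy
  volumes:
  - name: config
    persistentVolumeClaim:
      claimName: legacy-config
```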

Your team is developing a new application using a PostgreSQL database and Cloud Run. You are responsible for ensuring that all traffic is kept private on Google Cloud. You want to use managed services and follow Google-recommended best practices. What should you do?

A. 1. Enable Cloud SQL and Cloud Run in the same project. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to connect to Cloud SQL.
B. 1. Install PostgreSQL on a Compute Engine virtual machine (VM), and enable Cloud Run in the same project. 2. Configure a private IP address for the VM. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to connect to the VM hosting PostgreSQL.
C. 1. Use Cloud SQL and Cloud Run in different projects. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to connect to Cloud SQL.
D. 1. Install PostgreSQL on a Compute Engine VM, and enable Cloud Run in different projects. 2. Configure a private IP address for the VM. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to access the VM hosting PostgreSQL.
Suggested answer: A

Explanation:

https://cloud.google.com/sql/docs/postgres/connect-run#private-ip
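The steps in answer A map to commands roughly like the following; network names, IP ranges, region, and service/image names are hypothetical, and the Cloud SQL instance is assumed to already exist with a private IP only:

```shell
# Reserve a range and enable private services access for the VPC
# (this is what gives Cloud SQL its private IP).
gcloud compute addresses create google-managed-services-default \
  --global --purpose=VPC_PEERING --prefix-length=16 --network=default
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-default --network=default

# Create the Serverless VPC Access connector Cloud Run will route through.
gcloud compute networks vpc-access connectors create run-connector \
  --region=us-central1 --network=default --range=10.8.0.0/28

# Deploy Cloud Run attached to the connector; database traffic now stays
# on private IPs inside Google Cloud.
gcloud run deploy my-service --image=gcr.io/my-project/my-app \
  --vpc-connector=run-connector --region=us-central1
```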

You are developing an application that will allow clients to download a file from your website for a specific period of time. How should you design the application to complete this task while following Google-recommended best practices?

A. Configure the application to send the file to the client as an email attachment.
B. Generate and assign a Cloud Storage-signed URL for the file. Make the URL available for the client to download.
C. Create a temporary Cloud Storage bucket with time expiration specified, and give download permissions to the bucket. Copy the file, and send it to the client.
D. Generate the HTTP cookies with time expiration specified. If the time is valid, copy the file from the Cloud Storage bucket, and make the file available for the client to download.
Suggested answer: B
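A signed URL as in answer B can be generated from the command line; bucket, object, and key-file names here are hypothetical:

```shell
# Produce a URL that grants read access to this one object for 15 minutes,
# signed with the given service account key. After expiry the URL stops
# working — no bucket-level permissions are ever granted to the client.
gsutil signurl -d 15m service-account-key.json gs://my-bucket/report.pdf
```

The same can be done programmatically (for example with the Cloud Storage client libraries' signed-URL helpers), which suits an application issuing URLs on demand.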

Your development team has been asked to refactor an existing monolithic application into a set of composable microservices. Which design aspects should you implement for the new application? (Choose two.)

A. Develop the microservice code in the same programming language used by the microservice caller.
B. Create an API contract agreement between the microservice implementation and microservice caller.
C. Require asynchronous communications between all microservice implementations and microservice callers.
D. Ensure that sufficient instances of the microservice are running to accommodate the performance requirements.
E. Implement a versioning scheme to permit future changes that could be incompatible with the current interface.
Suggested answer: B, E
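Answers B and E often come together in practice as a versioned API contract. A hedged OpenAPI sketch, with hypothetical service and path names:

```yaml
# The contract pins what callers may rely on; the version in the path lets a
# breaking change ship as /v2 while /v1 keeps serving existing callers.
openapi: 3.0.3
info:
  title: orders-service
  version: "1.0.0"
paths:
  /v1/orders/{orderId}:
    get:
      parameters:
      - name: orderId
        in: path
        required: true
        schema:
          type: string
      responses:
        '200':
          description: The order resource.
```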

You deployed a new application to Google Kubernetes Engine and are experiencing some performance degradation. Your logs are being written to Cloud Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to correlate the metrics and data from the logs to troubleshoot the performance issue and send real-time alerts while minimizing costs. What should you do?

A. Create custom metrics from the Cloud Logging logs, and use Prometheus to import the results using the Cloud Monitoring REST API.
B. Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a query to join the results, and analyze in Google Data Studio.
C. Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a recurring query to join the results, and send notifications using Cloud Tasks.
D. Export the Prometheus metrics and use Cloud Monitoring to view them as external metrics. Configure Cloud Monitoring to create log-based metrics from the logs, and correlate them with the Prometheus data.
Suggested answer: D
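The log-based metric half of answer D can be created with a single command; the metric name and filter below are illustrative:

```shell
# Count ERROR-severity entries from GKE containers as a Cloud Monitoring
# metric, so it can be charted and alerted on alongside the Prometheus
# metrics imported as external metrics.
gcloud logging metrics create container_error_count \
  --description="ERROR-severity entries from GKE containers" \
  --log-filter='resource.type="k8s_container" AND severity>=ERROR'
```

Once both signals live in Cloud Monitoring, a single dashboard or alerting policy can correlate them in real time without a separate export pipeline.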
Total 265 questions