Google Professional Cloud Developer Practice Test - Questions Answers, Page 22

You recently joined a new team that has a Cloud Spanner database instance running in production. Your manager has asked you to optimize the Spanner instance to reduce cost while maintaining high reliability and availability of the database. What should you do?

A. Use Cloud Logging to check for error logs, and reduce Spanner processing units by small increments until you find the minimum capacity required.
B. Use Cloud Trace to monitor the requests per second of incoming requests to Spanner, and reduce Spanner processing units by small increments until you find the minimum capacity required.
C. Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing units by small increments until you find the minimum capacity required.
D. Use Snapshot Debugger to check for application errors, and reduce Spanner processing units by small increments until you find the minimum capacity required.
Suggested answer: C

Explanation:

https://cloud.google.com/spanner/docs/compute-capacity#increasing_and_decreasing_compute_capacity
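
For illustration only (not part of the exam material), a minimal Go sketch of lowering compute capacity in a small increment with the Spanner instance admin API, assuming a recent version of the Go client library; the project and instance names are placeholders, and in practice you would watch CPU utilization in Cloud Monitoring before each further reduction:

```go
package main

import (
	"context"
	"log"

	instance "cloud.google.com/go/spanner/admin/instance/apiv1"
	"cloud.google.com/go/spanner/admin/instance/apiv1/instancepb"
	"google.golang.org/protobuf/types/known/fieldmaskpb"
)

func main() {
	ctx := context.Background()

	// Placeholder instance path; replace with your own project and instance.
	const name = "projects/my-project/instances/my-instance"

	admin, err := instance.NewInstanceAdminClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer admin.Close()

	// Lower compute capacity by a small step (for example from 600 to 500 processing units).
	op, err := admin.UpdateInstance(ctx, &instancepb.UpdateInstanceRequest{
		Instance: &instancepb.Instance{
			Name:            name,
			ProcessingUnits: 500,
		},
		FieldMask: &fieldmaskpb.FieldMask{Paths: []string{"processing_units"}},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := op.Wait(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("processing units updated; check CPU utilization before reducing further")
}
```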

You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You want to determine which Go functions consume the largest amount of CPU. What should you do?

A. Deploy a Fluent Bit DaemonSet on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code's performance.
B. Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application.
C. Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application.
D. Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.
Suggested answer: D

Explanation:

https://cloud.google.com/profiler/docs/about-profiler

Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production applications. It attributes that information to the source code that generated it, helping you identify the parts of your application that are consuming the most resources and illuminating your application's performance characteristics.

https://cloud.google.com/profiler/docs
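
As a minimal sketch, enabling Profiler in a Go service is typically a single call at startup; the service name and version below are placeholders:

```go
package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	// Start the Cloud Profiler agent; it samples CPU and heap profiles
	// in the background and uploads them for analysis as flame graphs.
	if err := profiler.Start(profiler.Config{
		Service:        "my-go-service", // placeholder service name
		ServiceVersion: "1.0.0",         // placeholder version
	}); err != nil {
		log.Fatalf("failed to start profiler: %v", err)
	}

	// ... rest of the application ...
}
```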

Your team manages a Google Kubernetes Engine (GKE) cluster where an application is running. A different team is planning to integrate with this application. Before they start the integration, you need to ensure that the other team cannot make changes to your application, but they can deploy the integration on GKE. What should you do?

A. Using Identity and Access Management (IAM), grant the Viewer IAM role on the cluster project to the other team.
B. Create a new GKE cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.
C. Create a new namespace in the existing cluster. Using Identity and Access Management (IAM), grant the Editor role on the cluster project to the other team.
D. Create a new namespace in the existing cluster. Using Kubernetes role-based access control (RBAC), grant the Admin role on the new namespace to the other team.
Suggested answer: D

You have recently instrumented a new application with OpenTelemetry, and you want to check the latency of your application requests in Trace. You want to ensure that a specific request is always traced. What should you do?

A. Wait 10 minutes, then verify that Trace captures those types of requests automatically.
B. Write a custom script that sends this type of request repeatedly from your dev project.
C. Use the Trace API to apply custom attributes to the trace.
D. Add the X-Cloud-Trace-Context header to the request with the appropriate parameters.
Suggested answer: D

Explanation:

https://cloud.google.com/trace/docs/setup#force-trace

Cloud Trace doesn't sample every request. To force a specific request to be traced, add an X-Cloud-Trace-Context header to the request.
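
A short Go sketch of forcing a trace on an outgoing request; the header value has the form TRACE_ID/SPAN_ID;o=1, where o=1 asks Cloud Trace to trace the request. The URL and IDs below are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com/api/orders", nil) // placeholder URL
	if err != nil {
		log.Fatal(err)
	}

	// TRACE_ID is a 32-character hex string, SPAN_ID is a decimal span ID,
	// and o=1 forces the request to be traced.
	traceID := "105445aa7843bc8bf206b12000100000" // placeholder trace ID
	spanID := uint64(1)
	req.Header.Set("X-Cloud-Trace-Context", fmt.Sprintf("%s/%d;o=1", traceID, spanID))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```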

You are trying to connect to your Google Kubernetes Engine (GKE) cluster using kubectl from Cloud Shell. You have deployed your GKE cluster with a public endpoint. From Cloud Shell, you run the following command:

You notice that the kubectl commands time out without returning an error message. What is the most likely cause of this issue?

A. Your user account does not have privileges to interact with the cluster using kubectl.
B. Your Cloud Shell external IP address is not part of the authorized networks of the cluster.
C. The Cloud Shell is not part of the same VPC as the GKE cluster.
D. A VPC firewall is blocking access to the cluster's endpoint.
Suggested answer: B

Explanation:

https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cloud_shell

If you want to use Cloud Shell to access the cluster, you must add the public IP address of your Cloud Shell to the cluster's list of authorized networks.

You are developing a web application that contains private images and videos stored in a Cloud Storage bucket. Your users are anonymous and do not have Google Accounts. You want to use your application-specific logic to control access to the images and videos. How should you configure access?

A. Cache each web application user's IP address to create a named IP table using Google Cloud Armor. Create a Google Cloud Armor security policy that allows users to access the backend bucket.
B. Grant the Storage Object Viewer IAM role to allUsers. Allow users to access the bucket after authenticating through your web application.
C. Configure Identity-Aware Proxy (IAP) to authenticate users into the web application. Allow users to access the bucket after authenticating through IAP.
D. Generate a signed URL that grants read access to the bucket. Allow users to access the URL after authenticating through your web application.
Suggested answer: D

Explanation:

https://cloud.google.com/storage/docs/access-control/signed-urls#should-you-use

In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage, but you still want to control access using your application-specific logic. The typical way to address this use case is to provide a signed URL to a user, which gives the user read, write, or delete access to that resource for a limited time. You specify an expiration time when you create the signed URL. Anyone who knows the URL can access the resource until the expiration time for the URL is reached or the key used to sign the URL is rotated.
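
As an illustrative sketch (assuming a recent version of the Go client library and credentials the library can sign with), a backend could mint a short-lived V4 signed URL after its own authorization logic approves the request; the bucket and object names are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Grant read access to a single object for 15 minutes.
	url, err := client.Bucket("my-private-media").SignedURL("videos/clip.mp4", &storage.SignedURLOptions{
		Scheme:  storage.SigningSchemeV4,
		Method:  "GET",
		Expires: time.Now().Add(15 * time.Minute),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Return this URL to the user once your application-specific checks pass.
	fmt.Println(url)
}
```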

You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to include a check that verifies that the containers can connect to the database. If the Pod is failing to connect, you want a script on the container to run to complete a graceful shutdown. How should you configure the Deployment?

A. Create two jobs: one that checks whether the container can connect to the database, and another that runs the shutdown script if the Pod is failing.
B. Create the Deployment with a livenessProbe for the container that will fail if the container can't connect to the database. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.
C. Create the Deployment with a PostStart lifecycle handler that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the container is failing.
D. Create the Deployment with an initContainer that checks the service availability. Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing.
Suggested answer: B

Explanation:

https://cloud.google.com/architecture/best-practices-for-running-cost-effective-kubernetes-applications-on-gke#make_sure_your_applications_are_shutting_down_in_accordance_with_kubernetes_expectations
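
The linked best practice comes down to honoring SIGTERM; a minimal Go sketch, not part of the question, of a server that drains in-flight requests when Kubernetes stops the Pod (after any PreStop hook has run):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Kubernetes sends SIGTERM when the Pod is being stopped,
	// after the PreStop lifecycle handler has completed.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Give in-flight requests up to 10 seconds to finish.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown error: %v", err)
	}
}
```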

You are responsible for deploying a new API. That API will have three different URL paths:

* https://yourcompany.com/students

* https://yourcompany.com/teachers

* https://yourcompany.com/classes

You need to configure each API URL path to invoke a different function in your code. What should you do?

A. Create one Cloud Function as a backend service exposed using an HTTPS load balancer.
B. Create three Cloud Functions exposed directly.
C. Create one Cloud Function exposed directly.
D. Create three Cloud Functions as three backend services exposed using an HTTPS load balancer.
Suggested answer: D

Explanation:

https://cloud.google.com/load-balancing/docs/https/setup-global-ext-https-serverless

You work for an organization that manages an online ecommerce website. Your company plans to expand across the world; however, the online store currently serves one specific region. You need to select a SQL database and configure a schema that will scale as your organization grows. You want to create a table that stores all customer transactions and ensure that the customer (CustomerId) and the transaction (TransactionId) are unique. What should you do?

A. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId.
B. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId.
C. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId.
D. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId.
Suggested answer: C
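
Explanation:

Cloud Spanner scales horizontally and globally, which Cloud SQL does not, and it performs best when primary keys are not monotonically increasing, because sequential keys concentrate writes on a single split. A hedged Go sketch of inserting a transaction row with a random UUID TransactionId; the database path and the extra CreatedAt column are illustrative additions, not part of the question:

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/spanner"
	"github.com/google/uuid"
)

func main() {
	ctx := context.Background()

	// Placeholder database path.
	client, err := spanner.NewClient(ctx, "projects/my-project/instances/my-instance/databases/store")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A random UUID keeps writes spread across Spanner splits,
	// unlike a monotonically increasing transaction number.
	m := spanner.InsertMap("Transactions", map[string]interface{}{
		"CustomerId":    "customer-123",      // placeholder customer key
		"TransactionId": uuid.New().String(), // random UUIDv4
		"CreatedAt":     time.Now(),          // illustrative column
	})

	if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
		log.Fatal(err)
	}
}
```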

You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which source code is consuming the most CPU and memory resources. What should you do?

A. Download, install, and start the Snapshot Debugger agent in your VM. Take debug snapshots of the functions that take the longest time. Review the call stack frame, and identify the local variables at that level in the stack.
B. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions.
C. Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your application on the Trace overview page, and identify where bottlenecks are occurring.
D. Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.
Suggested answer: B