ExamGecko

Google Professional Cloud Architect Practice Test - Questions Answers, Page 12


Question 111


You need to evaluate your team's readiness for a new GCP project. You must perform the evaluation and create a skills gap plan that incorporates the business goal of cost optimization. Your team has successfully deployed two GCP projects to date. What should you do?

A. Allocate budget for team training. Set a deadline for the new GCP project.
B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project.
D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google Cloud certification based on job role.
Suggested answer: A

Question 112


You are designing an application for use only during business hours. For the minimum viable product release, you'd like to use a managed product that automatically "scales to zero" so you don't incur costs when there is no activity. Which primary compute resource should you choose?

A. Cloud Functions
B. Compute Engine
C. Google Kubernetes Engine
D. App Engine flexible environment
Suggested answer: A
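For context, a minimal HTTP-triggered Cloud Function might look like the sketch below (function name is hypothetical; when deployed with the Functions Framework, `request` is a Flask request object supplied by the platform). Because Cloud Functions bills per invocation and scales to zero, nothing is charged while no requests arrive.

```python
# Hypothetical function name; when deployed, the platform passes a Flask
# request object. Cloud Functions scales to zero, so nothing is billed
# while no requests arrive.
def handle_request(request):
    name = "world"
    # Guard so the function can also be smoke-tested locally without Flask.
    if request is not None and hasattr(request, "args"):
        name = request.args.get("name", "world")
    return f"Hello, {name}!"

# Local smoke test without the framework:
print(handle_request(None))  # Hello, world!
```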

Question 113


You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do?

A. Create the Key object for each Entity and run a batch get operation
B. Create the Key object for each Entity and run multiple get operations, one operation for each entity
C. Use the identifiers to create a query filter and run a batch query operation
D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
Suggested answer: A

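To see why a single batch get minimizes operations, consider the toy client below. It is illustrative only — not the real `google.cloud.datastore` API, which exposes `Client.get_multi(keys)` for exactly this purpose — and it simply counts round trips to the service.

```python
# Illustrative only: a toy client that counts round trips, to show why one
# batch get is cheaper than N single gets. The real API is
# google.cloud.datastore.Client.get_multi(keys).
class ToyDatastoreClient:
    def __init__(self, data):
        self.data = data          # maps key -> entity
        self.round_trips = 0      # each RPC to the service

    def get(self, key):
        self.round_trips += 1     # one RPC per key
        return self.data.get(key)

    def get_multi(self, keys):
        self.round_trips += 1     # one RPC fetches every key
        return [self.data[k] for k in keys if k in self.data]

client = ToyDatastoreClient({"k1": {"a": 1}, "k2": {"a": 2}, "k3": {"a": 3}})
for k in ("k1", "k2", "k3"):
    client.get(k)
assert client.round_trips == 3    # one RPC per entity

batch = ToyDatastoreClient({"k1": {"a": 1}, "k2": {"a": 2}, "k3": {"a": 3}})
batch.get_multi(["k1", "k2", "k3"])
assert batch.round_trips == 1     # all entities in a single RPC
```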


Question 114


You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?

A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
Suggested answer: A
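A hedged sketch of option A: the customer-supplied encryption key (CSEK) goes in the `[GSUtil]` section of the `.boto` file; the key value below is a placeholder, not a real key.

```ini
# Hypothetical .boto fragment: the base64-encoded AES-256 key gsutil should
# use as a customer-supplied encryption key (CSEK) for uploads.
[GSUtil]
encryption_key = <base64-encoded-256-bit-key>
```

With this in place, a plain `gsutil cp` upload is encrypted with that key on Cloud Storage.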

Question 115


Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs?

A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.
D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
Suggested answer: A

Explanation:

Reference: https://cloud.google.com/solutions/data-lifecycle-cloud-platform
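If the KPIs are stored as time series in Bigtable, a common (unofficial) row-key pattern promotes the metric name and server id into the key and reverses the timestamp so the newest samples sort first. A sketch with hypothetical names:

```python
# Illustrative Bigtable row-key design for time-series KPIs (a common
# pattern, not an official API): metric name and server id are promoted
# into the key, and the timestamp is reversed so newer rows sort first.
MAX_TS = 10**13 - 1  # hypothetical upper bound on millisecond timestamps

def row_key(metric, server_id, ts_millis):
    reversed_ts = MAX_TS - ts_millis
    return f"{metric}#{server_id}#{reversed_ts:013d}"

k_new = row_key("players_online", "server-7", 1_700_000_001_000)
k_old = row_key("players_online", "server-7", 1_700_000_000_000)
assert k_new < k_old  # the newer sample sorts first lexicographically
```

This makes "latest KPIs per metric" a cheap prefix scan, which is what keeps dashboard reads low-latency.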


Question 116


You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take?

A. Perform the following: 1) Create a managed instance group with f1-micro type machines. 2) Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3) Restart the instances to automatically deploy new production releases.
B. Perform the following: 1) Create a managed instance group with n1-standard-1 type machines. 2) Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3) Rebuild the Compute Engine image, and update the instance template to deploy new production releases.
C. Perform the following: 1) Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2) Build a Docker image from the production branch with all of the dependencies, and tag it with the 3) Create a Kubernetes Deployment with the imagePullPolicy set to ''IfNotPresent'' in the staging namespace, and then promote it to the production namespace after testing.
D. Perform the following: 1) Create a Kubernetes Engine (GKE) cluster with n1-standard-4 type machines. 2) Build a Docker image from the master branch with all of the dependencies, and tag it with ''latest''. 3) Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to ''Always''. Restart the pods to automatically deploy new production releases.
Suggested answer: B
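For the GKE options, the decisive details look roughly like the manifest fragment below (image name, project, and labels are hypothetical). Pinning a version tag with `imagePullPolicy: IfNotPresent` avoids the stale-''latest'' problem in option D, and the resource requests let the scheduler pack pods densely to maximize utilization.

```yaml
# Hypothetical Deployment sketch: a pinned image tag plus exact resource
# requests (0.1 CPU, 128 MB) for dense, predictable scheduling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-web
  namespace: staging
spec:
  replicas: 3
  selector:
    matchLabels:
      app: python-web
  template:
    metadata:
      labels:
        app: python-web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/python-web:v1.2.3   # version tag, not "latest"
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 100m        # 0.1 CPU cores
            memory: 128Mi
```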

Question 117


Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity management. What should you do?

A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider.
D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.
Suggested answer: B

Question 118


You are running a cluster on Kubernetes Engine (GKE) to serve a web application. Users are reporting that a specific part of the application is not responding anymore. You notice that all pods of your deployment keep restarting after 2 seconds.

The application writes logs to standard output. You want to inspect the logs to find the cause of the issue. Which approach can you take?

A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster.
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application.
C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs.
D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.
Suggested answer: B
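A Stackdriver (now Cloud Logging) filter for option B might look like the fragment below. Cluster, namespace, and container names are hypothetical; note that older GKE versions exposed these logs under the `container`/`gke_container` resource types rather than `k8s_container`.

```
resource.type="k8s_container"
resource.labels.cluster_name="my-cluster"
resource.labels.namespace_name="default"
resource.labels.container_name="web-frontend"
severity>=ERROR
```

This narrows the view to stdout/stderr of the one container that keeps crashing, instead of sifting through node-level instance logs.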

Question 119


You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do?

A. Create a read replica instance in a different region
B. Create a failover replica instance in a different region
C. Create a read replica instance in the same region, but in a different zone
D. Create a failover replica instance in the same region, but in a different zone
Suggested answer: D
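A command sketch of option D, with hypothetical instance names. The `--replica-type=FAILOVER` flag belongs to the older MySQL failover-replica model this question targets; newer Cloud SQL instances get HA at creation time with `--availability-type=REGIONAL` instead.

```shell
# Hypothetical names: app-db is the existing primary. Cloud SQL places the
# failover replica in a different zone of the same region.
gcloud sql instances create app-db-failover \
    --master-instance-name=app-db \
    --replica-type=FAILOVER
```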

Question 120


Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application's performance. What should you do?

A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template.
B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image.
C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
D. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed instance group from the custom image.
Suggested answer: C
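Option C as a command sketch, with hypothetical resource names and zone (not run here). The image captures the disk, the template references the image, and the managed instance group autoscales on load so peak-hour traffic adds instances automatically.

```shell
# 1) Custom image from the existing disk (hypothetical names throughout).
gcloud compute images create app-image \
    --source-disk=app-disk --source-disk-zone=us-central1-a

# 2) Instance template from the custom image.
gcloud compute instance-templates create app-template \
    --image=app-image --machine-type=n1-standard-1

# 3) Autoscaled managed instance group from the template.
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=1 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling app-mig \
    --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6
```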
Total 285 questions