ExamGecko

Google Professional Cloud Architect Practice Test - Questions Answers, Page 20

An application development team has come to you for advice. They are planning to write and deploy an HTTP(S) API using Go 1.12. The API will have a very unpredictable workload and must remain reliable during peaks in traffic. They want to minimize operational overhead for this application. Which approach should you recommend?

A. Develop the application with containers, and deploy to Google Kubernetes Engine.
B. Develop the application for App Engine standard environment.
C. Use a Managed Instance Group when deploying to Compute Engine.
D. Develop the application for App Engine flexible environment, using a custom runtime.
Suggested answer: B

Explanation:

App Engine standard has a first-class Go 1.12 runtime, scales automatically with spiky traffic (including to zero), and removes nearly all operational overhead, unlike GKE or Managed Instance Groups.
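Option B (App Engine standard) amounts to an `app.yaml` plus a single deploy command. A minimal sketch, with a hypothetical project ID:

```shell
# Hypothetical project. The service itself is a plain Go HTTP handler;
# the only configuration App Engine standard needs is the runtime line.
cat > app.yaml <<'EOF'
runtime: go112   # Go 1.12 standard environment runtime
EOF

# App Engine builds, deploys, and autoscales the service; no servers to manage.
gcloud app deploy app.yaml --project my-project --quiet
```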

Your company is designing its data lake on Google Cloud and wants to develop different ingestion pipelines to collect unstructured data from different sources. After the data is stored in Google Cloud, it will be processed in several data pipelines to build a recommendation engine for end users on the website. The structure of the data retrieved from the source systems can change at any time. The data must be stored exactly as it was retrieved for reprocessing purposes in case the data structure is incompatible with the current processing pipelines. You need to design an architecture to support the use case after you retrieve the data. What should you do?

A. Send the data through the processing pipeline, and then store the processed data in a BigQuery table for reprocessing.
B. Store the data in a BigQuery table. Design the processing pipelines to retrieve the data from the table.
C. Send the data through the processing pipeline, and then store the processed data in a Cloud Storage bucket for reprocessing.
D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.
Suggested answer: D
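Option D's landing-zone pattern can be sketched with a couple of commands (bucket and object names are hypothetical):

```shell
# Land the raw bytes exactly as retrieved; pipelines read from the bucket.
gsutil mb -l us-central1 gs://example-raw-landing

# Object versioning preserves every retrieval even if an object is overwritten,
# so incompatible structures can always be reprocessed later.
gsutil versioning set on gs://example-raw-landing
gsutil cp source-export.json gs://example-raw-landing/source-a/2024-01-01/
```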

You are responsible for the Google Cloud environment in your company. Multiple departments need access to their own projects, and the members within each department will have the same project responsibilities. You want to structure your Google Cloud environment for minimal maintenance and maximum overview of IAM permissions as each department's projects start and end. You want to follow Google-recommended practices. What should you do?

A. Grant all department members the required IAM permissions for their respective projects.
B. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders.
C. Create a folder per department and grant the respective members of the department the required IAM permissions at the folder level. Structure all projects for each department under the respective folders.
D. Create a Google Group per department and add all department members to their respective groups. Grant each group the required IAM permissions for their respective projects.
Suggested answer: B
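Option B's group-plus-folder structure takes three commands per department. A sketch with hypothetical organization, folder, and group identifiers:

```shell
# One folder per department under the organization.
gcloud resource-manager folders create \
    --display-name="Finance" --organization=123456789012

# Grant IAM once, at the folder, to the department's Google Group;
# every project created under the folder inherits it automatically.
gcloud resource-manager folders add-iam-policy-binding 987654321 \
    --member="group:finance-team@example.com" \
    --role="roles/editor"

# New department projects are simply parented under the folder.
gcloud projects create finance-reporting --folder=987654321
```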

Your company has an application running as a Deployment in a Google Kubernetes Engine (GKE) cluster. You have separate clusters for development, staging, and production. You have discovered that the team is able to deploy a Docker image to the production cluster without first testing the deployment in development and then staging. You want to allow the team to have autonomy but want to prevent this from happening. You want a Google Cloud solution that can be implemented quickly with minimal effort. What should you do?

A. Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in the given environment.
B. Implement a corporate policy to prevent teams from deploying Docker images to an environment unless the Docker image was tested in an earlier environment.
C. Configure binary authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline.
D. Create a Kubernetes admissions controller to prevent the container from starting if it is not approved for usage in the given environment.
Suggested answer: C
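Option C's flow has two halves: enforcement on the cluster and attestation in CI. A rough sketch, with hypothetical cluster, attestor, and key names:

```shell
# Enforce the project's Binary Authorization policy on the production cluster:
# only images carrying the required attestations are admitted.
gcloud container clusters update prod-cluster \
    --zone us-central1-a \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE

# In the CI pipeline, after the image passes staging tests,
# sign and create an attestation for that exact image digest.
gcloud container binauthz attestations sign-and-create \
    --artifact-url="us-docker.pkg.dev/my-project/repo/app@sha256:..." \
    --attestor="projects/my-project/attestors/qa-attestor" \
    --keyversion="projects/my-project/locations/global/keyRings/kr/cryptoKeys/qa/cryptoKeyVersions/1"
```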

Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity, the overall cost, and database load. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?

A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage.
B. Use the Data Transfer appliance to perform an offline migration.
C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage.
D. Compress the data and upload it with gsutil -m to enable multi-threaded copy.
Suggested answer: D

Explanation:

At 1 Gbps, 10 TB transfers online in roughly a day, well under the threshold at which Google recommends a Transfer Appliance. Compressing the existing export and copying it with gsutil -m minimizes time and cost, and places no additional load on the database.
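Option D in practice (file and bucket names hypothetical):

```shell
# Compress the export first (-k keeps the original file).
gzip -k database-export.sql

# -m parallelizes across files/threads; raising the composite-upload
# threshold also splits large objects into parallel chunks to fill the link.
gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M \
    cp database-export.sql.gz gs://example-migration-bucket/
```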

Your company has an enterprise application running on Compute Engine that requires high availability and high performance. The application has been deployed on two instances in two zones in the same region in active-passive mode.

The application writes data to a persistent disk. In the case of a single zone outage, that data should be immediately made available to the other instance in the other zone. You want to maximize performance while minimizing downtime and data loss. What should you do?

A. 1. Attach a persistent SSD disk to the first instance. 2. Create a snapshot every hour. 3. In case of a zone outage, recreate a persistent SSD disk in the second instance where data is coming from the created snapshot.
B. 1. Create a Cloud Storage bucket. 2. Mount the bucket into the first instance with gcs-fuse. 3. In case of a zone outage, mount the Cloud Storage bucket to the second instance with gcs-fuse.
C. 1. Attach a regional SSD persistent disk to the first instance. 2. In case of a zone outage, force-attach the disk to the other instance.
D. 1. Attach a local SSD to the first instance disk. 2. Execute an rsync command every hour where the target is a persistent SSD disk attached to the second instance. 3. In case of a zone outage, use the second instance.
Suggested answer: C
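The regional-disk approach (synchronous replication across two zones, force-attach on failover) looks roughly like this, with hypothetical instance, disk, and zone names:

```shell
# A regional SSD persistent disk replicates every write synchronously
# to both zones, so no data is lost if one zone goes down.
gcloud compute disks create app-data \
    --type=pd-ssd --size=200GB \
    --region=us-central1 --replica-zones=us-central1-a,us-central1-b

gcloud compute instances attach-disk app-primary \
    --disk=app-data --disk-scope=regional --zone=us-central1-a

# On a zone outage, force-attach the same disk to the standby instance
# in the surviving zone (the failed instance cannot release it cleanly).
gcloud compute instances attach-disk app-standby \
    --disk=app-data --disk-scope=regional --force-attach --zone=us-central1-b
```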

You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate the encryption keys outside of Google Cloud. You need to implement a solution. What should you do?

A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
B. Generate a new key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and select the created key.
C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
Suggested answer: D
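The import-a-key route can be sketched as below. Note that BigQuery actually takes customer-managed keys (CMEK) via Cloud KMS, not raw customer-supplied keys; key-ring, key, and dataset names here are hypothetical:

```shell
# Create an empty key shell, then an import job to bring in the
# externally generated key material (wrapped for transport).
gcloud kms keys create bq-key --keyring my-kr --location us \
    --purpose encryption --skip-initial-version-creation
gcloud kms import-jobs create my-import-job --keyring my-kr --location us \
    --import-method rsa-oaep-3072-sha256-aes-256 --protection-level software
gcloud kms keys versions import --key bq-key --keyring my-kr --location us \
    --import-job my-import-job --target-key-file ./key-material \
    --algorithm google-symmetric-encryption

# Create the BigQuery dataset with the imported key as its default CMEK.
bq mk --dataset \
    --default_kms_key projects/my-project/locations/us/keyRings/my-kr/cryptoKeys/bq-key \
    my-project:sensitive_data
```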

Your organization has stored sensitive data in a Cloud Storage bucket. For regulatory reasons, your company must be able to rotate the encryption key used to encrypt the data in the bucket. The data will be processed in Dataproc. You want to follow Google-recommended practices for security. What should you do?

A. Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of Cloud KMS.
B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.
C. Generate a GPG key pair. Encrypt the data using the GPG key. Upload the encrypted data to the bucket.
D. Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption keys feature.
Suggested answer: B

Explanation:

Customer-supplied encryption keys (CSEK) must be provided with every request and are not supported by Dataproc. Setting a Cloud KMS key (CMEK) as the bucket's default encryption key supports scheduled rotation and is the Google-recommended practice.
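Option B with automatic rotation might look like this (key-ring, key, and bucket names hypothetical):

```shell
# A Cloud KMS key that rotates itself every 90 days.
gcloud kms keys create bucket-key --keyring my-kr --location us-central1 \
    --purpose encryption --rotation-period 90d \
    --next-rotation-time 2025-01-01T00:00:00Z

# Make it the bucket's default encryption key; Dataproc then reads the
# objects transparently, with no application-side crypto.
gsutil kms encryption -k \
    projects/my-project/locations/us-central1/keyRings/my-kr/cryptoKeys/bucket-key \
    gs://example-sensitive-bucket
```

The Cloud Storage service agent also needs the `roles/cloudkms.cryptoKeyEncrypterDecrypter` role on the key before objects can be written.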

Your team needs to create a Google Kubernetes Engine (GKE) cluster to host a newly built application that requires access to third-party services on the internet. Your company does not allow any Compute Engine instance to have a public

IP address on Google Cloud. You need to create a deployment strategy that adheres to these guidelines. What should you do?

A. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet.
B. Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).
D. Create a Compute Engine instance, and install a NAT Proxy on the instance. Configure all workloads on GKE to pass through this proxy to access third-party services on the Internet.
Suggested answer: A

Explanation:

Private Google Access only provides access to Google APIs and services. Reaching third-party services on the internet from nodes without public IPs requires Cloud NAT.

Reference: https://cloud.google.com/architecture/prep-kubernetes-engine-for-prod
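Option A's private-cluster-plus-Cloud-NAT setup, with hypothetical names and CIDRs:

```shell
# Nodes get only private IPs; the control plane uses a private range.
gcloud container clusters create private-cluster \
    --region us-central1 --enable-ip-alias \
    --enable-private-nodes --master-ipv4-cidr 172.16.0.0/28

# Cloud NAT gives the private nodes outbound internet access
# without assigning any public IPs to the instances.
gcloud compute routers create nat-router --region us-central1 --network default
gcloud compute routers nats create nat-config --router nat-router \
    --region us-central1 --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```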

Your company has a support ticketing solution that uses App Engine Standard. The project that contains the App Engine application already has a Virtual Private Cloud (VPC) network fully connected to the company's on-premises environment through a Cloud VPN tunnel. You want to enable the App Engine application to communicate with a database that is running in the company's on-premises environment. What should you do?

A. Configure private Google access for on-premises hosts only.
B. Configure private Google access.
C. Configure private services access.
D. Configure serverless VPC access.
Suggested answer: D

Explanation:

App Engine standard runs outside the VPC, so Private Google Access does not apply to it. Serverless VPC Access connects the application to the VPC network, from which the existing Cloud VPN tunnel reaches the on-premises database.
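Option D in outline (connector name, network, and CIDR hypothetical):

```shell
# The connector bridges App Engine standard into the VPC; traffic then
# rides the existing Cloud VPN tunnel to the on-premises database.
gcloud compute networks vpc-access connectors create app-connector \
    --region us-central1 --network my-vpc --range 10.8.0.0/28

# The App Engine service references it in app.yaml:
#   vpc_access_connector:
#     name: projects/my-project/locations/us-central1/connectors/app-connector
```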
Total 285 questions