Google Professional Cloud Architect Practice Test - Questions Answers, Page 24

Your company has an application running as a Deployment in a Google Kubernetes Engine (GKE) cluster. When releasing new versions of the application via a rolling deployment, the team has been causing outages. The root cause of the outages is misconfigurations with parameters that are only used in production. You want to put preventive measures for this in the platform to prevent outages. What should you do?

A. Configure liveness and readiness probes in the Pod specification
B. Configure an uptime alert in Cloud Monitoring
C. Create a Scheduled Task to check whether the application is available
D. Configure health checks on the managed instance group
Suggested answer: A

Explanation:

This option helps prevent outages caused by misconfigurations with parameters that are only used in production. Liveness and readiness probes are mechanisms for checking the health and availability of the Pods and containers in a GKE cluster. A liveness probe determines whether a container is still running and, if not, restarts it. A readiness probe determines whether a container is ready to serve requests and, if not, removes it from the Service endpoints. During a rolling update, a new Pod that fails its readiness probe never receives traffic and the rollout stalls rather than replacing healthy Pods with broken ones, so a production-only misconfiguration cannot take the application down. The other options are not suitable for this scenario because they either only alert on or monitor outages rather than prevent them (B, C), or apply to Compute Engine managed instance groups rather than GKE clusters (D).
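As an illustration, a minimal sketch of such probes built with the Kubernetes Python client, assuming the application exposes /healthz and /ready HTTP endpoints on port 8080 (endpoint paths, port, and image are placeholders, not part of the question):

```python
from kubernetes import client

# Assumed health endpoints; adjust to whatever the application actually exposes.
liveness = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=15,
    period_seconds=10,
    failure_threshold=3,
)
readiness = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/ready", port=8080),
    period_seconds=5,
    failure_threshold=1,
)

container = client.V1Container(
    name="app",
    image="us-docker.pkg.dev/my-project/repo/app:1.2.3",  # placeholder image
    liveness_probe=liveness,
    readiness_probe=readiness,
)
```

With a readiness probe in place, a rolling update keeps the old ReplicaSet serving until new Pods report ready, which is what prevents the outage described in the question.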

Reference:

https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps

https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes

Your company has a Google Cloud project that uses BigQuery for data warehousing. There are some tables that contain personally identifiable information (PII). Only the compliance team may access the PII. The other information in the tables must be available to the data science team. You want to minimize cost and the time it takes to assign appropriate access to the tables. What should you do?

A. 1. From the dataset where you have the source data, create views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the view.
B. 1. From the dataset where you have the source data, create materialized views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the view.
C. 1. Create a dataset for the data science team. 2. Create views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
D. 1. Create a dataset for the data science team. 2. Create materialized views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
Suggested answer: C

Explanation:

This option can help minimize cost and time by using views and authorized datasets. Views are virtual tables defined by a SQL query that can exclude PII columns from the source tables. Views do not incur storage costs and do not duplicate data. Authorized datasets are datasets that have access to another dataset's data without granting direct access to individual users or groups. By creating a dataset for the data science team and creating views of tables that exclude PII, you can share only the relevant information with the team. By assigning an appropriate project-level IAM role to the members of the data science team, you can grant them access to the BigQuery service and resources. By assigning access controls to the dataset that contains the view, you can grant them access to query the views. By authorizing the view to access the source dataset, you can enable the view to read data from the source tables without exposing PII. The other options are not optimal for this scenario, because they either use materialized views instead of views, which incur storage costs and duplicate data (B, D), or do not create a separate dataset for the data science team, which makes it harder to manage access controls (A).
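A minimal sketch of option C's view-and-authorization steps using the BigQuery Python client (project, dataset, table, and PII column names are placeholders; the project-level IAM role for the data science team would still be granted separately):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Create a view in the data science team's dataset that excludes PII columns.
view = bigquery.Table("my-project.data_science.customers_no_pii")
view.view_query = """
    SELECT * EXCEPT (email, ssn)  -- assumed PII column names
    FROM `my-project.warehouse.customers`
"""
view = client.create_table(view)

# Authorize the view against the source dataset so that users who can query
# the view do not need any direct access to the underlying tables.
source = client.get_dataset("my-project.warehouse")
entries = list(source.access_entries)
entries.append(
    bigquery.AccessEntry(
        role=None,
        entity_type="view",
        entity_id={
            "projectId": "my-project",
            "datasetId": "data_science",
            "tableId": "customers_no_pii",
        },
    )
)
source.access_entries = entries
client.update_dataset(source, ["access_entries"])
```

Because the view is only a stored query, it adds no storage cost, which is why it is preferred here over a materialized view.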

Reference:

https://cloud.google.com/bigquery/docs/views

https://cloud.google.com/bigquery/docs/authorized-datasets

Your company has a Google Workspace account and a Google Cloud Organization. Some developers in the company have created Google Cloud projects outside of the Google Cloud Organization.

You want to create an Organization structure that allows developers to create projects, but prevents them from modifying production projects. You want to manage policies for all projects centrally and be able to set more restrictive policies for production projects.

You want to minimize disruption to users and developers when business needs change in the future. You want to follow Google-recommended practices. How should you design the Organization structure?

A. 1. Create a second Google Workspace account and Organization. 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on both Organizations. 5. Additionally set the production policies on the original Organization.
B. 1. Create a folder under the Organization resource named "Production". 2. Grant all developers the Project Creator IAM role on the Organization. 3. Move the developer projects into the Organization. 4. Set the policies for all projects on the Organization. 5. Additionally set the production policies on the "Production" folder.
C. 1. Create folders under the Organization resource named "Development" and "Production". 2. Grant all developers the Project Creator IAM role on the "Development" folder. 3. Move the developer projects into the "Development" folder. 4. Set the policies for all projects on the Organization. 5. Additionally set the production policies on the "Production" folder.
D. 1. Designate the Organization for production projects only. 2. Ensure that developers do not have the Project Creator IAM role on the Organization. 3. Create development projects outside of the Organization using the developer Google Workspace accounts. 4. Set the policies for all projects on the Organization. 5. Additionally set the production policies on the individual production projects.
Suggested answer: C

Explanation:

This option can help create an organization structure that allows developers to create projects, but prevents them from modifying production projects. Folders are containers for projects and other folders within Google Cloud organizations. Folders allow resources to be structured hierarchically and inherit policies from their parent resources. By creating folders under the organization resource named "Development" and "Production", you can organize your projects by environment and apply different policies to them. By granting all developers the Project Creator IAM role on the "Development" folder, you can allow them to create projects under that folder, but not under the "Production" folder. By moving the developer projects into the "Development" folder, you can ensure that they are subject to the policies set on that folder. By setting the policies for all projects on the organization, you can manage policies centrally and efficiently. By additionally setting the production policies on the "Production" folder, you can enforce more restrictive policies for production projects and prevent developers from modifying them. The other options are not optimal for this scenario, because they either create a second Google Workspace account and organization, which increases complexity and cost (A), or do not use folders to organize projects by environment, which makes it harder to manage policies and permissions (B, D).
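A hedged sketch of the folder and IAM steps using the Resource Manager Python client (organization ID and developer group are placeholders; moving existing projects would additionally use ProjectsClient.move_project):

```python
from google.cloud import resourcemanager_v3

ORG = "organizations/123456789012"  # placeholder organization ID
folders = resourcemanager_v3.FoldersClient()

# Create the Development and Production folders under the Organization.
dev = folders.create_folder(
    request={"folder": {"parent": ORG, "display_name": "Development"}}
).result()
folders.create_folder(
    request={"folder": {"parent": ORG, "display_name": "Production"}}
).result()

# Grant developers Project Creator on the Development folder only, so they
# can create projects there but cannot touch the Production folder.
policy = folders.get_iam_policy(request={"resource": dev.name})
policy.bindings.add(
    role="roles/resourcemanager.projectCreator",
    members=["group:developers@example.com"],  # placeholder group
)
folders.set_iam_policy(request={"resource": dev.name, "policy": policy})
```

Policies set at the organization level are inherited by both folders, and more restrictive policies can then be attached to the "Production" folder alone.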

Reference:

https://cloud.google.com/resource-manager/docs/creating-managing-folders

https://cloud.google.com/architecture/framework/system-design

For this question, refer to the TerramEarth case study. You are building a microservice-based application for TerramEarth. The application is based on Docker containers. You want to follow Google-recommended practices to build the application continuously and store the build artifacts. What should you do?

A. 1. Configure a trigger in Cloud Build for new source changes. 2. Invoke Cloud Build to build one container image, and tag the image with the label 'latest.' 3. Push the image to the Artifact Registry.
B. 1. Configure a trigger in Cloud Build for new source changes. 2. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. 3. Push the images to the Artifact Registry.
C. 1. Create a Scheduler job to check the repo every minute. 2. For any new change, invoke Cloud Build to build container images for the microservices. 3. Tag the images using the current timestamp, and push them to the Artifact Registry.
D. 1. Configure a trigger in Cloud Build for new source changes. 2. The trigger invokes build jobs and build container images for the microservices. 3. Tag the images with a version number, and push them to Cloud Storage.
Suggested answer: C
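Both options B and C tag per-microservice images and push them to Artifact Registry; a small illustration of deriving a commit-hash tag and the Artifact Registry image path (region, project, repository, and image names are placeholders):

```python
import subprocess

# Tag derived from the current commit, as described in option B. In a
# triggered Cloud Build, the built-in $SHORT_SHA substitution plays this role.
commit = subprocess.run(
    ["git", "rev-parse", "--short", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Artifact Registry Docker images follow REGION-docker.pkg.dev/PROJECT/REPO/IMAGE:TAG.
image = f"us-central1-docker.pkg.dev/my-project/microservices/billing-service:{commit}"

subprocess.run(["docker", "build", "-t", image, "."], check=True)
subprocess.run(["docker", "push", image], check=True)
```

Tagging with the commit hash (rather than 'latest' or a timestamp) keeps every build traceable back to the source revision that produced it.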

Your company has a Google Cloud project that uses BigQuery for data warehousing on a pay-per-use basis. You want to monitor queries in real time to discover the most costly queries and which users spend the most. What should you do?

A. 1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage. 2. Develop a Dataflow pipeline to compute the cost of queries split by users.
B. 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery. 2. Perform a BigQuery query on the generated table to extract the information you need.
C. 1. Activate billing export into BigQuery. 2. Perform a BigQuery query on the billing table to extract the information you need.
D. 1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch a query. 2. Open the Billing page of the project. 3. Select Reports. 4. Select BigQuery as the product and filter by the user you want to check.
Suggested answer: C

Explanation:

https://cloud.google.com/blog/products/data-analytics/taking-a-practical-approach-to-bigquery-cost-monitoring
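The referenced blog post computes per-user query cost from BigQuery data access logs exported to BigQuery, as described in option B. A hedged sketch of such a query, assuming the legacy AuditData log format, a sink dataset named bigquery_audit, and the historical $5/TB on-demand rate (field paths and pricing differ with the newer BigQueryAuditMetadata format and current editions):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Approximate cost per user over the last 7 days, based on billed bytes in the
# exported data access audit logs (table and field names are assumptions).
query = """
SELECT
  protopayload_auditlog.authenticationInfo.principalEmail AS user_email,
  SUM(protopayload_auditlog.servicedata_v1_bigquery.jobCompletedEvent.job.jobStatistics.totalBilledBytes)
    / POW(2, 40) * 5.0 AS approx_cost_usd
FROM `my-project.bigquery_audit.cloudaudit_googleapis_com_data_access`
WHERE DATE(timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY user_email
ORDER BY approx_cost_usd DESC
"""
for row in client.query(query).result():
    print(row.user_email, round(row.approx_cost_usd, 2))
```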

Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?

A. 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
B. 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level to create a lock.
C. 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2. Set a time_partitioning_expiration of 30 days.
D. 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
Suggested answer: B

Explanation:

The recommended practice for managing logs generated on Compute Engine is to install the logging agent (the Cloud Ops agent) and send the logs to Cloud Logging. The collected logs are then routed through a Cloud Logging sink and exported to Cloud Storage.

Cloud Storage is the right destination because the requirements call for a retention-based lifecycle: the logs must be available for active query for 30 days, and after that they only need to be retained for audit purposes. During the first 30 days the data in Cloud Storage can still be queried from BigQuery using external tables, and an Object Lifecycle rule can move objects older than one month to Coldline for a cost-optimized solution. A bucket-level retention policy with bucket lock guarantees the two-year retention required for compliance.

Therefore, the correct steps are:

1. Install the Cloud Ops agent on all instances.
2. Create a sink that exports the logs to a regional Cloud Storage bucket.
3. Create an Object Lifecycle rule to move the files to a Coldline Cloud Storage bucket after one month.
4. Set up a bucket-level retention policy using bucket lock.
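A minimal sketch of steps 3 and 4 with the Cloud Storage Python client (the bucket name is a placeholder; note that locking a retention policy is irreversible):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-log-archive")  # placeholder bucket name

# Step 3: move objects to Coldline once they are 30 days old.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)

# Step 4: retain every object for at least two years, then lock the policy
# so the retention period cannot be reduced or removed.
bucket.retention_period = 2 * 365 * 24 * 60 * 60  # seconds
bucket.patch()
bucket.lock_retention_policy()
```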

Your company has an application running on App Engine that allows users to upload music files and share them with other people. You want to allow users to upload files directly into Cloud Storage from their browser session. The payload should not be passed through the backend. What should you do?

A. 1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin. 2. Use the Cloud Storage Signed URL feature to generate a POST URL.
B. 1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin. 2. Assign the Cloud Storage WRITER role to users who upload files.
C. 1. Use the Cloud Storage Signed URL feature to generate a POST URL. 2. Use App Engine default credentials to sign requests against Cloud Storage.
D. 1. Assign the Cloud Storage WRITER role to users who upload files. 2. Use App Engine default credentials to sign requests against Cloud Storage.
Suggested answer: B
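Options A and C both rely on Cloud Storage signed URLs; a minimal sketch of setting a CORS policy for the App Engine origin and generating a V4 signed upload URL (bucket name, origin, and object name are placeholders; on App Engine the signing credentials must be able to sign, for example via the IAM Credentials API):

```python
import datetime
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-music-uploads")  # placeholder bucket

# CORS configuration allowing browser uploads from the App Engine origin.
bucket.cors = [{
    "origin": ["https://my-app.appspot.com"],  # placeholder App Engine base URL
    "method": ["PUT", "POST"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600,
}]
bucket.patch()

# Signed URL the browser can use to upload directly, bypassing the backend.
url = bucket.blob("uploads/song.mp3").generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="PUT",
    content_type="audio/mpeg",
)
print(url)
```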

You are deploying an application to Google Cloud. The application is part of a system. The application in Google Cloud must communicate over a private network with applications in a non-Google Cloud environment. The expected average throughput is 200 kbps. The business requires:

* 99.99% system availability

* cost optimization

You need to design the connectivity between the locations to meet the business requirements. What should you provision?

A. A Classic Cloud VPN gateway connected with one tunnel to an on-premises VPN gateway.
B. A Classic Cloud VPN gateway connected with two tunnels to an on-premises VPN gateway.
C. An HA Cloud VPN gateway connected with two tunnels to an on-premises VPN gateway.
D. Two HA Cloud VPN gateways connected to two on-premises VPN gateways. Configure each HA Cloud VPN gateway to have two tunnels, each connected to different on-premises VPN gateways.
Suggested answer: C

Explanation:

https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies#configurations_that_support_9999_availability
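For reference, a hedged sketch of creating the HA VPN gateway with the Compute Engine Python client (project, region, and network are placeholders; the two tunnels, the peer VPN gateway, and a Cloud Router with BGP sessions still need to be configured to reach the 99.99% SLA):

```python
from google.cloud import compute_v1

project = "my-project"   # placeholder
region = "us-central1"   # placeholder

# An HA VPN gateway automatically gets two interfaces; one tunnel per
# interface to the on-premises gateway yields the 99.99% availability SLA.
gateway = compute_v1.VpnGateway(
    name="ha-vpn-gateway",
    network=f"projects/{project}/global/networks/my-vpc",  # placeholder VPC
)

op = compute_v1.VpnGatewaysClient().insert(
    project=project,
    region=region,
    vpn_gateway_resource=gateway,
)
op.result()  # wait for creation to complete
```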

Your company uses Google Kubernetes Engine (GKE) as a platform for all workloads. Your company has a single large GKE cluster that contains batch, stateful, and stateless workloads. The GKE cluster is configured with a single node pool with 200 nodes. Your company needs to reduce the cost of this cluster but does not want to compromise availability. What should you do?

A. Create a second GKE cluster for the batch workloads only. Allocate the 200 original nodes across both clusters.
B. Configure a HorizontalPodAutoscaler for all stateless workloads and for all compatible stateful workloads. Configure the cluster to use node auto scaling.
C. Configure CPU and memory limits on the namespaces in the cluster. Configure all Pods to have CPU and memory limits.
D. Change the node pool to use Spot VMs.
Suggested answer: B

Explanation:

One way to reduce the cost of a Google Kubernetes Engine (GKE) cluster without compromising availability is to use horizontal pod autoscalers (HPA) and node auto scaling. HPA allows you to automatically scale the number of Pods in a deployment based on the resource usage of the Pods. By configuring HPA for stateless workloads and for compatible stateful workloads, you can ensure that the number of Pods is automatically adjusted based on the actual resource usage, which can help to reduce costs. Node auto scaling allows you to automatically add or remove nodes from the node pool based on the resource usage of the cluster. By configuring node auto scaling, you can ensure that the cluster has the minimum number of nodes needed to meet the resource requirements of the workloads, which can also help to reduce costs.
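A minimal sketch of an HPA for one of the stateless Deployments using the Kubernetes Python client (namespace, Deployment name, and thresholds are placeholders); node autoscaling itself is enabled on the GKE node pool, for example with gcloud container clusters update --enable-autoscaling:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-frontend-hpa"),  # placeholder name
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-frontend",
        ),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=60,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa,
)
```

As Pods scale down during quiet periods, the cluster autoscaler removes underused nodes, which is where the cost saving comes from without sacrificing availability.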

Your company has a Google Cloud project that uses BigQuery for data warehousing. The VPN tunnel between the on-premises environment and Google Cloud is configured with Cloud VPN. Your security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing. What should you do?

A. Configure VPC Service Controls and configure Private Google Access for on-premises hosts.
B. Create a service account, grant the BigQuery JobUser role and Storage Object Viewer role to the service account, and remove all other Identity and Access Management (IAM) access from the project.
C. Configure Private Google Access.
D. Configure Private Service Connect.
Suggested answer: A