
Google Associate Cloud Engineer Practice Test - Questions Answers, Page 15


You created several resources in multiple Google Cloud projects. All projects are linked to different billing accounts. To better estimate future charges, you want to have a single visual representation of all costs incurred. You want to include new cost data as soon as possible. What should you do?

A. Configure Billing Data Export to BigQuery and visualize the data in Data Studio.
B. Visit the Cost Table page to get a CSV export and visualize it using Data Studio.
C. Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.
D. Use the Reports view in the Cloud Billing Console to view the desired cost information.
Suggested answer: A

Explanation:

https://cloud.google.com/billing/docs/how-to/export-data-bigquery 'Cloud Billing export to BigQuery enables you to export detailed Google Cloud billing data (such as usage, cost estimates, and pricing data) automatically throughout the day to a BigQuery dataset that you specify.'
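Once the export is enabled, the exported table can be queried directly and connected to Data Studio as a BigQuery data source. A minimal sketch, assuming the export lands in a dataset named billing_export (the actual table name includes your billing account ID; all names below are placeholders):

```shell
# Hypothetical project/dataset/table names; substitute the ones
# created by your Billing Data Export configuration.
bq query --use_legacy_sql=false '
SELECT
  project.id AS project_id,
  SUM(cost) AS total_cost
FROM `my-project.billing_export.gcloud_billing_export_v1_XXXXXX`
GROUP BY project_id
ORDER BY total_cost DESC'
```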

Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it. What should you do?

A. Create the instance without a public IP address.
B. Create the instance with Private Google Access enabled.
C. Create a deny-all egress firewall rule on the VPC network.
D. Create a route on the VPC to route all traffic to the instance over the VPN tunnel.
Suggested answer: A

Explanation:

VMs cannot communicate over the internet without a public IP address. Private Google Access permits access to Google APIs and services in Google's production infrastructure. https://cloud.google.com/vpc/docs/private-google-access
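As a sketch, the --no-address flag omits the external IP at creation time (instance, subnet, and zone names below are hypothetical):

```shell
# Create an instance with no external (public) IP address,
# so no public Internet traffic can be routed to it.
gcloud compute instances create my-internal-vm \
    --zone us-central1-a \
    --subnet my-subnet \
    --no-address
```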

Your team maintains the infrastructure for your organization. The current infrastructure requires changes. You need to share your proposed changes with the rest of the team. You want to follow Google's recommended best practices. What should you do?

A. Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.
B. Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.
C. Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Cloud Storage bucket.
D. Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.
Suggested answer: B

Explanation:

Deployment Manager templates let you describe the changes you want to make to your infrastructure as code. Storing the templates in Cloud Source Repositories lets your team review and collaborate on the proposed changes. Cloud Source Repositories are fully featured, scalable, private Git repositories you can use to store, manage, and track changes to your code.

https://cloud.google.com/source-repositories/docs/features
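A sketch of this workflow (repository and file names are hypothetical): version the Deployment Manager templates in Cloud Source Repositories and push them for the team to review.

```shell
# Create a repository and push Deployment Manager templates to it.
gcloud source repos create infra-templates
gcloud source repos clone infra-templates
cd infra-templates
cp ~/proposed-changes/*.yaml ~/proposed-changes/*.jinja .
git add .
git commit -m "Propose infrastructure changes"
git push origin master
```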

You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You want to back up this instance daily for disaster recovery purposes. You want to keep the backups for 30 days. You want the Google-recommended solution with the least management overhead and the least number of services. What should you do?

A. 1. Update your instances' metadata to add the following value: snapshot-schedule: 0 1 * * * 2. Update your instances' metadata to add the following value: snapshot-retention: 30
B. 1. In the Cloud Console, go to the Compute Engine Disks page and select your instance's disk. 2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters: Schedule frequency: Daily; Start time: 1:00 AM - 2:00 AM; Autodelete snapshots after 30 days.
C. 1. Create a Cloud Function that creates a snapshot of your instance's disk. 2. Create a Cloud Function that deletes snapshots that are older than 30 days. 3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
D. 1. Create a bash script in the instance that copies the content of the disk to Cloud Storage. 2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket. 3. Configure the instance's crontab to execute these scripts daily at 1:00 AM.
Suggested answer: B

Explanation:

Creating scheduled snapshots for persistent disk This document describes how to create a snapshot schedule to regularly and automatically back up your zonal and regional persistent disks. Use snapshot schedules as a best practice to back up your Compute Engine workloads. After creating a snapshot schedule, you can apply it to one or more persistent disks. https://cloud.google.com/compute/docs/disks/scheduled-snapshots
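The console steps above can also be sketched with gcloud (policy, disk, region, and zone names are assumptions):

```shell
# Create a schedule: daily snapshot at 1:00 AM, retained for 30 days.
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region us-central1 \
    --daily-schedule \
    --start-time 01:00 \
    --max-retention-days 30

# Attach the schedule to the instance's persistent disk.
gcloud compute disks add-resource-policies my-app-disk \
    --zone us-central1-a \
    --resource-policies daily-backup
```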

Your existing application running in Google Kubernetes Engine (GKE) consists of multiple pods running on four GKE n1-standard-2 nodes. You need to deploy additional pods requiring n2-highmem-16 nodes without any downtime. What should you do?

A. Use gcloud container clusters upgrade. Deploy the new services.
B. Create a new Node Pool and specify machine type n2-highmem-16. Deploy the new pods.
C. Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
D. Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.
Suggested answer: B

Explanation:

https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
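A sketch of adding the node pool (cluster, pool, and zone names are assumptions); the existing n1-standard-2 nodes and their pods keep running untouched:

```shell
# Add a second node pool with the larger machine type.
gcloud container node-pools create highmem-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --machine-type n2-highmem-16 \
    --num-nodes 3
```

The new pods can then be scheduled onto this pool, for example with a nodeSelector on the cloud.google.com/gke-nodepool label.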

You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?

A. Create a Dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.
B. Create a Dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.
C. Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.
D. Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.
Suggested answer: D

Explanation:

'The Cloud Spanner to Cloud Storage Text template is a batch pipeline that reads in data from a Cloud Spanner table, optionally transforms the data via a JavaScript User Defined Function (UDF) that you provide, and writes it to Cloud Storage as CSV text files.'

https://cloud.google.com/dataflow/docs/guides/templates/provided-batch#cloudspannertogcstext

'The Dataflow connector for Cloud Spanner lets you read data from and write data to Cloud Spanner in a Dataflow pipeline'

https://cloud.google.com/spanner/docs/dataflow-connector

https://cloud.google.com/bigquery/external-data-sources
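A rough sketch of the external-table approach with the bq CLI (all dataset, table, bucket, and column names are hypothetical; the Bigtable external table needs a JSON table definition file describing column families):

```shell
# External table over the Cloud Spanner export in Cloud Storage
# (inline schema@format=uri definition; schema is assumed).
bq mk --external_table_definition=user_id:STRING,state:STRING@CSV=gs://my-spanner-exports/users/*.csv \
    mydataset.spanner_users

# External table over Cloud Bigtable via a table definition file.
bq mk --external_table_definition=bigtable_def.json mydataset.bigtable_events

# Ad hoc join for the specific users.
bq query --use_legacy_sql=false '
SELECT s.user_id, s.state, e.*
FROM mydataset.spanner_users AS s
JOIN mydataset.bigtable_events AS e ON s.user_id = e.rowkey
WHERE s.user_id IN ("user_123")'
```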

You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single Compute Engine zone, eliminate downtime, and minimize cost. What should you do?


A. Create Compute Engine resources in us-central1-b. Balance the load across both us-central1-a and us-central1-b.
B. Create a Managed Instance Group and specify us-central1-a as the zone. Configure the Health Check with a short Health Interval.
C. Create an HTTP(S) Load Balancer. Create one or more global forwarding rules to direct traffic to your VMs.
D. Perform regular backups of your application. Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. Restore from backups when notified.
Suggested answer: A

Explanation:

Distribute your resources across multiple zones and regions to tolerate outages. Google designs zones to be independent from each other: a zone usually has power, cooling, networking, and control planes that are isolated from other zones, and most single failure events will affect only a single zone. Thus, if a zone becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running. Similarly, if a region experiences any disturbances, you should have backup services running in a different region. For more information about distributing your resources and designing a robust system, see Designing Robust Systems.

https://cloud.google.com/compute/docs/regions-zones#choosing_a_region_and_zone
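One way to sketch this (instance, group, and template names are assumptions) is to create resources in a second zone, or to use a regional managed instance group that spans both zones:

```shell
# Create a VM in a second zone of the same region.
gcloud compute instances create app-vm-b \
    --zone us-central1-b \
    --machine-type n1-standard-2

# Alternatively, a regional managed instance group spanning both zones.
gcloud compute instance-groups managed create app-mig \
    --region us-central1 \
    --zones us-central1-a,us-central1-b \
    --template app-template \
    --size 4
```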

A colleague handed over a Google Cloud Platform project for you to maintain. As part of a security checkup, you want to review who has been granted the Project Owner role. What should you do?

A. In the console, validate which SSH keys have been stored as project-wide keys.
B. Navigate to Identity-Aware Proxy and check the permissions for these resources.
C. Enable Audit Logs on the IAM & admin page for all resources, and validate the results.
D. Use the command gcloud projects get-iam-policy to view the current role assignments.
Suggested answer: D

Explanation:

A simple approach is to use the filter flags available when listing the IAM policy for a project. For instance, the following command outputs all users and service accounts that hold the role roles/owner in the project: `gcloud projects get-iam-policy $PROJECT_ID --flatten='bindings[].members' --format='table(bindings.members)' --filter='bindings.role:roles/owner'` https://groups.google.com/g/google-cloud-dev/c/Z6sZs7TvygQ?pli=1

You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed. What should you do?

A. Create a new subnet in the same region as the subnet being used.
B. Add an alias IP range to the subnet used by the GKE clusters.
C. Create a new VPC, and set up VPC peering with the existing VPC.
D. Expand the CIDR range of the relevant subnet for the cluster.
Suggested answer: D

Explanation:

gcloud compute networks subnets expand-ip-range expands the IP range of a Compute Engine subnetwork. https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
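A minimal sketch of the command (subnet name, region, and target prefix length are placeholders); note that a subnet's primary range can only be expanded, never shrunk:

```shell
# Expand the subnet's primary range to a /20 so the GKE clusters
# can allocate more node IPs.
gcloud compute networks subnets expand-ip-range my-subnet \
    --region us-central1 \
    --prefix-length 20
```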

You have a batch workload that runs every night and uses a large number of virtual machines (VMs). It is fault-tolerant and can tolerate some of the VMs being terminated. The current cost of VMs is too high. What should you do?

A. Run a test using simulated maintenance events. If the test is successful, use preemptible N1 Standard VMs when running future jobs.
B. Run a test using simulated maintenance events. If the test is successful, use N1 Standard VMs when running future jobs.
C. Run a test using a managed instance group. If the test is successful, use N1 Standard VMs in the managed instance group when running future jobs.
D. Run a test using N1 standard VMs instead of N2. If the test is successful, use N1 Standard VMs when running future jobs.
Suggested answer: A

Explanation:

Creating and starting a preemptible VM instance This page explains how to create and use a preemptible virtual machine (VM) instance. A preemptible instance is an instance you can create and run at a much lower price than normal instances. However, Compute Engine might terminate (preempt) these instances if it requires access to those resources for other tasks. Preemptible instances will always terminate after 24 hours. To learn more about preemptible instances, read the preemptible instances documentation. Preemptible instances are recommended only for fault-tolerant applications that can withstand instance preemptions. Make sure your application can handle preemptions before you decide to create a preemptible instance. To understand the risks and value of preemptible instances, read the preemptible instances documentation. https://cloud.google.com/compute/docs/instances/create-start-preemptible-instance
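A sketch of both steps (instance, zone, and machine type are assumptions): create a preemptible VM for the batch workload, and rehearse how the workload handles termination by simulating a maintenance event.

```shell
# Create a preemptible VM for the nightly batch job.
gcloud compute instances create batch-worker-1 \
    --zone us-central1-a \
    --machine-type n1-standard-8 \
    --preemptible

# Test fault tolerance by simulating a maintenance event on the VM.
gcloud compute instances simulate-maintenance-event batch-worker-1 \
    --zone us-central1-a
```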
