
Google Associate Cloud Engineer Practice Test - Questions Answers, Page 10

Your company's infrastructure is on-premises, but all machines are running at maximum capacity. You want to burst to Google Cloud. The workloads on Google Cloud must be able to communicate directly with the workloads on-premises using a private IP range. What should you do?

A. In Google Cloud, configure the VPC as a host for Shared VPC.
B. In Google Cloud, configure the VPC for VPC Network Peering.
C. Create bastion hosts both in your on-premises environment and on Google Cloud. Configure both as proxy servers using their public IP addresses.
D. Set up Cloud VPN between the infrastructure on-premises and Google Cloud.
Suggested answer: D

Explanation:

Cloud VPN is the only listed option that connects an on-premises network to a VPC over private IP addresses: 'HA VPN is a high-availability (HA) Cloud VPN solution that lets you securely connect your on-premises network to your VPC network through an IPsec VPN connection in a single region.'

https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview

VPC Network Peering (option B) does not apply, because it connects two VPC networks rather than a VPC and an on-premises network: 'Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization.'

https://cloud.google.com/vpc/docs/vpc-peering

Cloud Interconnect would also meet the requirement but is not among the options: 'Cloud Interconnect provides low latency, high availability connections that enable you to reliably transfer data between your on-premises and Google Cloud Virtual Private Cloud (VPC) networks.'

https://cloud.google.com/network-connectivity/docs/interconnect/concepts/overview
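
A minimal sketch of the suggested answer using HA VPN (the network, region, ASN, shared secret, and peer gateway address are all hypothetical):

    # HA VPN gateway and Cloud Router in the VPC.
    gcloud compute vpn-gateways create ha-vpn-gw \
        --network=my-vpc --region=us-central1
    gcloud compute routers create my-router \
        --network=my-vpc --region=us-central1 --asn=65001

    # Describe the on-premises VPN device, then build an IPsec tunnel to it.
    gcloud compute external-vpn-gateways create peer-gw \
        --interfaces=0=203.0.113.1
    gcloud compute vpn-tunnels create tunnel-0 \
        --vpn-gateway=ha-vpn-gw --interface=0 \
        --peer-external-gateway=peer-gw --peer-external-gateway-interface=0 \
        --router=my-router --ike-version=2 --shared-secret=SECRET \
        --region=us-central1

BGP sessions on the Cloud Router then exchange the private routes in both directions.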

You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?

A. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
B. Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
C. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.
D. Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.
Suggested answer: D

Explanation:

Coldline Storage is Google Cloud's low-cost storage class for infrequently accessed archival data. Unlike some competing cold-storage offerings, Coldline imposes no retrieval delay before data access. Regional Storage satisfies the single-geographic-location compliance requirement, and archiving to Coldline after 30 days fits the annual access pattern. The documentation describes the Coldline storage class as follows:

Coldline Storage

Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs.

Coldline Storage is ideal for data you plan to read or modify at most once a quarter. Note, however, that for data being kept entirely for backup or archiving purposes, Archive Storage is more cost-effective, as it offers the lowest storage costs.

https://cloud.google.com/storage/docs/storage-classes#coldline
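
A minimal sketch of the lifecycle rule from the suggested answer (the file and bucket names are hypothetical):

    # lifecycle.json: move objects to Coldline 30 days after creation.
    {
      "rule": [
        {
          "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
          "condition": {"age": 30}
        }
      ]
    }

Apply it to a Regional bucket with:

    gsutil lifecycle set lifecycle.json gs://my-archive-bucket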

Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task. What should you do?

A. Go to Data Catalog and search for employee_ssn in the search box.
B. Write a shell script that uses the bq command line tool to loop through all the projects in your organization.
C. Write a script that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn column.
D. Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find employee_ssn column.
Suggested answer: A

Explanation:

Data Catalog automatically indexes metadata, including column names, for BigQuery tables across all projects in an organization. A single keyword search for employee_ssn therefore surfaces every table containing that column, with no scripting required, which minimizes effort compared with looping over hundreds of projects.

https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui
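
A minimal sketch of the same search from the command line (the organization ID is hypothetical); the identical query works in the Data Catalog search box in the console:

    gcloud data-catalog search 'column:employee_ssn' \
        --include-organization-ids=123456789012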

You create a Deployment with 2 replicas in a Google Kubernetes Engine cluster that has a single preemptible node pool. After a few minutes, you use kubectl to examine the status of your Pods and observe that one of them is still in Pending status.

What is the most likely cause?

A. The pending Pod's resource requests are too large to fit on a single node of the cluster.
B. Too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.
C. The node pool is configured with a service account that does not have permission to pull the container image used by the pending Pod.
D. The pending Pod was originally scheduled on a node that has been preempted between the creation of the Deployment and your verification of the Pods' status. It is currently being rescheduled on a new node.
Suggested answer: B

Explanation:

Answer B is correct: too many Pods are already running in the cluster, and there are not enough resources left to schedule the pending Pod.

When a Deployment has some Pods running and others stuck in Pending, the cause is most often insufficient resources on the nodes. With a single node pool, the scheduler cannot find a node with enough free CPU or memory for the remaining replica. In that case, describing the pending Pod shows a FailedScheduling event (for example, insufficient CPU on the Kubernetes nodes), and the remedy is to enable autoscaling or manually scale up the node pool.
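
A minimal diagnostic sketch (the Pod, cluster, and node-pool names are hypothetical):

    # Show scheduling events for the pending Pod; a FailedScheduling event
    # such as "Insufficient cpu" confirms a resource shortage.
    kubectl describe pod my-app-5d9c7b8f6-xk2vp

    # One remedy: let GKE add nodes when Pods are unschedulable.
    gcloud container clusters update my-cluster \
        --enable-autoscaling --min-nodes=1 --max-nodes=5 \
        --node-pool=default-pool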

You want to find out when users were added to Cloud Spanner Identity Access Management (IAM) roles on your Google Cloud Platform (GCP) project. What should you do in the GCP Console?

A. Open the Cloud Spanner console to review configurations.
B. Open the IAM & admin console to review IAM policies for Cloud Spanner roles.
C. Go to the Stackdriver Monitoring console and review information for Cloud Spanner.
D. Go to the Stackdriver Logging console, review admin activity logs, and filter them for Cloud Spanner IAM roles.
Suggested answer: D

Explanation:

Adding users to Cloud Spanner IAM roles is an administrative action, so it is recorded in the Admin Activity audit logs. These logs are always written, and you can review and filter them in the Logging console.

https://cloud.google.com/monitoring/audit-logging
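
The same logs can be queried from the command line; a minimal sketch (the project ID is hypothetical):

    # Admin Activity audit entries for Spanner IAM policy changes.
    gcloud logging read 'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"
        AND protoPayload.serviceName="spanner.googleapis.com"
        AND protoPayload.methodName:"SetIamPolicy"' --limit=20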

Your company implemented BigQuery as an enterprise data warehouse. Users from multiple business units run queries on this data warehouse. However, you notice that query costs for BigQuery are very high, and you need to control costs. Which two methods should you use? (Choose two.)

A. Split the users from business units to multiple projects.
B. Apply a user- or project-level custom query quota for BigQuery data warehouse.
C. Create separate copies of your BigQuery data warehouse for each business unit.
D. Split your BigQuery data warehouse into multiple data warehouses for each business unit.
E. Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each Project.
Suggested answer: B, E

Explanation:

Custom query quotas cap how much query data can be processed per day, at the project level or per user, while flat-rate pricing replaces per-byte on-demand billing with a predictable cost for a fixed number of slots.

https://cloud.google.com/bigquery/docs/custom-quotas
https://cloud.google.com/bigquery/pricing#flat_rate_pricing
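
A complementary per-query guard is also available in the bq tool: capping billed bytes makes a runaway query fail instead of incurring cost. A minimal sketch (the project, dataset, and table names are hypothetical):

    # Fail the query if it would bill more than ~1 GB.
    bq query --use_legacy_sql=false --maximum_bytes_billed=1000000000 \
        'SELECT order_id FROM `my-project.sales.orders` LIMIT 10'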

You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE cluster. For each of your customers, a Pod is running in that cluster, and your customers can run arbitrary code inside their Pod. You want to maximize the isolation between your customers' Pods. What should you do?

A. Use Binary Authorization and whitelist only the container images used by your customers' Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers' Pods.
C. Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter runtimeClassName: gvisor to the specification of your customers' Pods.
D. Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers' Pods.
Suggested answer: C

Explanation:

GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes when containers in the Pod execute unknown or untrusted code. Multi-tenant clusters and clusters whose containers run untrusted workloads are more exposed to security vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, or other organizations that allow their users to upload and run code. When you enable GKE Sandbox on a node pool, a sandbox is created for each Pod running on a node in that node pool. In addition, nodes running sandboxed Pods are prevented from accessing other Google Cloud services or cluster metadata. Each sandbox uses its own userspace kernel. With this in mind, you can make decisions about how to group your containers into Pods, based on the level of isolation you require and the characteristics of your applications.

Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods
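
A minimal sketch of the suggested answer (the cluster, node-pool, Pod, and image names are hypothetical):

    # Create a node pool whose Pods run inside a gVisor sandbox.
    gcloud container node-pools create gvisor-pool \
        --cluster=my-cluster --sandbox type=gvisor

Each customer Pod then opts into the sandboxed runtime class:

    apiVersion: v1
    kind: Pod
    metadata:
      name: customer-workload
    spec:
      runtimeClassName: gvisor
      containers:
      - name: app
        image: gcr.io/my-project/customer-app:latest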

Your customer has implemented a solution that uses Cloud Spanner and notices some read latency-related performance issues on one table. This table is accessed only by their users using a primary key. The table schema is shown below.

You want to resolve the issue. What should you do?

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: C

Explanation:

As mentioned in Schema and data model, you should be careful when choosing a primary key to not accidentally create hotspots in your database. One cause of hotspots is having a column whose value monotonically increases as the first key part, because this results in all inserts occurring at the end of your key space. This pattern is undesirable because Cloud Spanner divides data among servers by key ranges, which means all your inserts will be directed at a single server that will end up doing all the work.

https://cloud.google.com/spanner/docs/schema-design#primary-key-prevent-hotspots
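
The original schema and the answer options are images that are not reproduced here, so the following hypothetical Cloud Spanner DDL only illustrates the hotspot pattern the explanation describes, together with a common fix:

    -- Anti-pattern: a monotonically increasing first key part; all inserts
    -- land at the end of the key space, on a single server.
    CREATE TABLE UserAccessLog (
      LastAccess TIMESTAMP NOT NULL,
      UserId INT64 NOT NULL
    ) PRIMARY KEY (LastAccess, UserId);

    -- Fix: lead with a high-cardinality, uniformly distributed key part.
    CREATE TABLE UserAccessLog (
      UserId STRING(36) NOT NULL, -- e.g. a UUIDv4
      LastAccess TIMESTAMP NOT NULL
    ) PRIMARY KEY (UserId, LastAccess);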

Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project. What should you do?

A. Add the group for the finance team to the roles/billing.user role.
B. Add the group for the finance team to the roles/billing.admin role.
C. Add the group for the finance team to the roles/billing.viewer role.
D. Add the group for the finance team to the roles/billing.projectManager role.
Suggested answer: C

Explanation:

'Billing Account Viewer access would usually be granted to finance teams, it provides access to spend information, but does not confer the right to link or unlink projects or otherwise manage the properties of the billing account.'

https://cloud.google.com/billing/docs/how-to/billing-access
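
A minimal sketch of granting the role at the billing-account level (the account ID and group address are hypothetical):

    gcloud beta billing accounts add-iam-policy-binding 0A1B2C-3D4E5F-6A7B8C \
        --member='group:finance-team@example.com' \
        --role='roles/billing.viewer'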

Your organization has strict requirements to control access to Google Cloud projects. You need to enable your Site Reliability Engineers (SREs) to approve requests from the Google Cloud support team when an SRE opens a support case. You want to follow Google-recommended practices. What should you do?

A. Add your SREs to the roles/iam.roleAdmin role.
B. Add your SREs to the roles/accessapproval.approver role.
C. Add your SREs to a group and then add this group to the roles/iam.roleAdmin role.
D. Add your SREs to a group and then add this group to the roles/accessapproval.approver role.
Suggested answer: D
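
Explanation:

The roles/accessapproval.approver role grants permission to view, approve, and dismiss Access Approval requests, which is exactly what the SREs need in order to approve access by Google support, and Google recommends assigning roles to groups rather than to individual users. A minimal sketch of the binding (the project ID and group address are hypothetical):

    gcloud projects add-iam-policy-binding my-project \
        --member='group:sre-team@example.com' \
        --role='roles/accessapproval.approver'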