
Google Professional Data Engineer Practice Test - Questions Answers, Page 12

List of questions

Question 111


Cloud Dataproc charges you only for what you really use with _____ billing.

A. month-by-month
B. minute-by-minute
C. week-by-week
D. hour-by-hour
Suggested answer: B

Explanation:

One of the advantages of Cloud Dataproc is its low cost. Dataproc charges for what you really use with minute-by-minute billing and a low, ten-minute-minimum billing period.
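As a rough illustration of that billing model, here is a small Python sketch of the minute-by-minute, ten-minute-minimum rule described above; the per-vCPU-minute rate is a made-up placeholder, not a real Dataproc price.

```python
import math

def dataproc_billed_minutes(runtime_minutes: float, minimum_minutes: int = 10) -> int:
    """Billable minutes under minute-by-minute billing with a ten-minute minimum."""
    return max(minimum_minutes, math.ceil(runtime_minutes))

# Hypothetical rate purely for illustration -- not a real Dataproc price.
RATE_PER_VCPU_MINUTE = 0.0002

def dataproc_charge(runtime_minutes: float, vcpus: int) -> float:
    return dataproc_billed_minutes(runtime_minutes) * vcpus * RATE_PER_VCPU_MINUTE

print(dataproc_charge(7.5, 16))   # billed as 10 minutes (the minimum)
print(dataproc_charge(42.2, 16))  # billed as 43 minutes
```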

Reference: https://cloud.google.com/dataproc/docs/concepts/overview


Question 112


The YARN ResourceManager and the HDFS NameNode interfaces are available on a Cloud Dataproc cluster ____.

A. application node
B. conditional node
C. master node
D. worker node
Suggested answer: C

Explanation:

The YARN ResourceManager and the HDFS NameNode interfaces are available on a Cloud Dataproc cluster master node. The cluster master host name is the name of your Cloud Dataproc cluster followed by an -m suffix. For example, if your cluster is named "my-cluster", the master host name would be "my-cluster-m".
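A minimal Python sketch of how the master host name and the web UI URLs fit together; the port numbers (8088 for the YARN ResourceManager, 9870 for the HDFS NameNode on Hadoop 3 images, 50070 on older Hadoop 2 images) are Hadoop's defaults, so verify them against your image version.

```python
def master_host_name(cluster_name: str) -> str:
    """Dataproc master node name: the cluster name with an '-m' suffix."""
    return f"{cluster_name}-m"

# Default web UI ports; 9870 applies to Hadoop 3 images
# (older Hadoop 2 images expose the NameNode UI on 50070 instead).
WEB_UIS = {"YARN ResourceManager": 8088, "HDFS NameNode": 9870}

master = master_host_name("my-cluster")  # -> "my-cluster-m"
for ui, port in WEB_UIS.items():
    print(f"{ui}: http://{master}:{port}")
```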

Reference: https://cloud.google.com/dataproc/docs/concepts/cluster-web-interfaces#interfaces


Question 113


Which of these is NOT a way to customize the software on Dataproc cluster instances?

A. Set initialization actions
B. Modify configuration files using cluster properties
C. Configure the cluster using Cloud Deployment Manager
D. Log into the master node and make changes from there
Suggested answer: C

Explanation:

You can access the master node of the cluster by clicking the SSH button next to it in the Cloud Console.

You can use the --properties option of the gcloud dataproc clusters create command to modify many common configuration files when creating a cluster.

When creating a Cloud Dataproc cluster, you can specify initialization actions as executables and/or scripts that Cloud Dataproc will run on all nodes in your Cloud Dataproc cluster immediately after the cluster is set up. [https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/init-actions]
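For illustration, a hedged Python sketch of the two supported customizations (cluster properties and initialization actions) using the google-cloud-dataproc client library; the project, region, bucket, and script names are placeholders.

```python
from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "my-cluster",
    "config": {
        # Equivalent of the gcloud --properties flag: keys are
        # "<file-prefix>:<property>", e.g. core-site.xml settings use "core:".
        "software_config": {
            "properties": {"core:io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec"}
        },
        # Initialization actions run on every node right after cluster setup.
        "initialization_actions": [
            {"executable_file": "gs://my-bucket/my-init-script.sh"}  # placeholder script
        ],
    },
}

operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
operation.result()  # block until the cluster is ready
```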

Reference: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/cluster-properties


Question 114


In order to securely transfer web traffic data from your computer's web browser to the Cloud Dataproc cluster you should use a(n) _____.

A. VPN connection
B. Special browser
C. SSH tunnel
D. FTP connection
Suggested answer: C

Explanation:

To connect to the web interfaces, it is recommended to use an SSH tunnel to create a secure connection to the master node.
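A sketch of the documented SSH-tunnel workflow, driven from Python for consistency with the other examples; the cluster, zone, and project names are placeholders, and the gcloud flags follow the Dataproc web-interfaces guide.

```python
import subprocess

cluster, zone, project = "my-cluster", "us-central1-b", "my-project"  # placeholders
socks_port = 1080

# Open a SOCKS proxy to the master node over SSH. "-D <port>" requests
# dynamic port forwarding; "-N" keeps the tunnel open without running a
# remote command.
tunnel = subprocess.Popen([
    "gcloud", "compute", "ssh", f"{cluster}-m",
    f"--project={project}", f"--zone={zone}",
    "--", "-D", str(socks_port), "-N",
])

# With the tunnel up, start a browser that routes traffic through the proxy
# (e.g. Chrome with --proxy-server="socks5://localhost:1080") and browse to
# http://my-cluster-m:8088 for the YARN ResourceManager UI.
```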

Reference: https://cloud.google.com/dataproc/docs/concepts/cluster-web-interfaces#connecting_to_the_web_interfaces


Question 115


All Google Cloud Bigtable client requests go through a front-end server ______ they are sent to a Cloud Bigtable node.

A. before
B. after
C. only if
D. once
Suggested answer: A

Explanation:

In a Cloud Bigtable architecture all client requests go through a front-end server before they are sent to a Cloud Bigtable node.

The nodes are organized into a Cloud Bigtable cluster, which belongs to a Cloud Bigtable instance, which is a container for the cluster. Each node in the cluster handles a subset of the requests to the cluster.

Adding nodes to a cluster increases the number of simultaneous requests the cluster can handle, as well as the maximum throughput of the entire cluster.
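That scaling point is visible in the client library: resizing a cluster is a one-field update. A hedged sketch with the google-cloud-bigtable Python client; the project, instance, and cluster IDs are placeholders.

```python
from google.cloud import bigtable

# Placeholders: substitute your own project, instance, and cluster IDs.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")
cluster = instance.cluster("my-instance-c1")

cluster.reload()           # fetch the current state, including serve_nodes
cluster.serve_nodes += 2   # more nodes -> more simultaneous requests and throughput
cluster.update()
```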

Reference: https://cloud.google.com/bigtable/docs/overview


Question 116


What is the general recommendation when designing your row keys for a Cloud Bigtable schema?

A. Include multiple time series values within the row key
B. Keep the row key as an 8-bit integer
C. Keep your row key reasonably short
D. Keep your row key as long as the field permits
Suggested answer: C

Explanation:

A general guideline is to keep your row keys reasonably short. Long row keys take up additional memory and storage and increase the time it takes to get responses from the Cloud Bigtable server.
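To make the size difference concrete, a small sketch contrasting a verbose key with a compact one carrying the same information; the field names and values are invented.

```python
# Both keys identify the same (customer, day) pair, but the first one pays
# for its verbosity on every row: more memory, more storage, slower responses.
verbose_key = "customer_identifier=00000000000042/event_date=2024-09-18"
short_key = "cust42#20240918"

print(len(verbose_key), len(short_key))  # the short key is a fraction of the size
```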

Reference: https://cloud.google.com/bigtable/docs/schema-design#row-keys


Question 117


Which of the following statements is NOT true regarding Bigtable access roles?

A. Using IAM roles, you cannot give a user access to only one table in a project, rather than all tables in a project.
B. To give a user access to only one table in a project, grant the user the Bigtable Editor role for that table.
C. You can configure access control only at the project level.
D. To give a user access to only one table in a project, you must configure access through your application.
Suggested answer: B

Explanation:

For Cloud Bigtable, you can configure access control at the project level. For example, you can grant the ability to:

Read from, but not write to, any table within the project.

Read from and write to any table within the project, but not manage instances.

Read from and write to any table within the project, and manage instances.
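As a hedged sketch, those three project-level abilities correspond to the predefined roles roles/bigtable.reader, roles/bigtable.user, and roles/bigtable.admin, which can be granted through the Cloud Resource Manager API; the project ID and member address below are placeholders.

```python
from googleapiclient import discovery

project_id = "my-project"  # placeholder

# Read-modify-write of the project-level IAM policy.
crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()

# roles/bigtable.reader -> read, but not write
# roles/bigtable.user   -> read and write, but not manage instances
# roles/bigtable.admin  -> read, write, and manage instances
policy["bindings"].append(
    {"role": "roles/bigtable.reader", "members": ["user:analyst@example.com"]}
)
crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()
```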

Reference: https://cloud.google.com/bigtable/docs/access-control


Question 118


For the best possible performance, what is the recommended zone for your Compute Engine instance and Cloud Bigtable instance?

A. Have the Compute Engine instance in the furthest zone from the Cloud Bigtable instance.
B. Have the Compute Engine instance and the Cloud Bigtable instance in different zones.
C. Have the Compute Engine instance and the Cloud Bigtable instance in the same zone.
D. Have the Cloud Bigtable instance in the same zone as all of the consumers of your data.
Suggested answer: C

Explanation:

It is recommended to create your Compute Engine instance in the same zone as your Cloud Bigtable instance for the best possible performance. If it is not possible to create an instance in the same zone, you should create it in another zone within the same region. For example, if your Cloud Bigtable instance is located in us-central1-b, you could create your instance in us-central1-f. This change may result in several milliseconds of additional latency for each Cloud Bigtable request.

It is recommended to avoid creating your Compute Engine instance in a different region from your Cloud Bigtable instance, which can add hundreds of milliseconds of latency to each Cloud Bigtable request.
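A minimal sketch of placing the Bigtable cluster in a specific zone with the google-cloud-bigtable Python client; the zone simply needs to match the one your Compute Engine instances run in, and all IDs here are placeholders.

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

ZONE = "us-central1-b"  # placeholder: match this to your Compute Engine zone

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance", display_name="My Instance")

# Place the Bigtable cluster in the same zone as the VMs that will call it.
cluster = instance.cluster(
    "my-instance-c1",
    location_id=ZONE,
    serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)
operation = instance.create(clusters=[cluster])
operation.result(timeout=300)  # wait for the instance to be created
```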

Reference: https://cloud.google.com/bigtable/docs/creating-compute-instance


Question 119


Which row keys are likely to cause a disproportionate number of reads and/or writes on a particular node in a Bigtable cluster (select 2 answers)?

A. A sequential numeric ID
B. A timestamp followed by a stock symbol
C. A non-sequential numeric ID
D. A stock symbol followed by a timestamp
Suggested answer: A, B

Explanation:

...using a timestamp as the first element of a row key can cause a variety of problems.

In brief, when a row key for a time series includes a timestamp, all of your writes will target a single node, fill that node, and then move on to the next node in the cluster, resulting in hotspotting.

Suppose your system assigns a numeric ID to each of your application's users. You might be tempted to use the user's numeric ID as the row key for your table. However, since new users are more likely to be active users, this approach is likely to push most of your traffic to a small number of nodes.

[https://cloud.google.com/bigtable/docs/schema-design]
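A small sketch of the two hotspot-prone key shapes next to the field-promotion fix described above; the symbol and ID values are invented.

```python
import time

symbol, ts = "GOOG", int(time.time())

# Hotspots: sequential IDs and timestamp-first keys concentrate all new
# writes on one node, because Bigtable shards rows by lexicographic key range.
bad_sequential = f"{1000042:012d}"      # e.g. "000001000042"
bad_timestamp_first = f"{ts}#{symbol}"  # all current writes share one prefix

# Better: promote a high-cardinality field ahead of the timestamp so
# concurrent writes spread across many key ranges (and therefore nodes).
good_field_first = f"{symbol}#{ts}"
```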

Reference: https://cloud.google.com/bigtable/docs/schema-design-timeseries#ensure_that_your_row_key_avoids_hotspotting


Question 120


When a Cloud Bigtable node fails, ____ is lost.

A. all data
B. no data
C. the last transaction
D. the time dimension
Suggested answer: B

Explanation:

A Cloud Bigtable table is sharded into blocks of contiguous rows, called tablets, to help balance the workload of queries. Tablets are stored on Colossus, Google's file system, in SSTable format. Each tablet is associated with a specific Cloud Bigtable node.

Data is never stored in Cloud Bigtable nodes themselves; each node has pointers to a set of tablets that are stored on Colossus. As a result:

Rebalancing tablets from one node to another is very fast, because the actual data is not copied.

Cloud Bigtable simply updates the pointers for each node.

Recovery from the failure of a Cloud Bigtable node is very fast, because only metadata needs to be migrated to the replacement node.

When a Cloud Bigtable node fails, no data is lost.

Reference: https://cloud.google.com/bigtable/docs/overview
