
Google Professional Data Engineer Practice Test - Questions Answers, Page 14

Question 131

What is the recommended action to take in order to switch between SSD and HDD storage for your Google Cloud Bigtable instance?

A. Create a third instance and sync the data between the two storage types via batch jobs
B. Export the data from the existing instance and import the data into a new instance
C. Run parallel instances where one is HDD and the other is SSD
D. The selection is final and you must continue using the same storage type
Suggested answer: B

Explanation:

When you create a Cloud Bigtable instance and cluster, your choice of SSD or HDD storage for the cluster is permanent. You cannot use the Google Cloud Platform Console to change the type of storage that is used for the cluster.

If you need to convert an existing HDD cluster to SSD, or vice-versa, you can export the data from the existing instance and import the data into a new instance. Alternatively, you can write a Cloud Dataflow or Hadoop MapReduce job that copies the data from one instance to another.

Reference: https://cloud.google.com/bigtable/docs/choosing-ssd-hdd
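
For illustration, the "copy the data from one instance to another" approach mentioned above could look roughly like the following single-process Python sketch using the google-cloud-bigtable client. The project, instance, and table IDs are placeholders; a real migration would normally use the export/import flow or a Cloud Dataflow job rather than a single script.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)

# Placeholder IDs: an existing HDD-backed instance and a new SSD-backed instance.
source_table = client.instance("hdd-instance").table("my-table")
dest_table = client.instance("ssd-instance").table("my-table")

batch = []
for row in source_table.read_rows():              # stream every row from the source table
    new_row = dest_table.direct_row(row.row_key)
    for family, columns in row.cells.items():
        for qualifier, cells in columns.items():
            for cell in cells:                    # preserve cell values and timestamps
                new_row.set_cell(family, qualifier, cell.value, timestamp=cell.timestamp)
    batch.append(new_row)
    if len(batch) >= 500:                         # flush in batches to bound memory use
        dest_table.mutate_rows(batch)
        batch = []

if batch:
    dest_table.mutate_rows(batch)
```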

Topic 6, Main Questions Set C

Question 132

You are training a spam classifier. You notice that you are overfitting the training data. Which three actions can you take to resolve this problem? (Choose three.)

A. Get more training examples
B. Reduce the number of training examples
C. Use a smaller set of features
D. Use a larger set of features
E. Increase the regularization parameters
F. Decrease the regularization parameters
Suggested answer: A, D, F
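
For context on what "a smaller set of features" and "the regularization parameters" mean in practice, here is a minimal scikit-learn sketch on synthetic data. The dataset, feature counts, and parameter values are all hypothetical; in scikit-learn's LogisticRegression, a smaller C means stronger regularization.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a spam dataset: 2,000 "emails" with 1,000 numeric features.
X, y = make_classification(n_samples=2000, n_features=1000, n_informative=50, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    SelectKBest(f_classif, k=100),                           # use a smaller set of features
    LogisticRegression(penalty="l2", C=0.1, max_iter=1000),  # smaller C = stronger L2 regularization
)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
```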

Question 133

You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery.

How should you securely run this workload?

A. Restrict the Google Cloud Storage bucket so only you can see the files
B. Grant the Project Owner role to a service account, and run the job with it
C. Use a service account with the ability to read the batch files and to write to BigQuery
D. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery
Suggested answer: B
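
As a rough sketch of what running the workload under a dedicated, least-privilege service account looks like from Python (the key file path, bucket, dataset, and role choices here are assumptions; on Dataproc the cluster would normally run as the service account instead of loading a key file):

```python
from google.cloud import bigquery, storage
from google.oauth2 import service_account

# Placeholder key for a service account granted only read access on the bucket
# (e.g. roles/storage.objectViewer) and write access on the target dataset
# (e.g. roles/bigquery.dataEditor).
creds = service_account.Credentials.from_service_account_file("pipeline-sa.json")
project = "my-project"

storage_client = storage.Client(project=project, credentials=creds)
bq_client = bigquery.Client(project=project, credentials=creds)

# Read access: list tonight's batch files.
uris = [f"gs://nightly-batches/{blob.name}"
        for blob in storage_client.list_blobs("nightly-batches", prefix="2024-09-18/")]

# Write access: load the processed results into BigQuery.
job = bq_client.load_table_from_uri(
    uris,
    "my-project.analytics.nightly_results",
    job_config=bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.CSV, autodetect=True),
)
job.result()  # block until the load job finishes
```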

Question 134

You are using Google BigQuery as your data warehouse. Your users report that the following simple query is running very slowly, no matter when they run the query:

SELECT country, state, city FROM [myproject:mydataset.mytable] GROUP BY country

You check the query plan for the query and see the following output in the Read section of Stage 1:

[Image: BigQuery query plan output for the Read section of Stage 1]

What is the most likely cause of the delay for this query?

A. Users are running too many concurrent queries in the system
B. The [myproject:mydataset.mytable] table has too many partitions
C. Either the state or the city columns in the [myproject:mydataset.mytable] table have too many NULL values
D. Most rows in the [myproject:mydataset.mytable] table have the same value in the country column, causing data skew
Suggested answer: A
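
Since the question hinges on reading the query plan, the same per-stage statistics can be pulled programmatically with the BigQuery Python client; a minimal sketch follows. The table name comes from the question, and the query is grouped on all three columns so that it is valid standard SQL.

```python
from google.cloud import bigquery

client = bigquery.Client()

job = client.query("""
    SELECT country, state, city
    FROM `myproject.mydataset.mytable`
    GROUP BY country, state, city
""")
job.result()  # wait for completion so the plan statistics are populated

for stage in job.query_plan:  # one QueryPlanEntry per execution stage
    print(stage.name, stage.status, stage.records_read, stage.records_written)
```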

Question 135

Your globally distributed auction application allows users to bid on items. Occasionally, users place identical bids at nearly identical times, and different application servers process those bids. Each bid event contains the item, amount, user, and timestamp. You want to collate those bid events into a single location in real time to determine which user bid first. What should you do?

A. Create a file on a shared file system and have the application servers write all bid events to that file. Process the file with Apache Hadoop to identify which user bid first.
B. Have each application server write the bid events to Cloud Pub/Sub as they occur. Push the events from Cloud Pub/Sub to a custom endpoint that writes the bid event information into Cloud SQL.
C. Set up a MySQL database for each application server to write bid events into. Periodically query each of those distributed MySQL databases and update a master MySQL database with bid event information.
D. Have each application server write the bid events to Google Cloud Pub/Sub as they occur. Use a pull subscription to pull the bid events using Google Cloud Dataflow. Give the bid for each item to the user in the bid event that is processed first.
Suggested answer: C
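
Both Pub/Sub-based options begin with each application server publishing its bid events as they occur; a minimal sketch of that producer side with the Python client is shown below (the project, topic, and field names are placeholders).

```python
import json
import time

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "bid-events")  # placeholder project/topic


def publish_bid(item_id: str, user_id: str, amount: float) -> str:
    """Publish one bid event; every application server calls this as bids arrive."""
    event = {
        "item": item_id,
        "user": user_id,
        "amount": amount,
        "timestamp": time.time(),  # event time, used downstream to decide who bid first
    }
    future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
    return future.result()  # message ID once Pub/Sub has accepted the event


publish_bid("item-42", "user-7", 19.99)
```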

Question 136

Your organization has been collecting and analyzing data in Google BigQuery for 6 months. The majority of the data analyzed is placed in a time-partitioned table named events_partitioned. To reduce the cost of queries, your organization created a view called events, which queries only the last 14 days of data. The view is written in legacy SQL. Next month, existing applications will be connecting to BigQuery to read the events data via an ODBC connection. You need to ensure the applications can connect. Which two actions should you take? (Choose two.)

A. Create a new view over events using standard SQL
B. Create a new partitioned table using a standard SQL query
C. Create a new view over events_partitioned using standard SQL
D. Create a service account for the ODBC connection to use for authentication
E. Create a Google Cloud Identity and Access Management (Cloud IAM) role for the ODBC connection and shared "events"
Suggested answer: A, E
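
For reference, creating a standard SQL view over the partitioned table with the BigQuery Python client could look like the sketch below. The project, dataset, and view names plus the 14-day partition filter are assumptions based on the question.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

view = bigquery.Table("my-project.mydataset.events_sql")  # placeholder view name
view.view_query = """
    SELECT *
    FROM `my-project.mydataset.events_partitioned`
    WHERE _PARTITIONTIME >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)
"""
# view_query defaults to standard SQL (use_legacy_sql=False), which ODBC drivers expect.
client.create_table(view)
```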

Question 137

You have enabled the free integration between Firebase Analytics and Google BigQuery. Firebase now automatically creates a new table daily in BigQuery in the format app_events_YYYYMMDD. You want to query all of the tables for the past 30 days in legacy SQL. What should you do?

A. Use the TABLE_DATE_RANGE function
B. Use the WHERE _PARTITIONTIME pseudo column
C. Use WHERE date BETWEEN YYYY-MM-DD AND YYYY-MM-DD
D. Use SELECT IF.(date >= YYYY-MM-DD AND date <= YYYY-MM-DD)
Suggested answer: A

Explanation:

Reference: https://cloud.google.com/blog/products/gcp/using-bigquery-and-firebase-analytics-to-understand-your-mobile-app
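
A minimal legacy SQL sketch of TABLE_DATE_RANGE over the daily app_events_YYYYMMDD tables, run here through the Python client with use_legacy_sql=True (the dataset name and selected column are assumptions):

```python
from google.cloud import bigquery

client = bigquery.Client()

legacy_sql = """
    SELECT event_name, COUNT(*) AS event_count
    FROM TABLE_DATE_RANGE([mydataset.app_events_],
                          DATE_ADD(CURRENT_TIMESTAMP(), -30, 'DAY'),
                          CURRENT_TIMESTAMP())
    GROUP BY event_name
"""
job = client.query(legacy_sql, job_config=bigquery.QueryJobConfig(use_legacy_sql=True))
for row in job:  # iterating the job waits for and returns the query results
    print(row.event_name, row.event_count)
```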

Question 138

Your company is currently setting up data pipelines for their campaign. For all the Google Cloud Pub/Sub streaming data, one of the important business requirements is to be able to periodically identify the inputs and their timings during their campaign. Engineers have decided to use windowing and transformation in Google Cloud Dataflow for this purpose. However, when testing this feature, they find that the Cloud Dataflow job fails for all the streaming inserts. What is the most likely cause of this problem?

A. They have not assigned the timestamp, which causes the job to fail
B. They have not set the triggers to accommodate the data coming in late, which causes the job to fail
C. They have not applied a global windowing function, which causes the job to fail when the pipeline is created
D. They have not applied a non-global windowing function, which causes the job to fail when the pipeline is created
Suggested answer: C
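
To make the windowing discussion concrete, here is a minimal Apache Beam (Python SDK) streaming sketch that attaches event timestamps and applies a fixed, non-global window before aggregating; the subscription path and field names are placeholders.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)  # streaming mode for the Pub/Sub source

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/campaign-events")
        | "Parse" >> beam.Map(json.loads)
        # Attach an event-time timestamp so windowing groups inputs by when they occurred.
        | "Timestamp" >> beam.Map(lambda e: window.TimestampedValue(e, e["ts"]))
        # A non-global window; grouping unbounded data in the global window needs explicit triggers.
        | "Window" >> beam.WindowInto(window.FixedWindows(5 * 60))
        | "KeyByInput" >> beam.Map(lambda e: e["input_id"])
        | "CountPerWindow" >> beam.combiners.Count.PerElement()
        | "Print" >> beam.Map(print)
    )
```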

Question 139

You architect a system to analyze seismic data. Your extract, transform, and load (ETL) process runs as a series of MapReduce jobs on an Apache Hadoop cluster. The ETL process takes days to process a data set because some steps are computationally expensive. Then you discover that a sensor calibration step has been omitted. How should you change your ETL process to carry out sensor calibration systematically in the future?

A. Modify the transform MapReduce jobs to apply sensor calibration before they do anything else.
B. Introduce a new MapReduce job to apply sensor calibration to raw data, and ensure all other MapReduce jobs are chained after this.
C. Add sensor calibration data to the output of the ETL process, and document that all users need to apply sensor calibration themselves.
D. Develop an algorithm through simulation to predict variance of data output from the last MapReduce job based on calibration factors, and apply the correction to all data.
Suggested answer: A

Question 140

An online retailer has built their current application on Google App Engine. A new initiative at the company mandates that they extend their application to allow their customers to transact directly via the application.

They need to manage their shopping transactions and analyze combined data from multiple datasets using a business intelligence (BI) tool. They want to use only a single database for this purpose.

Which Google Cloud database should they choose?

A. BigQuery
B. Cloud SQL
C. Cloud BigTable
D. Cloud Datastore
Suggested answer: C

Explanation:

Reference: https://cloud.google.com/solutions/business-intelligence/
