Google Professional Data Engineer Practice Test - Questions Answers, Page 3

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?

A. Assign global unique identifiers (GUID) to each data entry.
B. Compute the hash value of each data entry, and compare it with all historical data.
C. Store each data entry as the primary key in a separate database and apply an index.
D. Maintain a database table to store the hash value and other metadata for each data entry.
Suggested answer: D
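
For reference, a minimal sketch of the approach in option D, assuming the payload can be serialized to a single string and hashed with SHA-256. The class name and the in-memory map standing in for the metadata table are illustrative only; a real deployment would back the lookup with a database table.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Sketch of option D: key each entry by a hash of its payload (excluding the
// transmission timestamp, which changes on re-transmission) and keep the hash
// plus metadata in a lookup table.
public class DedupSketch {
    private final Map<String, Long> seen = new HashMap<>(); // hash -> first-seen time

    public boolean isDuplicate(String payload, long transmissionTime) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(payload.getBytes(StandardCharsets.UTF_8));
        String key = Base64.getEncoder().encodeToString(digest);
        if (seen.containsKey(key)) {
            return true;  // re-transmission of an entry already stored
        }
        seen.put(key, transmissionTime);
        return false;
    }
}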

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine.

The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?

A. Run a local version of Jupyter on the laptop.
B. Grant the user access to Google Cloud Shell.
C. Host a visualization tool on a VM on Google Compute Engine.
D. Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.
Suggested answer: B

You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and analyze these very large datasets in real time.

What should you do?

A. Send the data to Google Cloud Datastore and then export to BigQuery.
B. Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.
C. Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.
D. Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.
Suggested answer: B
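
As an illustration of option B, a hedged sketch using the Apache Beam Java SDK: messages are read from a Pub/Sub topic, wrapped into TableRow objects, and streamed into a BigQuery table. The project, topic, table, and field names are placeholders, and a production pipeline would parse the device payload into typed fields instead of storing it raw.

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptor;

public class IotTemperaturePipeline {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    pipeline
        // Ingest device messages published to a Pub/Sub topic.
        .apply("ReadFromPubSub",
            PubsubIO.readStrings().fromTopic("projects/my-project/topics/iot-temperature"))
        // Wrap each raw message into a BigQuery row.
        .apply("ToTableRow",
            MapElements.into(TypeDescriptor.of(TableRow.class))
                .via((String message) -> new TableRow().set("raw_payload", message)))
        .setCoder(TableRowJsonCoder.of())
        // Stream the rows into an existing BigQuery table.
        .apply("WriteToBigQuery",
            BigQueryIO.writeTableRows()
                .to("my-project:warehouse.temperature_readings")
                .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
                .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    pipeline.run();
  }
}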

You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change the column's data type to TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive. What should you do?

A. Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data.
B. Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column DT for each row. Reference the column TS instead of the column DT from now on.
C. Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.
D. Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true.
E. Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
Suggested answer: D
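
For context, the conversion that options B, D, and E all depend on can be expressed with a single cast. Below is a hedged sketch using the BigQuery Java client, assuming DT stores epoch seconds as strings; the project and dataset names are illustrative.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class ClickStreamCast {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Cast the STRING epoch values in DT to TIMESTAMP. If DT held epoch
    // milliseconds instead of seconds, TIMESTAMP_MILLIS would be used.
    String sql =
        "SELECT TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS TS, * EXCEPT (DT) "
            + "FROM `my-project.my_dataset.CLICK_STREAM`";

    QueryJobConfiguration config =
        QueryJobConfiguration.newBuilder(sql).setUseLegacySql(false).build();

    // The same SELECT list could populate a destination table or define a view.
    TableResult result = bigquery.query(config);
    result.iterateAll()
        .forEach(row -> System.out.println(row.get("TS").getTimestampValue()));
  }
}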

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?

A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
B. In the Stackdriver Logging admin interface, enable a log sink export to BigQuery.
C. In the Stackdriver Logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.
D. Using the Stackdriver API, create a project sink with an advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.
Suggested answer: B
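
To make the sink-plus-filter idea in options C and D concrete, here is a hedged sketch with the Cloud Logging Java client: the sink exports only audit-log entries for completed insert (load) jobs against one table to a Pub/Sub topic that the monitoring tool subscribes to. The project, table, topic, and sink names are placeholders, and the filter shown is an approximation of the relevant audit-log fields.

import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Sink;
import com.google.cloud.logging.SinkInfo;

public class BigQueryInsertSink {
  public static void main(String[] args) throws Exception {
    try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {

      // Match only completed BigQuery load/insert jobs whose destination is the
      // table of interest; entries for every other table are filtered out.
      String filter =
          "resource.type=\"bigquery_resource\""
              + " AND protoPayload.methodName=\"jobservice.jobcompleted\""
              + " AND protoPayload.serviceData.jobCompletedEvent.job"
              + ".jobConfiguration.load.destinationTable.tableId=\"my_table\"";

      SinkInfo sinkInfo =
          SinkInfo.newBuilder(
                  "bq-insert-notifications",
                  SinkInfo.Destination.TopicDestination.of("my-project", "bq-insert-topic"))
              .setFilter(filter)
              .build();

      // The monitoring tool subscribes to the Pub/Sub topic the sink exports to.
      Sink sink = logging.create(sinkInfo);
      System.out.println("Created sink: " + sink.getName());
    }
  }
}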

You are working on a sensitive project involving private user data. You have set up a project on Google Cloud Platform to house your work internally. An external consultant is going to assist with coding a complex transformation in a Google Cloud Dataflow pipeline for your project. How should you maintain users' privacy?

A. Grant the consultant the Viewer role on the project.
B. Grant the consultant the Cloud Dataflow Developer role on the project.
C. Create a service account and allow the consultant to log on with it.
D. Create an anonymized sample of the data for the consultant to work with in a different project.
Suggested answer: C

You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy. What can you do?

A. Eliminate features that are highly correlated to the output labels.
B. Combine highly co-dependent features into one representative feature.
C. Instead of feeding in each feature individually, average their values in batches of 3.
D. Remove the features that have null values for more than 50% of the training records.
Suggested answer: B
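
As a rough illustration of how "highly co-dependent" can be quantified before combining features (option B), here is a small self-contained Java example computing the Pearson correlation coefficient between two feature columns; the feature names, values, and the 0.95 threshold are made up for the example.

public class FeatureCorrelation {
  // Pearson correlation coefficient between two equally sized feature columns.
  static double pearson(double[] x, double[] y) {
    int n = x.length;
    double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0, sumY2 = 0;
    for (int i = 0; i < n; i++) {
      sumX += x[i];
      sumY += y[i];
      sumXY += x[i] * y[i];
      sumX2 += x[i] * x[i];
      sumY2 += y[i] * y[i];
    }
    double numerator = n * sumXY - sumX * sumY;
    double denominator = Math.sqrt((n * sumX2 - sumX * sumX) * (n * sumY2 - sumY * sumY));
    return numerator / denominator;
  }

  public static void main(String[] args) {
    double[] humidity = {0.80, 0.65, 0.90, 0.40, 0.75};
    double[] dewPoint = {14.0, 11.5, 15.2, 7.8, 13.1};
    double r = pearson(humidity, dewPoint);
    if (Math.abs(r) > 0.95) {
      System.out.println("Highly co-dependent (r=" + r + "): merge into one feature.");
    } else {
      System.out.println("Keep both features (r=" + r + ").");
    }
  }
}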

Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow.

Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour.

The data scientists have written the following code to read the data for new key features in the logs.

BigQueryIO.Read
    .named("ReadLogData")
    .from("clouddataflow-readonly:samples.log_data")

You want to improve the performance of this data read. What should you do?

A. Specify the TableReference object in the code.
B. Use the .fromQuery operation to read specific fields from the table.
C. Use both the Google BigQuery TableSchema and TableFieldSchema classes.
D. Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.
Suggested answer: D
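
For comparison, a hedged sketch of the .fromQuery operation mentioned in option B, written in the same pre-Beam Cloud Dataflow SDK 1.x style as the snippet above (in the current Beam SDK this would be BigQueryIO.readTableRows().fromQuery(...)). The selected field names are assumptions about the log schema; reading only the needed columns reduces the data scanned relative to reading the whole table.

import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.BigQueryIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class ReadLogFeatures {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Read only the columns the analysis needs instead of the whole table.
    PCollection<TableRow> logFeatures = pipeline.apply(
        BigQueryIO.Read
            .named("ReadLogDataFields")
            .fromQuery("SELECT timestamp, feature_name, feature_value "
                + "FROM [clouddataflow-readonly:samples.log_data]"));

    pipeline.run();
  }
}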

Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?

A. Use a row key of the form <timestamp>.
B. Use a row key of the form <sensorid>.
C. Use a row key of the form <timestamp>#<sensorid>.
D. Use a row key of the form #<sensorid>#<timestamp>.
Suggested answer: A
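
To make the candidate key formats easier to compare, here is a small illustrative Java snippet that assembles each form for one reading; the sensor ID and timestamp values are placeholders. Bigtable sorts rows lexicographically by key, so the leading component determines how reads and writes spread across nodes.

public class RowKeyForms {
  public static void main(String[] args) {
    String sensorId = "sensor-4711";
    long epochMillis = 1672531200000L;
    String timestamp = String.format("%013d", epochMillis); // zero-padded so keys sort correctly

    String timestampOnly = timestamp;                        // form A
    String sensorOnly = sensorId;                            // form B
    String timestampFirst = timestamp + "#" + sensorId;      // form C
    String sensorFirst = "#" + sensorId + "#" + timestamp;   // form D

    System.out.println(timestampOnly);
    System.out.println(sensorOnly);
    System.out.println(timestampFirst);
    System.out.println(sensorFirst);
  }
}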

Your company's customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?

A. Add a node to the MySQL cluster and build an OLAP cube there.
B. Use an ETL tool to load the data from MySQL into Google BigQuery.
C. Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.
D. Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.
Suggested answer: C
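
As an illustration of the load step behind option B, a hedged sketch with the BigQuery Java client, assuming the nightly mysqldump output has already been converted to CSV and staged in Cloud Storage; the bucket, dataset, and table names are illustrative.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class LoadOrdersToBigQuery {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Load the staged CSV export into BigQuery so analytics never touch MySQL.
    TableId destination = TableId.of("analytics_dataset", "orders");
    LoadJobConfiguration loadConfig =
        LoadJobConfiguration.newBuilder(destination, "gs://my-etl-bucket/orders/*.csv")
            .setFormatOptions(FormatOptions.csv())
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
            .build();

    Job job = bigquery.create(JobInfo.of(loadConfig));
    job.waitFor(); // block until the nightly load completes
  }
}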