Google Professional Data Engineer Practice Test - Questions Answers, Page 2

You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:

No interaction by the user on the site for 1 hour

Has added more than $30 worth of products to the basket

Has not completed a transaction

You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?

A. Use a fixed-time window with a duration of 60 minutes.
B. Use a sliding time window with a duration of 60 minutes.
C. Use a session window with a gap time duration of 60 minutes.
D. Use a global window with a time based trigger with a delay of 60 minutes.
Suggested answer: C
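
A minimal Apache Beam (Python) sketch of option C: events are keyed per user and windowed into sessions with a 60-minute gap, so a session closes after one hour of inactivity. The sample events, timestamps, and the simplified "over $30" filter are illustrative assumptions; a real pipeline would also check that no transaction was completed.

import apache_beam as beam
from apache_beam.transforms import window

# Hypothetical basket events: (user_id, dollars_added_to_basket, event_time_seconds).
events = [
    ("user1", 35.0, 0),
    ("user1", 5.0, 10 * 60),   # same user 10 minutes later -> same session
    ("user2", 12.0, 0),
]

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create(events)
        | "AddTimestamps" >> beam.Map(
            lambda e: window.TimestampedValue((e[0], e[1]), e[2]))
        # A session window closes after a 60-minute gap with no events for the key,
        # matching the "no interaction for 1 hour" rule.
        | "SessionWindow" >> beam.WindowInto(window.Sessions(60 * 60))
        | "BasketTotalPerUser" >> beam.CombinePerKey(sum)
        | "OverThirtyDollars" >> beam.Filter(lambda kv: kv[1] > 30)
        | beam.Map(print)
    )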

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other's data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)

A. Load data into different partitions.
B. Load data into a different dataset for each client.
C. Put each client's BigQuery dataset into a different table.
D. Restrict a client's dataset to approved users.
E. Only allow a service account to access the datasets.
F. Use the appropriate identity and access management (IAM) roles for each client's users.
Suggested answer: B, D, F
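
A minimal sketch of options B, D, and F with the google-cloud-bigquery Python client: each client gets its own dataset, and only that client's approved users are granted access on it. The project, dataset, and email values are placeholders.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")        # placeholder project

# Option B: one dataset per client.
dataset = client.get_dataset("my-project.client_a")    # placeholder dataset

# Options D and F: restrict the dataset to approved users by adding an
# access entry with an appropriate role for that client's analysts.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="userByEmail",
        entity_id="analyst@client-a.example.com",       # placeholder user
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])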

You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling.

Which Google database service should you use?

A. Cloud SQL
B. BigQuery
C. Cloud Bigtable
D. Cloud Datastore
Suggested answer: A

You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.)

A. There are very few occurrences of mutations relative to normal samples.
B. There are roughly equal occurrences of both normal and mutated samples in the database.
C. You expect future mutations to have different features from the mutated samples in the database.
D. You expect future mutations to have similar features to the mutated samples in the database.
E. You already have labels for which samples are mutated and which are normal in the database.
Suggested answer: A, D

Explanation:

Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for the instances that fit least well with the remainder of the data set.

https://en.wikipedia.org/wiki/Anomaly_detection
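
A minimal scikit-learn sketch of that idea: an anomaly detector is fit on unlabeled feature vectors in which mutated samples are assumed to be rare (option A), and future samples that resemble those outliers are flagged (option D). The synthetic feature matrix and the contamination value are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical, unlabeled tissue-sample feature vectors: mostly normal samples
# plus a few outliers standing in for rare mutations.
rng = np.random.default_rng(0)
normal_samples = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
mutated_samples = rng.normal(loc=5.0, scale=1.0, size=(5, 4))
X = np.vstack([normal_samples, mutated_samples])

# The model never sees labels; it flags the points that fit the data least well.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
predictions = detector.predict(X)   # +1 = looks normal, -1 = anomaly
print("flagged as anomalous:", int((predictions == -1).sum()))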

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. Initially, you design the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design?

A. Re-write the application to load accumulated data every 2 minutes.
B. Convert the streaming insert code to batch load for individual messages.
C. Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.
D. Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.
Suggested answer: D

Explanation:

Streamed rows first land in BigQuery's streaming buffer and are only later committed to managed storage. Queries that run while rows are still in the buffer can miss that in-flight data, which causes the inconsistency described. Waiting until BigQuery has written the data to storage avoids the problem, so the application should wait (for example, roughly twice the observed streaming latency) before running its aggregation queries.
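
A minimal sketch of that waiting step with the google-cloud-bigquery Python client: it polls the table's streaming-buffer statistics and only runs the aggregation once the buffer has drained, falling back to a fixed maximum wait. The table name, poll interval, and maximum wait are illustrative assumptions.

import time
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.social.postings"   # placeholder table

def wait_for_streaming_buffer(max_wait_seconds=120, poll_seconds=10):
    """Wait until the streaming buffer is empty, or until max_wait_seconds."""
    waited = 0
    while waited < max_wait_seconds:
        table = client.get_table(table_id)
        if table.streaming_buffer is None:   # buffer drained: rows committed to storage
            return
        time.sleep(poll_seconds)
        waited += poll_seconds
    # Fallback: proceed after the maximum wait, accepting possible missing rows.

wait_for_streaming_buffer()
results = client.query(f"SELECT COUNT(*) AS postings FROM `{table_id}`").result()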

Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?

A. Use Google Stackdriver Audit Logs to review data access.
B. Get the identity and access management (IAM) policy of each table.
C. Use Stackdriver Monitoring to see the usage of BigQuery query slots.
D. Use the Google Cloud Billing API to see what account the warehouse is being billed to.
Suggested answer: A
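
A minimal sketch of option A with the google-cloud-logging Python client, listing BigQuery data-access audit log entries to see who has been reading which data. The project ID and time window are placeholders, and the filter shown is one common form of the data_access audit log filter rather than the only one.

from google.cloud import logging

client = logging.Client(project="my-project")   # placeholder project

log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" '
    'AND protoPayload.serviceName="bigquery.googleapis.com" '
    'AND timestamp >= "2024-01-01T00:00:00Z"'    # placeholder time window
)

for entry in client.list_entries(filter_=log_filter, page_size=50):
    payload = entry.payload if isinstance(entry.payload, dict) else {}
    principal = payload.get("authenticationInfo", {}).get("principalEmail", "<unknown>")
    print(entry.timestamp, principal)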

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?

A. Create a Google Cloud Dataflow job to process the data.
B. Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.
C. Create a Hadoop cluster on Google Compute Engine that uses persistent disks.
D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.
E. Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.
Suggested answer: D
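
A minimal PySpark sketch of option D: with the Cloud Storage connector (pre-installed on Dataproc), existing Hadoop and Spark jobs can keep their logic and simply read and write gs:// paths instead of HDFS, so the data persists after the cluster is deleted. The bucket and paths are placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migrated-hadoop-job").getOrCreate()

# gs:// paths work anywhere an hdfs:// path would, via the Cloud Storage connector.
logs = spark.read.text("gs://my-bucket/raw/logs/*")                    # placeholder input
errors = logs.filter(logs.value.contains("ERROR"))
errors.write.mode("overwrite").text("gs://my-bucket/output/errors")    # placeholder output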

Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data. Which three machine learning applications can you use? (Choose three.)

A. Supervised learning to determine which transactions are most likely to be fraudulent.
B. Unsupervised learning to determine which transactions are most likely to be fraudulent.
C. Clustering to divide the transactions into N categories based on feature similarity.
D. Supervised learning to predict the location of a transaction.
E. Reinforcement learning to predict the location of a transaction.
F. Unsupervised learning to predict the location of a transaction.
Suggested answer: B, C, D
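
A minimal scikit-learn sketch of option C, clustering transactions by feature similarity. The tiny hand-made feature matrix (encoded transaction type, encoded location, amount) and the choice of two clusters are illustrative assumptions; real categorical fields would need proper encoding.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical numeric features per transaction: [type_code, location_code, amount].
X = np.array([
    [0, 3, 25.0],
    [1, 3, 900.0],
    [0, 7, 12.5],
    [2, 1, 4800.0],
    [0, 3, 30.0],
])

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)   # cluster assignment for each transaction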

Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

A. Put the data into Google Cloud Storage.
B. Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
C. Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
D. Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
Suggested answer: A

You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages?

A. The message body for the sensor event is too large.
B. Your custom endpoint has an out-of-date SSL certificate.
C. The Cloud Pub/Sub topic has too many messages published to it.
D. Your custom endpoint is not acknowledging messages within the acknowledgement deadline.
Suggested answer: D
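
A minimal Flask sketch of a Pub/Sub push endpoint that acknowledges each delivery quickly: the handler hands the event off and returns a success status immediately, since responses that arrive after the acknowledgement deadline cause Pub/Sub to redeliver (duplicate) the message. The route, port, and in-process queue hand-off are illustrative placeholders for the real processing.

import queue
import threading
from flask import Flask, request

app = Flask(__name__)
work_queue = queue.Queue()   # placeholder for real asynchronous processing

def worker():
    while True:
        message = work_queue.get()
        # ... handle the anomalous sensor event here ...
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

@app.route("/pubsub/push", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    # Hand the message off and return 2xx immediately: a success response is how a
    # push endpoint acknowledges, and slow responses past the deadline trigger redelivery.
    work_queue.put(envelope["message"])
    return ("", 204)

if __name__ == "__main__":
    app.run(port=8080)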