ExamGecko

Google Professional Data Engineer Practice Test - Questions Answers, Page 15


You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery table with the table name format LOGS_yyyymmdd. You have been using table wildcard functions to generate daily and monthly reports for all time ranges.

Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue?

A. Convert all daily log tables into date-partitioned tables
B. Convert the sharded tables into a single partitioned table
C. Enable query caching so you can cache data from previous months
D. Create separate views to cover each month, and query from these views

Suggested answer: B
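For context, here is a minimal sketch of why a single partitioned table sidesteps the 1,000-table limit: a wildcard query touches one shard per day in the range, while a partitioned table is always one table and the date filter only prunes partitions. The `project.dataset` names are placeholders, not from the question.

```python
from datetime import date

def sharded_query(start: date, end: date) -> str:
    """Wildcard query over date-sharded LOGS_yyyymmdd tables.
    Every shard matched by the range counts toward BigQuery's
    1,000-table limit per query."""
    return (
        "SELECT * FROM `project.dataset.LOGS_*` "
        f"WHERE _TABLE_SUFFIX BETWEEN '{start:%Y%m%d}' AND '{end:%Y%m%d}'"
    )

def partitioned_query(start: date, end: date) -> str:
    """Equivalent query against a single date-partitioned table:
    one table regardless of the range, with partition pruning."""
    return (
        "SELECT * FROM `project.dataset.LOGS` "
        f"WHERE _PARTITIONDATE BETWEEN '{start:%Y-%m-%d}' AND '{end:%Y-%m-%d}'"
    )

# Nearly three years of daily shards (~1,000 tables) vs. one table:
print(sharded_query(date(2015, 1, 1), date(2017, 12, 31)))
print(partitioned_query(date(2015, 1, 1), date(2017, 12, 31)))
```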

Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again, based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have recommended using Google Cloud Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results into Google BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost?

A. Migrate the workload to Google Cloud Dataflow
B. Use pre-emptible virtual machines (VMs) for the cluster
C. Use a higher-memory node so that the job runs faster
D. Use SSDs on the worker nodes so that the job can run faster

Suggested answer: B
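Rough cost arithmetic shows why preemptible VMs are attractive for a short weekly batch job. The rates below are placeholders for illustration, not current GCP list prices, and a real cluster would keep at least some standard workers.

```python
def weekly_cluster_cost(nodes: int, hours_per_run: float,
                        runs_per_week: int, hourly_rate: float) -> float:
    """Rough compute cost of a Dataproc cluster run on a schedule."""
    return nodes * hours_per_run * runs_per_week * hourly_rate

STANDARD_RATE = 0.10     # $/node-hour, placeholder
PREEMPTIBLE_RATE = 0.02  # placeholder: preemptible VMs cost a fraction

# 15 nodes, ~30 minutes, once a week (from the question):
standard = weekly_cluster_cost(15, 0.5, 1, STANDARD_RATE)
preemptible = weekly_cluster_cost(15, 0.5, 1, PREEMPTIBLE_RATE)
print(f"standard: ${standard:.2f}/week, preemptible: ${preemptible:.2f}/week")
```

A 30-minute fault-tolerant Spark job tolerates preemption well, which is exactly the workload preemptible capacity is priced for.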

Your company receives both batch- and stream-based event data. You want to process the data using Google Cloud Dataflow over a predictable time period.

However, you realize that in some instances data can arrive late or out of order. How should you design your Cloud Dataflow pipeline to handle data that is late or out of order?

A. Set a single global window to capture all the data.
B. Set sliding windows to capture all the lagged data.
C. Use watermarks and timestamps to capture the lagged data.
D. Ensure every datasource type (stream or batch) has a timestamp, and use the timestamps to define the logic for lagged data.

Suggested answer: C
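A toy model of the concept being tested: events are assigned to fixed event-time windows by their timestamps, and the watermark marks how far event time has progressed; an event whose window has already closed behind the watermark is "late" and needs explicit handling (allowed lateness and triggers in Dataflow). This is a simplified sketch, not the Beam/Dataflow API.

```python
from collections import defaultdict

WINDOW = 60  # fixed event-time windows of 60 seconds

def assign_windows(events, watermark):
    """Place (timestamp, payload) events into fixed windows; flag
    events whose window end is already behind the watermark as late."""
    on_time, late = defaultdict(list), []
    for ts, value in events:
        window_start = (ts // WINDOW) * WINDOW
        window_end = window_start + WINDOW
        if window_end <= watermark:
            late.append((ts, value))        # window already closed
        else:
            on_time[window_start].append(value)
    return dict(on_time), late

# Events arriving out of order; watermark says event time has reached 60 s.
events = [(5, "a"), (62, "b"), (10, "c")]
windows, late = assign_windows(events, watermark=60)
print(windows, late)  # {60: ['b']} [(5, 'a'), (10, 'c')]
```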

You have some data, which is shown in the graphic below. The two dimensions are X and Y, and the shade of each dot represents what class it is. You want to classify this data accurately using a linear algorithm.

To do this you need to add a synthetic feature. What should the value of that feature be?

A. X^2+Y^2
B. X^2
C. Y^2
D. cos(X)

Suggested answer: A
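Assuming the graphic (not reproduced here) shows the classic concentric-circles dataset, a quick check that the radius-squared feature X^2 + Y^2 makes the two classes separable by a single linear threshold:

```python
import math

def synthetic_feature(x: float, y: float) -> float:
    """Radius-squared feature: X^2 + Y^2."""
    return x * x + y * y

# Toy concentric-circle data: class 0 on an inner ring (radius 1),
# class 1 on an outer ring (radius 3).
inner = [(math.cos(t), math.sin(t)) for t in range(8)]
outer = [(3 * math.cos(t), 3 * math.sin(t)) for t in range(8)]

# In the new feature the classes sit at ~1 and ~9, so one threshold
# separates them, i.e. they become linearly separable.
threshold = 4.0
assert all(synthetic_feature(x, y) < threshold for x, y in inner)
assert all(synthetic_feature(x, y) > threshold for x, y in outer)
print("linearly separable on x^2 + y^2")
```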

You are integrating one of your internal IT applications and Google BigQuery, so users can query BigQuery from the application's interface. You do not want individual users to authenticate to BigQuery and you do not want to give them access to the dataset. You need to securely access BigQuery from your IT application.

What should you do?

A. Create groups for your users and give those groups access to the dataset
B. Integrate with a single sign-on (SSO) platform, and pass each user's credentials along with the query request
C. Create a service account and grant dataset access to that account. Use the service account's private key to access the dataset
D. Create a dummy user and grant dataset access to that user. Store the username and password for that user in a file on the file system, and use those credentials to access the BigQuery dataset

Suggested answer: C

You set up a streaming data insert into a Redis cluster via a Kafka cluster. Both clusters are running on Compute Engine instances. You need to encrypt data at rest with encryption keys that you can create, rotate, and destroy as needed.

What should you do?

A. Create a dedicated service account, and use encryption at rest to reference your data stored in your Compute Engine cluster instances as part of your API service calls.
B. Create encryption keys in Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.
C. Create encryption keys locally. Upload your encryption keys to Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances.
D. Create encryption keys in Cloud Key Management Service. Reference those keys in your API service calls when accessing the data in your Compute Engine cluster instances.

Suggested answer: D
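The create/rotate/destroy lifecycle the question hinges on is a key-management concept: keys live in the service, and callers reference them by name rather than handling key material themselves. A toy in-memory stand-in (emphatically not the Cloud KMS API) to illustrate the lifecycle:

```python
import secrets

class ToyKeyRing:
    """Conceptual stand-in for a KMS: keys are created, rotated, and
    destroyed centrally; clients reference them by name only."""

    def __init__(self):
        self._versions = {}  # key name -> list of key-version material

    def create(self, name: str) -> None:
        self._versions[name] = [secrets.token_bytes(32)]

    def rotate(self, name: str) -> None:
        # Rotation adds a new primary version; old versions remain
        # available to decrypt existing data.
        self._versions[name].append(secrets.token_bytes(32))

    def destroy(self, name: str) -> None:
        del self._versions[name]

    def primary(self, name: str) -> bytes:
        return self._versions[name][-1]

ring = ToyKeyRing()
ring.create("disk-key")
old = ring.primary("disk-key")
ring.rotate("disk-key")
assert ring.primary("disk-key") != old  # new primary after rotation
ring.destroy("disk-key")
```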

You are developing an application that uses a recommendation engine on Google Cloud. Your solution should display new videos to customers based on past views. Your solution needs to generate labels for the entities in videos that the customer has viewed. Your design must be able to provide very fast filtering suggestions based on data from other customer preferences on several TB of data. What should you do?

A. Build and train a complex classification model with Spark MLlib to generate labels and filter the results. Deploy the models using Cloud Dataproc. Call the model from your application.
B. Build and train a classification model with Spark MLlib to generate labels. Build and train a second classification model with Spark MLlib to filter results to match customer preferences. Deploy the models using Cloud Dataproc. Call the models from your application.
C. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted labels to match the user's viewing history to generate preferences.
D. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud SQL, and join and filter the predicted labels to match the user's viewing history to generate preferences.

Suggested answer: C

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?

A. Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.
B. Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.
C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.
D. Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.

Suggested answer: C
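The "minimal manual intervention" requirement points at autoscaling driven by a backlog signal. The sketch below is a toy version of that control loop, not Dataflow's actual autoscaling algorithm: grow while system lag (how far behind the pipeline is) exceeds a target, shrink once the pipeline is comfortably keeping up.

```python
def autoscale(current_workers: int, system_lag_s: int,
              target_lag_s: int = 60,
              min_workers: int = 1, max_workers: int = 100) -> int:
    """Toy autoscaler: double workers while the backlog exceeds the
    target lag, halve them when lag is well below target."""
    if system_lag_s > target_lag_s:
        return min(current_workers * 2, max_workers)
    if system_lag_s < target_lag_s // 2:
        return max(current_workers // 2, min_workers)
    return current_workers

print(autoscale(4, system_lag_s=300))  # backlog growing -> 8
print(autoscale(4, system_lag_s=10))   # keeping up -> 2
print(autoscale(4, system_lag_s=60))   # at target -> 4
```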

Your infrastructure includes a set of YouTube channels. You have been tasked with creating a process for sending the YouTube channel data to Google Cloud for analysis. You want to design a solution that allows your world-wide marketing teams to perform ANSI SQL and other types of analysis on up-to-date YouTube channel log data. How should you set up the log data transfer into Google Cloud?

A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
B. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Regional bucket as a final destination.
C. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination.
D. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Regional storage bucket as a final destination.

Suggested answer: A

You are designing storage for very large text files for a data pipeline on Google Cloud. You want to support ANSI SQL queries. You also want to support compression and parallel load from the input locations using Google recommended practices. What should you do?

A. Transform text files to compressed Avro using Cloud Dataflow. Use BigQuery for storage and query.
B. Transform text files to compressed Avro using Cloud Dataflow. Use Cloud Storage and BigQuery permanent linked tables for query.
C. Compress text files to gzip using the Grid Computing Tools. Use BigQuery for storage and query.
D. Compress text files to gzip using the Grid Computing Tools. Use Cloud Storage, and then import into Cloud Bigtable for query.

Suggested answer: A
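The format choice matters because block-compressed Avro is splittable, so many workers can load slices of one file in parallel, while a gzip-compressed text file must be decompressed by a single reader. A toy sketch of that difference (the 128 MB block size is an illustrative assumption, not a BigQuery constant):

```python
def parallel_splits(file_format: str, size_mb: int, block_mb: int = 128) -> int:
    """How many workers could load one file in parallel: splittable
    block-compressed formats divide into blocks; a gzip stream is a
    single non-splittable unit."""
    splittable = file_format in {"avro", "parquet", "orc"}
    return max(1, size_mb // block_mb) if splittable else 1

print(parallel_splits("avro", 1024))  # 8 parallel splits
print(parallel_splits("gzip", 1024))  # 1 (single reader)
```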
Total 372 questions