Google Professional Machine Learning Engineer Practice Test - Questions Answers, Page 10

You need to build an ML model for a social media application to predict whether a user's submitted profile photo meets the requirements. The application will inform the user if the picture meets the requirements. How should you build a model to ensure that the application does not falsely accept a non-compliant picture?

A. Use AutoML to optimize the model's recall in order to minimize false negatives.
B. Use AutoML to optimize the model's F1 score in order to balance the accuracy of false positives and false negatives.
C. Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that meet the profile photo requirements.
D. Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that do not meet the profile photo requirements.
Suggested answer: A

Explanation:

Recall is the ratio of true positives to the sum of true positives and false negatives; it measures how well the model identifies all the relevant cases. In this scenario, the relevant (positive) cases are the pictures that do not meet the profile photo requirements, so a false negative is a non-compliant picture that the model incorrectly predicts as meeting the requirements. By using AutoML to optimize the model's recall, the model is more likely to catch and reject a non-compliant picture and inform the user accordingly.

[AutoML Vision] is a service that allows you to train custom ML models for image classification and object detection tasks. You can use AutoML to optimize your model for different metrics, such as recall, precision, or F1 score.

[Recall] is one of the evaluation metrics for ML models. It is defined as TP / (TP + FN), where TP is the number of true positives and FN is the number of false negatives. Recall measures how well the model can identify all the relevant cases. A high recall means that the model has a low rate of false negatives.
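
To make the metric concrete, here is a small sketch (illustrative counts only, not from the question) that treats a non-compliant photo as the positive class:

def recall(tp, fn):
    # recall = TP / (TP + FN): the share of non-compliant photos the model catches
    return tp / (tp + fn)

# Suppose the model flags 95 of 100 non-compliant photos and misses 5; each miss
# is a false negative, i.e., a non-compliant photo the application would accept.
print(recall(tp=95, fn=5))  # 0.95 -- higher recall means fewer wrong acceptances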

You lead a data science team at a large international corporation. Most of the models your team trains are large-scale models using high-level TensorFlow APIs on AI Platform with GPUs. Your team usually takes a few weeks or months to iterate on a new version of a model. You were recently asked to review your team's spending. How should you reduce your Google Cloud compute costs without impacting the model's performance?

A. Use AI Platform to run distributed training jobs with checkpoints.
B. Use AI Platform to run distributed training jobs without checkpoints.
C. Migrate to training with Kubeflow on Google Kubernetes Engine, and use preemptible VMs with checkpoints.
D. Migrate to training with Kubeflow on Google Kubernetes Engine, and use preemptible VMs without checkpoints.
Suggested answer: C

Explanation:

Option A is incorrect because using AI Platform to run distributed training jobs with checkpoints does not reduce the compute costs, but rather increases them by using more resources and storing the checkpoints.

Option B is incorrect because using AI Platform to run distributed training jobs without checkpoints may reduce the compute costs, but it also risks losing the progress of the training if the job fails or is interrupted.

Option C is correct because migrating to training with Kubeflow on Google Kubernetes Engine, and using preemptible VMs with checkpoints can reduce the compute costs significantly by using cheaper and more scalable resources, while also preserving the state of the training with checkpoints.

Option D is incorrect because using preemptible VMs without checkpoints may reduce the compute costs, but it also risks losing the training progress if the VMs are preempted.

Kubeflow on Google Cloud

Using preemptible VMs and GPUs

Saving and loading models
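
As an illustration of the checkpointing half of option C, the sketch below (a minimal example assuming a tf.keras model; the Cloud Storage path is a placeholder) saves training state so a job on preemptible VMs can resume after a preemption instead of starting over:

import tensorflow as tf

checkpoint_dir = "gs://my-bucket/checkpoints"  # placeholder: must outlive the preemptible VM

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

train_ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((1024, 10)), tf.random.normal((1024, 1)))).batch(32)

# BackupAndRestore periodically writes the training state and restores it on
# restart, so a preempted job continues from the last completed epoch.
backup = tf.keras.callbacks.BackupAndRestore(backup_dir=checkpoint_dir)
model.fit(train_ds, epochs=20, callbacks=[backup])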

You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an "Out of Memory" error. What should you do?

A. Use batch prediction mode instead of online mode.
B. Send the request again with a smaller batch of instances.
C. Use base64 to encode your data before using it for prediction.
D. Apply for a quota increase for the number of prediction requests.
Suggested answer: B

Explanation:

Option A is incorrect because using batch prediction mode instead of online mode does not solve the "Out of Memory" error, but rather changes the latency and throughput of the prediction service. Batch prediction mode is suitable for large-scale, asynchronous, and non-urgent predictions, while online prediction mode is suitable for low-latency, synchronous, and real-time predictions [1].

Option B is correct because sending the request again with a smaller batch of instances can reduce the memory consumption of the prediction service and avoid the "Out of Memory" error. The batch size is the number of instances that are processed together in one request. A smaller batch size means less data to load into memory at once [2].

Option C is incorrect because using base64 to encode your data before using it for prediction does not reduce the memory consumption of the prediction service, but rather increases it. Base64 encoding is a way of representing binary data as ASCII characters, which increases the size of the data by about 33% [3]. Base64 encoding is only required for certain data types, such as images and audio, that cannot be represented as JSON or CSV [4].

Option D is incorrect because applying for a quota increase for the number of prediction requests does not solve the "Out of Memory" error, but rather increases the number of requests that can be sent to the prediction service per day. Quotas are limits on the usage of Google Cloud resources, such as CPU, memory, disk, and network [5]. Quotas do not affect the performance of the prediction service, but rather the availability and cost of the service.

Choosing between online and batch prediction

Online prediction input data

Base64 encoding

Preparing data for prediction

Quotas and limits
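
A minimal sketch of option B, assuming the google-cloud-aiplatform SDK; the endpoint resource name and chunk size are placeholder assumptions. Splitting a large request into several smaller ones keeps each prediction call's memory footprint low:

from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    "projects/PROJECT_ID/locations/us-central1/endpoints/ENDPOINT_ID")  # placeholder

def predict_in_chunks(instances, chunk_size=32):
    # Send several small online requests instead of one oversized request.
    predictions = []
    for start in range(0, len(instances), chunk_size):
        response = endpoint.predict(instances=instances[start:start + chunk_size])
        predictions.extend(response.predictions)
    return predictions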

You work at a subscription-based company. You have trained an ensemble of trees and neural networks to predict customer churn, which is the likelihood that customers will not renew their yearly subscription. The average prediction is a 15% churn rate, but for a particular customer the model predicts that they are 70% likely to churn. The customer has a product usage history of 30%, is located in New York City, and became a customer in 1997. You need to explain the difference between the actual prediction, a 70% churn rate, and the average prediction. You want to use Vertex Explainable AI. What should you do?

A. Train local surrogate models to explain individual predictions.
B. Configure sampled Shapley explanations on Vertex Explainable AI.
C. Configure integrated gradients explanations on Vertex Explainable AI.
D. Measure the effect of each feature as the weight of the feature multiplied by the feature value.
Suggested answer: B

Explanation:

Option A is incorrect because training local surrogate models to explain individual predictions is not a feature of Vertex Explainable AI, but rather a general technique for interpreting black-box models. Local surrogate models are simpler models that approximate the behavior of the original model around a specific input [1].

Option B is correct because configuring sampled Shapley explanations on Vertex Explainable AI is a way to explain the difference between the actual prediction and the average prediction for a given input. Sampled Shapley explanations are based on the Shapley value, a game-theoretic concept that measures how much each feature contributes to the prediction [2]. Vertex Explainable AI supports sampled Shapley explanations for tabular models, including non-differentiable ensembles such as this combination of trees and neural networks [3].

Option C is incorrect because configuring integrated gradients explanations on Vertex Explainable AI is not suitable here. Integrated gradients explanations compute the gradients of the prediction with respect to the input features along a path from a baseline input to the actual input [4], so they require a differentiable model; they cannot be applied to an ensemble that includes decision trees [3].

Option D is incorrect because measuring the effect of each feature as the weight of the feature multiplied by the feature value is not a valid way to explain the difference between the actual prediction and the average prediction for a given input. This method assumes that the model is linear and additive, which is not the case for an ensemble of trees and neural networks. Moreover, this method does not account for the interactions between features or the non-linearity of the model [5].

Local surrogate models

Shapley value

Vertex Explainable AI overview

Integrated gradients

Feature importance
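
To show the idea behind sampled Shapley (a conceptual sketch, not the Vertex AI implementation), the code below approximates each feature's contribution to the gap between the actual prediction and a baseline such as the average customer; predict_fn, x, and baseline are hypothetical placeholders:

import numpy as np

def sampled_shapley(predict_fn, x, baseline, n_samples=100, seed=0):
    # Approximate Shapley attributions by averaging marginal contributions
    # over randomly sampled feature orderings.
    rng = np.random.default_rng(seed)
    n = len(x)
    phi = np.zeros(n)
    for _ in range(n_samples):
        order = rng.permutation(n)
        current = np.array(baseline, dtype=float)
        prev = predict_fn(current)
        for i in order:
            current[i] = x[i]              # switch feature i to its actual value
            new = predict_fn(current)
            phi[i] += new - prev           # marginal contribution of feature i
            prev = new
    return phi / n_samples                 # attributions sum to f(x) - f(baseline)

For the churn example, the attributions would sum to roughly 0.70 - 0.15 = 0.55, apportioned across features such as product usage history, location, and tenure.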

You need to execute a batch prediction on 100 million records in a BigQuery table with a custom TensorFlow DNN regressor model, and then store the predicted results in a BigQuery table. You want to minimize the effort required to build this inference pipeline. What should you do?

A. Import the TensorFlow model with BigQuery ML, and run the ml.predict function.
B. Use the TensorFlow BigQuery reader to load the data, and use the BigQuery API to write the results to BigQuery.
C. Create a Dataflow pipeline to convert the data in BigQuery to TFRecords. Run a batch inference on Vertex AI Prediction, and write the results to BigQuery.
D. Load the TensorFlow SavedModel in a Dataflow pipeline. Use the BigQuery I/O connector with a custom function to perform the inference within the pipeline, and write the results to BigQuery.
Suggested answer: A

Explanation:

Option A is correct because importing the TensorFlow model with BigQuery ML and running the ml.predict function is the easiest way to execute a batch prediction on a large BigQuery table with a custom TensorFlow model and store the predicted results in another BigQuery table. BigQuery ML allows you to import TensorFlow models that are stored in Cloud Storage and use them for prediction with SQL queries [1]. The ml.predict function returns a table with the predicted values, which can be saved to another BigQuery table [2].

Option B is incorrect because using the TensorFlow BigQuery reader to load the data, and using the BigQuery API to write the results to BigQuery, requires more effort to build the inference pipeline than option A. The TensorFlow BigQuery reader is a way to read data from BigQuery into TensorFlow datasets, which can be used for training or prediction [3]. However, this option also requires writing code to load the TensorFlow model, run the prediction, and use the BigQuery API to write the results back to BigQuery [4].

Option C is incorrect because creating a Dataflow pipeline to convert the data in BigQuery to TFRecords, running a batch inference on Vertex AI Prediction, and writing the results to BigQuery requires more effort to build the inference pipeline than option A. Dataflow is a service for creating and running data processing pipelines, such as ETL (extract, transform, load) or batch processing [5]. Vertex AI Prediction is a service for deploying and serving ML models for online or batch prediction. However, this option also requires writing code to create the Dataflow pipeline, convert the data to TFRecords, run the batch inference, and write the results to BigQuery.

Option D is incorrect because loading the TensorFlow SavedModel in a Dataflow pipeline, using the BigQuery I/O connector with a custom function to perform the inference within the pipeline, and writing the results to BigQuery requires more effort to build the inference pipeline than option A. The BigQuery I/O connector is a way to read and write data from BigQuery within a Dataflow pipeline. However, this option also requires writing code to load the TensorFlow SavedModel, create the custom function for inference, and write the results to BigQuery.

Importing models into BigQuery ML

Using imported models for prediction

TensorFlow BigQuery reader

BigQuery API

Dataflow overview

[Vertex AI Prediction overview]

[Batch prediction with Dataflow]

[BigQuery I/O connector]

[Using TensorFlow models in Dataflow]
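
A minimal sketch of option A, assuming the google-cloud-bigquery client; the dataset, table, and Cloud Storage paths are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# Import the exported TensorFlow SavedModel from Cloud Storage into BigQuery ML.
client.query("""
    CREATE OR REPLACE MODEL `mydataset.dnn_regressor`
    OPTIONS (MODEL_TYPE='TENSORFLOW',
             MODEL_PATH='gs://my-bucket/exported_model/*')
""").result()

# Score all rows with ML.PREDICT and materialize the results in a new table.
client.query("""
    CREATE OR REPLACE TABLE `mydataset.predictions` AS
    SELECT *
    FROM ML.PREDICT(MODEL `mydataset.dnn_regressor`,
                    (SELECT * FROM `mydataset.source_table`))
""").result()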

You are creating a deep neural network classification model using a dataset with categorical input values. Certain columns have a cardinality greater than 10,000 unique values. How should you encode these categorical values as input into the model?

A. Convert each categorical value into an integer value.
B. Convert the categorical string data to one-hot hash buckets.
C. Map the categorical variables into a vector of boolean values.
D. Convert each categorical value into a run-length encoded string.
Suggested answer: B

Explanation:

Option A is incorrect because converting each categorical value into an integer value is not a good way to encode categorical values with high cardinality. This method implies an ordinal relationship between the categories, which may not be true. For example, assigning the values 1, 2, and 3 to the categories "red", "green", and "blue" does not make sense, as there is no inherent order among these colors [1].

Option B is correct because converting the categorical string data to one-hot hash buckets is a suitable way to encode categorical values with high cardinality. This method uses a hash function to map each category to one of a fixed number of buckets, represented as a binary vector in which only one element is 1 and the rest are 0. This keeps the encoding sparse while reducing the dimensionality of the input space from the full cardinality to the number of buckets, at the cost of occasional hash collisions [2].

Option C is incorrect because mapping the categorical variables into a vector of boolean values is not a valid way to encode categorical values with high cardinality. With more than 10,000 categories, a boolean vector either becomes a very wide, sparse representation (effectively one-hot over 10,000+ positions) or an arbitrary bit pattern that imposes spurious similarity between unrelated categories; neither scales well or captures the meaning of the categories [3].

Option D is incorrect because converting each categorical value into a run-length encoded string is not a useful way to encode categorical values with high cardinality. Run-length encoding compresses a string by replacing consecutive repeated characters with the character and the number of repetitions; for example, "AAAABBBCC" becomes "A4B3C2". This does not reduce the dimensionality of the input space and does not preserve the semantic meaning of the categories [4].

Encoding categorical features

One-hot hash buckets

Boolean vector

Run-length encoding
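
A minimal sketch of option B using Keras preprocessing layers (assuming TensorFlow 2.x); the number of hash bins and the layer sizes are tunable assumptions, not values from the question:

import tensorflow as tf

num_bins = 5000  # far fewer dimensions than the >10,000 raw categories

inputs = tf.keras.Input(shape=(1,), dtype=tf.string, name="category")
hashed = tf.keras.layers.Hashing(num_bins=num_bins)(inputs)          # hash each string to a bucket id
one_hot = tf.keras.layers.CategoryEncoding(
    num_tokens=num_bins, output_mode="one_hot")(hashed)              # one-hot over the hash buckets
hidden = tf.keras.layers.Dense(64, activation="relu")(one_hot)
output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)
model = tf.keras.Model(inputs, output)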

You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network. What should you do?

A. Create a one-hot encoding of words, and feed the encodings into your model.
B. Identify word embeddings from a pre-trained model, and use the embeddings in your model.
C. Sort the words by frequency of occurrence, and use the frequencies as the encodings in your model.
D. Assign a numerical value to each word from 1 to 100,000 and feed the values as inputs in your model.
Suggested answer: B

Explanation:

Option A is incorrect because creating a one-hot encoding of words and feeding the encodings into your model is not an efficient way to preprocess the words individually for a natural language model. One-hot encoding represents categorical variables as binary vectors in which each element corresponds to a category and only one element is 1 while the rest are 0 [1]. However, this method is not suitable for high-dimensional and sparse data, such as words in a large vocabulary, because it requires a lot of memory and computation and does not capture the semantic similarity or relationship between words [2].

Option B is correct because identifying word embeddings from a pre-trained model and using the embeddings in your model is a good way to preprocess the words individually for a natural language model. Word embeddings are low-dimensional, dense vectors that represent the meaning and usage of words in a continuous space [3]. Word embeddings can be learned from a large corpus of text using neural networks such as word2vec, GloVe, or BERT [4]. Using pre-trained word embeddings can save time and resources and improve the performance of the natural language model, especially when the training data is limited or noisy [5].

Option C is incorrect because sorting the words by frequency of occurrence and using the frequencies as the encodings in your model is not a meaningful way to preprocess the words individually for a natural language model. This method implies that the frequency of a word is a good indicator of its importance or relevance, which may not be true. For example, the word "the" is very frequent but not very informative, while the word "unicorn" is rare but more distinctive. Moreover, this method does not capture the semantic similarity or relationship between words, and may introduce noise or bias into the model.

Option D is incorrect because assigning a numerical value to each word from 1 to 100,000 and feeding the values as inputs in your model is not a valid way to preprocess the words individually for a natural language model. This method implies an ordinal relationship between the words, which may not be true. For example, assigning the values 1, 2, and 3 to the words "apple", "banana", and "orange" does not make sense, as there is no inherent order among these fruits. Moreover, this method does not capture the semantic similarity or relationship between words, and may confuse the model with irrelevant or misleading information.

One-hot encoding

Word embeddings

Word embedding

Pre-trained word embeddings

Using pre-trained word embeddings in a Keras model

[Term frequency]

[Term frequency-inverse document frequency]

[Ordinal variable]

[Encoding categorical features]
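
A minimal sketch of option B, assuming TensorFlow/Keras; vocab_size matches the question, while embedding_dim, num_classes, and the zero-initialized matrix are placeholders (in practice the matrix rows are filled with pre-trained vectors such as GloVe or word2vec, indexed by your tokenizer's word IDs):

import numpy as np
import tensorflow as tf

vocab_size, embedding_dim, num_classes = 100_000, 100, 20
embedding_matrix = np.zeros((vocab_size, embedding_dim), dtype="float32")  # fill with pre-trained vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embedding_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False),                     # keep the pre-trained vectors frozen
    tf.keras.layers.LSTM(128),                # recurrent layer over the word vectors
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])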

Your data science team has requested a system that supports scheduled model retraining, Docker containers, and a service that supports autoscaling and monitoring for online prediction requests. Which platform components should you choose for this system?

A. Vertex AI Pipelines and App Engine
B. Vertex AI Pipelines, Vertex AI Prediction, and Vertex AI Model Monitoring
C. Cloud Composer, BigQuery ML, and Vertex AI Prediction
D. Cloud Composer, Vertex AI Training with custom containers, and App Engine
Suggested answer: B

Explanation:

Option A is incorrect because Vertex AI Pipelines and App Engine do not meet all the requirements of the system. Vertex AI Pipelines is a service that allows you to create, run, and manage ML workflows using TensorFlow Extended (TFX) components or custom components [1]. App Engine is a service that allows you to build and deploy scalable web applications using standard or flexible environments [2]. However, App Engine does not support Docker containers in the standard environment and does not provide a dedicated service for online prediction and monitoring of ML models [3].

Option B is correct because Vertex AI Pipelines, Vertex AI Prediction, and Vertex AI Model Monitoring meet all the requirements of the system. Vertex AI Pipelines runs containerized pipeline steps and can be triggered on a schedule, covering the scheduled retraining and Docker requirements. Vertex AI Prediction is a service that allows you to deploy and serve ML models for online or batch prediction, with support for autoscaling and custom containers [4]. Vertex AI Model Monitoring is a service that allows you to monitor the performance and fairness of your deployed models and get alerts for any issues or anomalies [5].

Option C is incorrect because Cloud Composer, BigQuery ML, and Vertex AI Prediction do not meet all the requirements of the system. Cloud Composer is a service that allows you to create, schedule, and manage workflows using Apache Airflow. BigQuery ML is a service that allows you to create and use ML models within BigQuery using SQL queries. However, BigQuery ML does not support custom containers, and Vertex AI Prediction does not support scheduled model retraining or model monitoring.

Option D is incorrect because Cloud Composer, Vertex AI Training with custom containers, and App Engine do not meet all the requirements of the system. Vertex AI Training is a service that allows you to train ML models using built-in algorithms or custom containers. However, Vertex AI Training does not support online prediction or model monitoring, and App Engine does not support Docker containers in the standard environment or provide online prediction and monitoring of ML models [3].

Vertex AI Pipelines overview

App Engine overview

Choosing an App Engine environment

Vertex AI Prediction overview

Vertex AI Model Monitoring overview

[Cloud Composer overview]

[BigQuery ML overview]

[BigQuery ML limitations]

[Vertex AI Training overview]
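
As a sketch of how option B could fit together (assuming the Kubeflow Pipelines SDK v2 and the google-cloud-aiplatform SDK; image, bucket, and project names are placeholders), a containerized training step is compiled into a pipeline template and submitted to Vertex AI Pipelines, which can also run the same template on a recurring schedule:

from kfp import dsl, compiler
from google.cloud import aiplatform

@dsl.container_component
def train_model():
    # Docker-based retraining step; the trainer image is a placeholder.
    return dsl.ContainerSpec(image="gcr.io/my-project/trainer:latest",
                             command=["python", "train.py"])

@dsl.pipeline(name="scheduled-retraining")
def retraining_pipeline():
    train_model()

compiler.Compiler().compile(retraining_pipeline, "retraining_pipeline.json")

job = aiplatform.PipelineJob(
    display_name="retraining",
    template_path="retraining_pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root")  # placeholder bucket
job.submit()  # the same template can be triggered on a schedule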

You are profiling the performance of your TensorFlow model training time and notice a performance issue caused by inefficiencies in the input data pipeline for a single 5 terabyte CSV file dataset on Cloud Storage. You need to optimize the input pipeline performance. Which action should you try first to increase the efficiency of your pipeline?

A. Preprocess the input CSV file into a TFRecord file.
B. Randomly select a 10 gigabyte subset of the data to train your model.
C. Split into multiple CSV files and use a parallel interleave transformation.
D. Set the reshuffle_each_iteration parameter to true in the tf.data.Dataset.shuffle method.
Suggested answer: A

Explanation:

The TFRecord format is a recommended way to store large amounts of data efficiently and improve the performance of the data input pipeline [1, 2, 3]. TFRecord is a binary format that can be compressed and serialized, which reduces the I/O overhead and memory footprint compared with parsing CSV text, and the tf.data API provides tools to create and read TFRecord files easily [1].

The other options are not as effective as option A. Option B would reduce the amount of data available for training and might affect the model's accuracy. Option C would parallelize reads across files, but each element would still require CSV text parsing, so it is not the first optimization to try. Option D only affects the order of the data elements, not the speed of reading them.
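
A minimal sketch of option A, assuming TensorFlow 2.x and a purely numeric CSV; file paths, column names, and batch size are placeholders:

import csv
import tensorflow as tf

column_names = ["feature_a", "feature_b", "label"]  # placeholder schema

def csv_to_tfrecord(csv_path, tfrecord_path):
    # Serialize each CSV row as a tf.train.Example in a binary TFRecord file.
    with open(csv_path) as f, tf.io.TFRecordWriter(tfrecord_path) as writer:
        for row in csv.DictReader(f):
            feature = {
                name: tf.train.Feature(
                    float_list=tf.train.FloatList(value=[float(row[name])]))
                for name in column_names
            }
            example = tf.train.Example(features=tf.train.Features(feature=feature))
            writer.write(example.SerializeToString())

# At training time the binary records are read without per-row text parsing.
feature_spec = {name: tf.io.FixedLenFeature([], tf.float32) for name in column_names}

dataset = (
    tf.data.TFRecordDataset(tf.io.gfile.glob("gs://my-bucket/data/*.tfrecord"),
                            num_parallel_reads=tf.data.AUTOTUNE)
    .map(lambda rec: tf.io.parse_single_example(rec, feature_spec),
         num_parallel_calls=tf.data.AUTOTUNE)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)
)

Writing several TFRecord shards rather than one large file also lets tf.data read them in parallel, which complements the format conversion.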

You need to design an architecture that serves asynchronous predictions to determine whether a particular mission-critical machine part will fail. Your system collects data from multiple sensors from the machine. You want to build a model that will predict a failure in the next N minutes, given the average of each sensor's data from the past 12 hours. How should you design the architecture?

A. 1. HTTP requests are sent by the sensors to your ML model, which is deployed as a microservice and exposes a REST API for prediction. 2. Your application queries a Vertex AI endpoint where you deployed your model. 3. Responses are received by the caller application as soon as the model produces the prediction.
B. 1. Events are sent by the sensors to Pub/Sub, consumed in real time, and processed by a Dataflow stream processing pipeline. 2. The pipeline invokes the model for prediction and sends the predictions to another Pub/Sub topic. 3. Pub/Sub messages containing predictions are then consumed by a downstream system for monitoring.
C. 1. Export your data to Cloud Storage using Dataflow. 2. Submit a Vertex AI batch prediction job that uses your trained model in Cloud Storage to perform scoring on the preprocessed data. 3. Export the batch prediction job outputs from Cloud Storage and import them into Cloud SQL.
D. 1. Export the data to Cloud Storage using the BigQuery command-line tool. 2. Submit a Vertex AI batch prediction job that uses your trained model in Cloud Storage to perform scoring on the preprocessed data. 3. Export the batch prediction job outputs from Cloud Storage and import them into BigQuery.
Suggested answer: B

Explanation:

The question asks for a design that serves asynchronous predictions to determine whether a machine part will fail. This means that the predictions do not need to be returned immediately to the sensors, but can be processed continuously and sent to a downstream system for monitoring. Option B is the only one that uses a streaming data pipeline with Pub/Sub and Dataflow, which can handle real-time data ingestion, processing, and prediction. Option B also invokes the model for prediction, which is required by the question. The other options either use synchronous predictions (option A) or batch predictions (options C and D), which do not provide continuous, near-real-time failure predictions.
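
A simplified Apache Beam sketch of option B; topic names are placeholders and call_model() is a hypothetical stand-in for invoking the trained model (for example, a loaded SavedModel or a Vertex AI endpoint). For brevity it averages each sensor independently rather than assembling the full 12-hour feature vector the model expects:

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import SlidingWindows

def call_model(avg_value):
    # Hypothetical helper: return a failure probability for the averaged reading.
    return 0.0  # placeholder

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    (p
     | "ReadSensorEvents" >> beam.io.ReadFromPubSub(topic="projects/p/topics/sensor-events")
     | "Parse" >> beam.Map(json.loads)
     | "KeyBySensor" >> beam.Map(lambda e: (e["sensor_id"], e["value"]))
     | "Past12Hours" >> beam.WindowInto(SlidingWindows(size=12 * 3600, period=60))
     | "Average" >> beam.combiners.Mean.PerKey()
     | "Predict" >> beam.Map(lambda kv: {"sensor_id": kv[0],
                                         "failure_probability": call_model(kv[1])})
     | "Encode" >> beam.Map(lambda d: json.dumps(d).encode("utf-8"))
     | "PublishPredictions" >> beam.io.WriteToPubSub(
           topic="projects/p/topics/failure-predictions"))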
