Google Professional Machine Learning Engineer Practice Test - Questions Answers, Page 17

You work for a retail company. You have created a Vertex AI forecast model that produces monthly item sales predictions. You want to quickly create a report that will help to explain how the model calculates the predictions. You have one month of recent actual sales data that was not included in the training dataset. How should you generate data for your report?

A. Create a batch prediction job by using the actual sales data. Compare the predictions to the actuals in the report.

B. Create a batch prediction job by using the actual sales data, and configure the job settings to generate feature attributions. Compare the results in the report.

C. Generate counterfactual examples by using the actual sales data. Create a batch prediction job using the actual sales data and the counterfactual examples. Compare the results in the report.

D. Train another model by using the same training dataset as the original, but exclude some columns. Using the actual sales data, create one batch prediction job by using the new model and another one with the original model. Compare the two sets of predictions in the report.
Suggested answer: B

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "explain the predictions of a trained model." Vertex AI provides feature attributions using Shapley values, a cooperative game theory algorithm that assigns credit to each feature in a model for a particular outcome [2]. Feature attributions can help you understand how the model calculates the predictions and debug or optimize the model accordingly. You can use Forecasting with AutoML or Tabular Workflow for Forecasting to generate and query local feature attributions [2]. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Feature attributions for forecasting
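
For illustration, a minimal sketch of option B using the google-cloud-aiplatform SDK; the project, model ID, and Cloud Storage paths are hypothetical placeholders:

```python
from google.cloud import aiplatform

# Hypothetical project, model ID, and bucket paths -- replace with your own.
aiplatform.init(project="my-project", location="us-central1")
model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

# generate_explanation=True attaches Shapley-value feature attributions
# to every row of the batch prediction output.
batch_job = model.batch_predict(
    job_display_name="sales-forecast-with-attributions",
    gcs_source="gs://my-bucket/actual_sales.csv",
    instances_format="csv",
    gcs_destination_prefix="gs://my-bucket/predictions/",
    generate_explanation=True,
)
batch_job.wait()  # attributions land alongside predictions in the destination bucket
```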

Your team has a model deployed to a Vertex AI endpoint. You have created a Vertex AI pipeline that automates the model training process and is triggered by a Cloud Function. You need to prioritize keeping the model up-to-date, but also minimize retraining costs. How should you configure retraining?

A. Configure Pub/Sub to call the Cloud Function when a sufficient amount of new data becomes available.

B. Configure a Cloud Scheduler job that calls the Cloud Function at a predetermined frequency that fits your team's budget.

C. Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud Function when anomalies are detected.

D. Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud Function when feature drift is detected.
Suggested answer: D

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "configure and optimize model monitoring jobs." The Vertex AI Model Monitoring documentation states that "model monitoring helps you detect when your model's performance degrades over time due to changes in the data that your model receives or returns" and that "you can configure model monitoring to send notifications to Pub/Sub when it detects anomalies or drift in your model's predictions" [2]. Therefore, enabling model monitoring on the Vertex AI endpoint and configuring Pub/Sub to call the Cloud Function when feature drift is detected would help you keep the model up-to-date and minimize retraining costs. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Vertex AI Model Monitoring
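
As a sketch of the retraining trigger, the Cloud Function below (assuming a Pub/Sub-triggered 2nd-gen function and a hypothetical compiled pipeline spec) submits the Vertex AI pipeline when a drift alert arrives:

```python
import functions_framework
from google.cloud import aiplatform

# Hypothetical values -- substitute your own project, region, and pipeline spec.
PROJECT = "my-project"
REGION = "us-central1"
PIPELINE_SPEC = "gs://my-bucket/pipelines/retraining_pipeline.json"

@functions_framework.cloud_event
def trigger_retraining(cloud_event):
    """Starts the retraining pipeline when model monitoring publishes a drift alert."""
    aiplatform.init(project=PROJECT, location=REGION)
    job = aiplatform.PipelineJob(
        display_name="drift-triggered-retraining",
        template_path=PIPELINE_SPEC,
        pipeline_root="gs://my-bucket/pipeline-root",
    )
    job.submit()  # asynchronous; the function returns immediately
```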

Your company stores a large number of audio files of phone calls made to your customer call center in an on-premises database. Each audio file is in wav format and is approximately 5 minutes long. You need to analyze these audio files for customer sentiment. You plan to use the Speech-to-Text API. You want to use the most efficient approach. What should you do?

A. 1. Upload the audio files to Cloud Storage. 2. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 3. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions.

B. 1. Upload the audio files to Cloud Storage. 2. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 3. Create a Cloud Function that calls the Natural Language API by using the analyzeSentiment method.

C. 1. Iterate over your local files in Python. 2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data. 3. Call the speech:recognize API endpoint to generate transcriptions. 4. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions.

D. 1. Iterate over your local files in Python. 2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data. 3. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 4. Call the Natural Language API by using the analyzeSentiment method.
Suggested answer: B

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "design, build, and productionalize ML models to solve business challenges using Google Cloud technologies." The Speech-to-Text API [2] allows you to convert audio to text by applying powerful neural network models. The Natural Language API [3] enables you to analyze text and extract information about the sentiment, entities, and syntax. The Cloud Functions [4] service lets you write and deploy code that runs in response to events, such as a Pub/Sub message or an HTTP request. Therefore, option B is the most efficient approach to analyze the audio files for customer sentiment, as it leverages the existing Google Cloud services and avoids unnecessary data processing and model training. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Speech-to-Text API

3. Natural Language API

4. Cloud Functions
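
A minimal sketch of option B's transcription and sentiment steps, assuming the files have already been uploaded to a hypothetical Cloud Storage bucket:

```python
from google.cloud import language_v1, speech

# Hypothetical Cloud Storage URI for one uploaded call recording.
GCS_URI = "gs://my-bucket/calls/call-0001.wav"

# Transcribe asynchronously; ~5-minute files require the long-running endpoint.
speech_client = speech.SpeechClient()
operation = speech_client.long_running_recognize(
    config=speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        language_code="en-US",
    ),
    audio=speech.RecognitionAudio(uri=GCS_URI),
)
response = operation.result(timeout=600)
transcript = " ".join(r.alternatives[0].transcript for r in response.results)

# Score the transcript's sentiment with the Natural Language API.
language_client = language_v1.LanguageServiceClient()
sentiment = language_client.analyze_sentiment(
    document=language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
).document_sentiment
print(sentiment.score, sentiment.magnitude)
```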

You work for a social media company. You want to create a no-code image classification model for an iOS mobile application to identify fashion accessories. You have a labeled dataset in Cloud Storage. You need to configure a training workflow that minimizes cost and serves predictions with the lowest possible latency. What should you do?

A. Train the model by using AutoML, and register the model in Vertex AI Model Registry. Configure your mobile application to send batch requests during prediction.

B. Train the model by using AutoML Edge and export it as a Core ML model. Configure your mobile application to use the .mlmodel file directly.

C. Train the model by using AutoML Edge and export the model as a TFLite model. Configure your mobile application to use the .tflite file directly.

D. Train the model by using AutoML, and expose the model as a Vertex AI endpoint. Configure your mobile application to invoke the endpoint during prediction.
Suggested answer: B

Explanation:

AutoML Edge is a service that allows you to train and deploy custom image classification models for mobile devices [1][2]. It supports exporting models as Core ML files, which are compatible with iOS applications [3].

Using a Core ML model directly on the device eliminates the need for network requests and reduces prediction latency. It also minimizes the cost of serving predictions, as there is no need to pay for cloud resources or network bandwidth.

Option A is incorrect because sending batch requests during prediction does not reduce latency, as the requests still need to be processed by the cloud service. It also incurs more cost than using a local model on the device.

Option C is not optimal because Core ML is Apple's native on-device format for iOS; TFLite primarily targets Android and other platforms that support TensorFlow Lite [4], so a Core ML export integrates more directly with an iOS application.

Option D is incorrect because exposing the model as a Vertex AI endpoint requires network requests and cloud resources, which increase latency and cost. It also does not leverage the benefits of AutoML Edge, which is optimized for mobile devices.
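
As a sketch, exporting an AutoML Edge image model in Core ML format with the Vertex AI SDK might look like this (the model ID and bucket are hypothetical):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Hypothetical AutoML Edge image classification model resource.
model = aiplatform.Model("projects/my-project/locations/us-central1/models/9876543210")

# Export the Edge model in Core ML format to Cloud Storage;
# the resulting .mlmodel file is then bundled into the iOS app.
model.export_model(
    export_format_id="core-ml",
    artifact_destination="gs://my-bucket/exports/",
)
```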

You work for a retail company. You have been asked to develop a model to predict whether a customer will purchase a product on a given day. Your team has processed the company's sales data, and created a table with the following rows:

* Customer_id

* Product_id

* Date

* Days_since_last_purchase (measured in days)

* Average_purchase_frequency (measured in 1/days)

* Purchase (binary class, if customer purchased product on the Date)

You need to interpret your model's results for each individual prediction. What should you do?

A. Create a BigQuery table. Use BigQuery ML to build a boosted tree classifier. Inspect the partition rules of the trees to understand how each prediction flows through the trees.

B. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint and enable feature attributions. Use the 'explain' method to get feature attribution values for each individual prediction.

C. Create a BigQuery table. Use BigQuery ML to build a logistic regression classification model. Use the values of the coefficients of the model to interpret the feature importance, with higher values corresponding to more importance.

D. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint. At each prediction, enable L1 regularization to detect non-informative features.
Suggested answer: B

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "explain the predictions of a trained model." Vertex AI provides feature attributions using Shapley values, a cooperative game theory algorithm that assigns credit to each feature in a model for a particular outcome [2]. Feature attributions can help you understand how the model calculates the predictions and debug or optimize the model accordingly. You can use AutoML for Tabular Data to generate and query local feature attributions [3]. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Feature attributions for classification and regression

3. AutoML for Tabular Data
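
A minimal sketch of option B's explain call, assuming the model is already deployed with explanations enabled and using hypothetical IDs and feature values:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Hypothetical endpoint where the AutoML model was deployed with explanations enabled.
endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/555")

instance = {  # one row, with values encoded as strings for tabular AutoML
    "Customer_id": "C123",
    "Product_id": "P456",
    "Date": "2024-01-15",
    "Days_since_last_purchase": "12",
    "Average_purchase_frequency": "0.05",
}

response = endpoint.explain(instances=[instance])
for explanation in response.explanations:
    for attribution in explanation.attributions:
        # Per-feature Shapley attribution values for this single prediction.
        print(attribution.feature_attributions)
```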

You work for a company that captures live video footage of checkout areas in their retail stores. You need to use the live video footage to build a model to detect the number of customers waiting for service in near real time. You want to implement a solution quickly and with minimal effort. How should you build the model?

A. Use the Vertex AI Vision Occupancy Analytics model.

B. Use the Vertex AI Vision Person/vehicle detector model.

C. Train an AutoML object detection model on an annotated dataset by using Vertex AutoML.

D. Train a Seq2Seq+ object detection model on an annotated dataset by using Vertex AutoML.
Suggested answer: A

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "design, build, and productionalize ML models to solve business challenges using Google Cloud technologies." The Vertex AI Vision Occupancy Analytics model [2] is a specialized pre-built vision model that lets you count people or vehicles given specific inputs you add in video frames. It provides advanced features such as active-zone counting, line-crossing counting, and dwelling detection. This model is suitable for the use case of detecting the number of customers waiting for service in near real time. You can easily create and deploy an occupancy analytics application using Vertex AI Vision [3]. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Occupancy analytics guide

3. Create an occupancy analytics app with BigQuery forecasting

You work as an analyst at a large banking firm. You are developing a robust, scalable ML pipeline to train several regression and classification models. Your primary focus for the pipeline is model interpretability. You want to productionize the pipeline as quickly as possible. What should you do?

A. Use Tabular Workflow for Wide & Deep through Vertex AI Pipelines to jointly train wide linear models and deep neural networks.

B. Use Google Kubernetes Engine to build a custom training pipeline for XGBoost-based models.

C. Use Tabular Workflow for TabNet through Vertex AI Pipelines to train attention-based models.

D. Use Cloud Composer to build the training pipelines for custom deep learning-based models.
Suggested answer: D

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "automate and orchestrate ML pipelines using Cloud Composer." Cloud Composer [2] is a fully managed workflow orchestration service that uses Apache Airflow to create, schedule, monitor, and manage workflows. Cloud Composer allows you to build custom training pipelines for deep learning-based models and integrate them with other Google Cloud services. You can also use Cloud Composer to implement model interpretability techniques, such as feature attributions, explainable AI, or model debugging [3]. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Cloud Composer

3. Model interpretability with Cloud Composer
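
For context, a Cloud Composer pipeline is an Apache Airflow DAG. A minimal sketch with hypothetical training callables might look like this:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical training entry points -- replace with your own modules.
def train_regression_model(**context):
    ...

def train_classification_model(**context):
    ...

with DAG(
    dag_id="interpretable_model_training",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    train_reg = PythonOperator(
        task_id="train_regression",
        python_callable=train_regression_model,
    )
    train_clf = PythonOperator(
        task_id="train_classification",
        python_callable=train_classification_model,
    )
    # Run sequentially here; define parallel tasks if the models are independent.
    train_reg >> train_clf
```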

You developed a Transformer model in TensorFlow to translate text. Your training data includes millions of documents in a Cloud Storage bucket. You plan to use distributed training to reduce training time. You need to configure the training job while minimizing the effort required to modify code and to manage the cluster's configuration. What should you do?

A. Create a Vertex AI custom training job with GPU accelerators for the second worker pool. Use tf.distribute.MultiWorkerMirroredStrategy for distribution.

B. Create a Vertex AI custom distributed training job with Reduction Server. Use N1 high-memory machine type instances for the first and second worker pools, and use N1 high-CPU machine type instances for the third worker pool.

C. Create a training job that uses Cloud TPU VMs. Use tf.distribute.TPUStrategy for distribution.

D. Create a Vertex AI custom training job with a single worker pool of A2 GPU machine type instances. Use tf.distribute.MirroredStrategy for distribution.
Suggested answer: C

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "configure and optimize model training jobs." Cloud TPU VMs [2] are a new way to access Cloud TPUs directly on the TPU host machines, offering a simpler and more flexible user experience. Cloud TPU VMs are optimized for ML model training and can reduce training time and cost. You can use Cloud TPU VMs to train Transformer models in TensorFlow by using tf.distribute.TPUStrategy [3], which handles the distribution of computations across the TPU cores. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Cloud TPU VMs

3. Distributed training with TPUStrategy
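
A minimal sketch of the TPUStrategy setup; build_transformer_model and train_dataset are assumed user-defined:

```python
import tensorflow as tf

# On a Cloud TPU VM, "local" resolves to the TPU attached to this host.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Creating the model inside the strategy scope mirrors its variables across
# TPU cores; the rest of the Keras training loop is unchanged.
with strategy.scope():
    model = build_transformer_model()  # assumed user-defined model factory
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

model.fit(train_dataset, epochs=3)  # train_dataset: an assumed tf.data.Dataset
```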

You are developing a process for training and running your custom model in production. You need to be able to show lineage for your model and predictions. What should you do?

A. 1. Create a Vertex AI managed dataset. 2. Use a Vertex AI training pipeline to train your model. 3. Generate batch predictions in Vertex AI.

B. 1. Use a Vertex AI Pipelines custom training job component to train your model. 2. Generate predictions by using a Vertex AI Pipelines model batch predict component.

C. 1. Upload your dataset to BigQuery. 2. Use a Vertex AI custom training job to train your model. 3. Generate predictions by using Vertex AI SDK custom prediction routines.

D. 1. Use Vertex AI Experiments to train your model. 2. Register your model in Vertex AI Model Registry. 3. Generate batch predictions in Vertex AI.
Suggested answer: D

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "track the lineage of pipeline artifacts." Vertex AI Experiments [2] is a service that allows you to track and compare the results of your model training runs. Vertex AI Experiments automatically logs metadata such as hyperparameters, metrics, and artifacts for each training run. You can use Vertex AI Experiments to train your custom model using TensorFlow, PyTorch, XGBoost, or scikit-learn. Vertex AI Model Registry [3] is a service that allows you to manage your trained models in a central location. You can use Vertex AI Model Registry to register your model, add labels and descriptions, and view the model's lineage graph. The lineage graph shows the artifacts and executions that are part of the model's creation, such as the dataset, the training pipeline, and the evaluation metrics. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Vertex AI Experiments

3. Vertex AI Model Registry
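
A minimal sketch of option D with the Vertex AI SDK; the experiment name, logged values, artifact URI, and serving container are hypothetical:

```python
from google.cloud import aiplatform

# Hypothetical project and experiment names.
aiplatform.init(
    project="my-project",
    location="us-central1",
    experiment="purchase-model-experiment",
)

# Track a training run; params, metrics, and artifacts become lineage metadata.
aiplatform.start_run("run-001")
aiplatform.log_params({"learning_rate": 0.01, "epochs": 10})
# ... train the model here ...
aiplatform.log_metrics({"rmse": 0.42})
aiplatform.end_run()

# Register the trained model so its lineage graph is visible in Model Registry.
model = aiplatform.Model.upload(
    display_name="purchase-model",
    artifact_uri="gs://my-bucket/model/",  # assumed saved-model location
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest"
    ),
)
```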

You work for a hotel and have a dataset that contains customers' written comments scanned from paper-based customer feedback forms, which are stored as PDF files. Every form has the same layout. You need to quickly predict an overall satisfaction score from the customer comments on each form. How should you accomplish this task?

A. Use the Vision API to parse the text from each PDF file. Use the Natural Language API analyzeSentiment feature to infer overall satisfaction scores.

B. Use the Vision API to parse the text from each PDF file. Use the Natural Language API analyzeEntitySentiment feature to infer overall satisfaction scores.

C. Uptrain a Document AI custom extractor to parse the text in the comments section of each PDF file. Use the Natural Language API analyzeSentiment feature to infer overall satisfaction scores.

D. Uptrain a Document AI custom extractor to parse the text in the comments section of each PDF file. Use the Natural Language API analyzeEntitySentiment feature to infer overall satisfaction scores.
Suggested answer: C

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to "design, build, and productionalize ML models to solve business challenges using Google Cloud technologies." Document AI [2] is a document understanding platform that takes unstructured data from documents and transforms it into structured data, making it easier to understand, analyze, and consume. Document AI Workbench [3] allows you to create custom extractors to parse the text in specific sections of your documents. The Natural Language API [4] is a service that provides natural language understanding technologies, such as sentiment analysis, entity analysis, and other text annotations. The analyzeSentiment feature [5] inspects the given text and identifies the prevailing emotional opinion within the text, especially to determine a writer's attitude as positive, negative, or neutral. Therefore, option C is the best way to accomplish the task of predicting an overall satisfaction score from the customer comments on each form. The other options are not relevant or optimal for this scenario.

Reference:

1. Professional ML Engineer Exam Guide

2. Document AI

3. Document AI Workbench

4. Natural Language API

5. Sentiment analysis
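
A minimal sketch of option C, assuming a hypothetical uptrained custom extractor whose comments field is labeled "comments":

```python
from google.cloud import documentai, language_v1

# Hypothetical resource name for an uptrained custom extractor processor.
PROCESSOR_NAME = "projects/my-project/locations/us/processors/abc123"

# Extract the comments section from one scanned feedback form.
docai_client = documentai.DocumentProcessorServiceClient()
with open("feedback_form.pdf", "rb") as f:
    raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

result = docai_client.process_document(
    request=documentai.ProcessRequest(name=PROCESSOR_NAME, raw_document=raw_document)
)
# Assume the custom extractor labels the comments section as "comments".
comments = next(
    e.mention_text for e in result.document.entities if e.type_ == "comments"
)

# Infer an overall satisfaction score from the comments.
language_client = language_v1.LanguageServiceClient()
sentiment = language_client.analyze_sentiment(
    document=language_v1.Document(
        content=comments, type_=language_v1.Document.Type.PLAIN_TEXT
    )
).document_sentiment
print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")
```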
