Google Professional Machine Learning Engineer Practice Test - Questions Answers, Page 19

You need to develop a custom TensorFlow model that will be used for online predictions. The training data is stored in BigQuery. You need to apply instance-level data transformations to the data for model training and serving. You want to use the same preprocessing routine during model training and serving. How should you configure the preprocessing routine?

A. Create a BigQuery script to preprocess the data, and write the result to another BigQuery table.

B. Create a pipeline in Vertex AI Pipelines to read the data from BigQuery and preprocess it using a custom preprocessing component.

C. Create a preprocessing function that reads and transforms the data from BigQuery. Create a Vertex AI custom prediction routine that calls the preprocessing function at serving time.

D. Create an Apache Beam pipeline to read the data from BigQuery and preprocess it by using TensorFlow Transform and Dataflow.
Suggested answer: D

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to ''design, build, and productionalize ML models to solve business challenges using Google Cloud technologies''. TensorFlow Transform [2] is a library for preprocessing data with TensorFlow. It lets you define and execute distributed preprocessing or feature-engineering functions on large datasets, and then export the same functions as a TensorFlow graph for reuse during training and serving, so both phases apply identical transformations. TensorFlow Transform can handle both instance-level and full-pass data transformations. Apache Beam [3] is an open-source framework for building scalable and portable data pipelines that supports both batch and streaming processing. Dataflow [4] is a fully managed service for running Apache Beam pipelines on Google Cloud; it handles provisioning and managing the compute resources as well as optimizing and executing the pipelines. Therefore, option D is the best way to configure the preprocessing routine for the given use case: it reuses the same preprocessing logic during model training and serving and leverages the scalability and performance of Dataflow. The other options are not relevant or optimal for this scenario.

Reference:

Professional ML Engineer Exam Guide

TensorFlow Transform

Apache Beam

Dataflow
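
For illustration, here is a minimal sketch of what such a TensorFlow Transform preprocessing routine could look like. The feature names (`amount`, `category`) and the surrounding Beam pipeline details are assumptions for the example, not part of the question:

```python
import tensorflow as tf
import tensorflow_transform as tft


def preprocessing_fn(inputs):
    """Preprocessing logic defined once and reused for training and serving.

    In the training pipeline this function is applied with
    tft_beam.AnalyzeAndTransformDataset inside an Apache Beam pipeline
    (for example reading from BigQuery with beam.io.ReadFromBigQuery) and
    executed at scale on Dataflow. The resulting transform graph is then
    attached to the served model so serving-time inputs go through exactly
    the same transformations.
    """
    outputs = {}
    # Full-pass (analyzer-based) transformations: need statistics or a
    # vocabulary computed over the whole dataset.
    outputs["amount_scaled"] = tft.scale_to_z_score(inputs["amount"])
    outputs["category_id"] = tft.compute_and_apply_vocabulary(inputs["category"])
    # Instance-level transformation: depends only on the single example.
    outputs["log_amount"] = tf.math.log1p(inputs["amount"])
    return outputs
```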

You are pre-training a large language model on Google Cloud. This model includes custom TensorFlow operations in the training loop. Model training will use a large batch size, and you expect training to take several weeks. You need to configure a training architecture that minimizes both training time and compute costs. What should you do?

A.

B.

C.

D.
Suggested answer: D

Explanation:

According to the official exam guide [1], one of the skills assessed in the exam is to ''design, build, and productionalize ML models to solve business challenges using Google Cloud technologies''. TPUs [2] are Google's custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning workloads. TPUs are designed to handle large batch sizes, high-dimensional data, and complex computations. TPUs can significantly reduce the training time and compute costs of large language models, especially when used with distributed training strategies such as MultiWorkerMirroredStrategy [3]. Therefore, option D is the best way to configure a training architecture that minimizes both training time and compute costs for the given use case. The other options are not relevant or optimal for this scenario.

Reference:

Professional ML Engineer Exam Guide

TPUs

MultiWorkerMirroredStrategy
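
As a rough illustration of the distributed-training pattern referenced above, the sketch below wraps model construction in a tf.distribute strategy scope; the toy architecture, batch size, and dataset are placeholders, not part of the question. (Note that when training on Cloud TPU specifically, the analogous strategy is tf.distribute.TPUStrategy.)

```python
import tensorflow as tf

# On Vertex AI custom training, the TF_CONFIG environment variable is
# populated for each worker, so the strategy can discover its peers.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

# Keep the global batch size large by scaling the per-replica batch size
# with the number of replicas, as the question requires.
per_replica_batch = 128  # placeholder
global_batch = per_replica_batch * strategy.num_replicas_in_sync

with strategy.scope():
    # Placeholder model; a real LLM would be defined here.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(50_000, 256),
        tf.keras.layers.LSTM(512),
        tf.keras.layers.Dense(50_000),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# train_dataset would be a tf.data.Dataset batched with global_batch.
# model.fit(train_dataset, epochs=...)
```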

You are building a TensorFlow text-to-image generative model by using a dataset that contains billions of images with their respective captions. You want to create a low-maintenance, automated workflow that reads the data from a Cloud Storage bucket, collects statistics, splits the dataset into training/validation/test datasets, performs data transformations, trains the model using the training/validation datasets, and validates the model by using the test dataset. What should you do?

A. Use the Apache Airflow SDK to create multiple operators that use Dataflow and Vertex AI services. Deploy the workflow on Cloud Composer.

B. Use the MLflow SDK and deploy it on a Google Kubernetes Engine cluster. Create multiple components that use Dataflow and Vertex AI services.

C. Use the Kubeflow Pipelines (KFP) SDK to create multiple components that use Dataflow and Vertex AI services. Deploy the workflow on Vertex AI Pipelines.

D. Use the TensorFlow Extended (TFX) SDK to create multiple components that use Dataflow and Vertex AI services. Deploy the workflow on Vertex AI Pipelines.
Suggested answer: D

Explanation:

According to the web search results, TensorFlow Extended (TFX) is a platform for building end-to-end machine learning pipelines using TensorFlow [1]. TFX provides a set of components that can be orchestrated using either the TFX SDK or Kubeflow Pipelines. TFX components handle different aspects of the pipeline, such as data ingestion, data validation, data transformation, model training, model evaluation, and model serving. TFX components can also leverage other Google Cloud services, such as Dataflow [2] and Vertex AI [3]. Dataflow is a fully managed service for running Apache Beam pipelines on Google Cloud; it handles the provisioning and management of compute resources as well as the optimization and execution of the pipelines. Vertex AI is a unified platform for machine learning development and deployment that offers services and tools for building, managing, and serving machine learning models. Therefore, option D is the best way to create a low-maintenance, automated workflow for the given use case: the TFX SDK defines and executes the pipeline components, while Dataflow and Vertex AI scale and optimize the pipeline. The other options are not relevant or optimal for this scenario.

Reference:

TensorFlow Extended

Dataflow

Vertex AI
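
A compressed sketch of what such a TFX pipeline could look like is shown below. The Cloud Storage paths, module files, and step counts are illustrative assumptions, and component arguments may vary slightly between TFX versions:

```python
from tfx import v1 as tfx


def create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
                    transform_module: str, trainer_module: str) -> tfx.dsl.Pipeline:
    # Ingest image/caption examples from Cloud Storage and split them.
    example_gen = tfx.components.ImportExampleGen(input_base=data_root)
    # Collect dataset statistics and infer a schema.
    statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs["examples"])
    schema_gen = tfx.components.SchemaGen(statistics=statistics_gen.outputs["statistics"])
    # Data transformations (can run on Dataflow via Beam pipeline args).
    transform = tfx.components.Transform(
        examples=example_gen.outputs["examples"],
        schema=schema_gen.outputs["schema"],
        module_file=transform_module,
    )
    # Train on the transformed training/validation splits.
    trainer = tfx.components.Trainer(
        module_file=trainer_module,
        examples=transform.outputs["transformed_examples"],
        transform_graph=transform.outputs["transform_graph"],
        schema=schema_gen.outputs["schema"],
        train_args=tfx.proto.TrainArgs(num_steps=1000),   # placeholder
        eval_args=tfx.proto.EvalArgs(num_steps=100),      # placeholder
    )
    # Validate the trained model on held-out examples.
    evaluator = tfx.components.Evaluator(
        examples=example_gen.outputs["examples"],
        model=trainer.outputs["model"],
    )
    return tfx.dsl.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=[example_gen, statistics_gen, schema_gen,
                    transform, trainer, evaluator],
    )


# Compile the pipeline definition; the emitted JSON can then be submitted
# to Vertex AI Pipelines (for example with aiplatform.PipelineJob).
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename="pipeline.json",
)
runner.run(create_pipeline(
    pipeline_name="text-to-image",                  # placeholder
    pipeline_root="gs://my-bucket/pipeline-root",   # placeholder
    data_root="gs://my-bucket/data",                # placeholder
    transform_module="gs://my-bucket/transform.py", # placeholder
    trainer_module="gs://my-bucket/trainer.py",     # placeholder
))
```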

You are developing an ML pipeline using Vertex AI Pipelines. You want your pipeline to upload a new version of the XGBoost model to Vertex AI Model Registry and deploy it to Vertex AI Endpoints for online inference. You want to use the simplest approach. What should you do?

A. Use the Vertex AI REST API within a custom component based on a vertex-ai/prediction/xgboost-cpu image.

B. Use the Vertex AI ModelEvaluationOp component to evaluate the model.

C. Use the Vertex AI SDK for Python within a custom component based on a python:3.10 image.

D. Chain the Vertex AI ModelUploadOp and ModelDeployOp components together.
Suggested answer: D

Explanation:

According to the web search results, Vertex AI Pipelines is a serverless orchestrator for running ML pipelines built with either the KFP SDK or TFX [1]. Vertex AI Pipelines provides a set of prebuilt components that can be used to perform common ML tasks, such as training, evaluation, and deployment [2]. ModelUploadOp and ModelDeployOp are two such components: the first uploads a new version of the XGBoost model to Vertex AI Model Registry, and the second deploys it to a Vertex AI endpoint for online inference [3]. Therefore, option D is the simplest approach for the given use case, as it only requires chaining two prebuilt components together. The other options are not relevant or optimal for this scenario.

Reference:

Vertex AI Pipelines

Google Cloud Pipeline Components

Vertex AI ModelUploadOp and ModelDeployOp
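
A minimal sketch of chaining these prebuilt components in a KFP pipeline is shown below, assuming KFP v2 and a recent google-cloud-pipeline-components release. The artifact URI, serving container, display names, and machine type are placeholders, and exact parameter names may differ between library versions:

```python
from kfp import dsl
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from google_cloud_pipeline_components.v1.endpoint import EndpointCreateOp, ModelDeployOp


@dsl.pipeline(name="xgboost-upload-and-deploy")
def pipeline(project: str, location: str = "us-central1"):
    # Import the trained XGBoost artifact from Cloud Storage so it can be
    # uploaded to Vertex AI Model Registry (paths and image are placeholders).
    model_artifact = dsl.importer(
        artifact_uri="gs://my-bucket/xgboost-model/",
        artifact_class=artifact_types.UnmanagedContainerModel,
        metadata={
            "containerSpec": {
                "imageUri": "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"
            }
        },
    )

    # Upload a new model version to Vertex AI Model Registry.
    upload = ModelUploadOp(
        project=project,
        location=location,
        display_name="xgboost-model",
        unmanaged_container_model=model_artifact.outputs["artifact"],
    )

    endpoint = EndpointCreateOp(
        project=project,
        location=location,
        display_name="xgboost-endpoint",
    )

    # Deploy the uploaded version to the endpoint for online inference.
    ModelDeployOp(
        model=upload.outputs["model"],
        endpoint=endpoint.outputs["endpoint"],
        dedicated_resources_machine_type="n1-standard-4",
        dedicated_resources_min_replica_count=1,
        dedicated_resources_max_replica_count=2,
    )
```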

You work for an online retailer. Your company has a few thousand short-lifecycle products. Your company has five years of sales data stored in BigQuery. You have been asked to build a model that will make monthly sales predictions for each product. You want to use a solution that can be implemented quickly with minimal effort. What should you do?

A. Use Prophet on Vertex AI Training to build a custom model.

B. Use Vertex AI Forecast to build a NN-based model.

C. Use BigQuery ML to build a statistical ARIMA_PLUS model.

D. Use TensorFlow on Vertex AI Training to build a custom model.
Suggested answer: C

Explanation:

According to the web search results, BigQuery ML [1] is a service that allows you to create and execute machine learning models in BigQuery using SQL queries. BigQuery ML supports various types of models, such as linear regression, logistic regression, k-means clustering, matrix factorization, deep neural networks, and time-series forecasting [1]. ARIMA_PLUS [2] is the built-in time-series forecasting model in BigQuery ML. It combines an AutoRegressive Integrated Moving Average (ARIMA) model with additional modeling components such as trend and seasonality decomposition, holiday effects, and handling of anomalies and missing values. ARIMA_PLUS can also forecast many time series at once, which fits a catalog of a few thousand products [2]. Therefore, option C is the solution that can be implemented most quickly and with minimal effort for the given use case: you can build and run a forecasting model with SQL queries directly in BigQuery, without moving the data or writing custom code. The other options are not relevant or optimal for this scenario.

Reference:

BigQuery ML

ARIMA_PLUS
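
For illustration, an ARIMA_PLUS model for this scenario could be created and queried roughly as follows (shown here through the BigQuery Python client); the dataset, table, and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes credentials and a default project are configured

# One ARIMA_PLUS model covering every product's monthly time series.
create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.monthly_sales_forecast`   -- hypothetical names
OPTIONS (
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'month',
  time_series_data_col = 'sales',
  time_series_id_col = 'product_id',   -- one series per product
  horizon = 12                         -- forecast 12 months ahead
) AS
SELECT month, sales, product_id
FROM `my_dataset.monthly_sales`
"""
client.query(create_model_sql).result()

# Monthly predictions for each product.
forecast_sql = """
SELECT *
FROM ML.FORECAST(MODEL `my_dataset.monthly_sales_forecast`,
                 STRUCT(12 AS horizon, 0.9 AS confidence_level))
"""
for row in client.query(forecast_sql).result():
    print(row)
```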

You are creating a model training pipeline to predict sentiment scores from text-based product reviews. You want to have control over how the model parameters are tuned, and you will deploy the model to an endpoint after it has been trained. You will use Vertex AI Pipelines to run the pipeline. You need to decide which Google Cloud pipeline components to use. What components should you choose?

A.

B.

C.

D.
Suggested answer: A

Explanation:

According to the web search results, Vertex AI Pipelines is a serverless orchestrator for running ML pipelines built with either the KFP SDK or TFX [1]. Vertex AI Pipelines provides a set of prebuilt components that can be used to perform common ML tasks, such as training, evaluation, and deployment [2]. ModelEvaluationOp and ModelDeployOp are two such components that can be used to evaluate a model and deploy it to an endpoint for online inference [3]. To have control over how the model parameters are tuned, you can use a custom component that calls the Vertex AI HyperparameterTuningJob service [4]. Therefore, option A is the best choice of Google Cloud pipeline components for the given use case, as it combines a custom component for hyperparameter tuning with prebuilt components for model evaluation and deployment. The other options are not relevant or optimal for this scenario.

Reference:

Vertex AI Pipelines

Google Cloud Pipeline Components

Vertex AI ModelEvaluationOp and ModelDeployOp

Vertex AI HyperparameterTuningJob
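
As an illustration of the hyperparameter-tuning part, a custom component could call the Vertex AI SDK for Python roughly as follows; the project, container image, metric name, and parameter ranges are placeholders, not part of the question:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# The training container reports the metric (e.g. via cloudml-hypertune)
# and reads the tuned parameters from command-line arguments.
custom_job = aiplatform.CustomJob(
    display_name="sentiment-trainer",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-8"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/sentiment-trainer"},  # placeholder
    }],
)

tuning_job = aiplatform.HyperparameterTuningJob(
    display_name="sentiment-hp-tuning",
    custom_job=custom_job,
    metric_spec={"val_rmse": "minimize"},   # placeholder metric
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
        "dropout": hpt.DoubleParameterSpec(min=0.1, max=0.5, scale="linear"),
    },
    max_trial_count=20,
    parallel_trial_count=4,
)
tuning_job.run()
```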

Your team frequently creates new ML models and runs experiments. Your team pushes code to a single repository hosted on Cloud Source Repositories. You want to create a continuous integration pipeline that automatically retrains the models whenever there is any modification of the code. What should be your first step to set up the CI pipeline?

A. Configure a Cloud Build trigger with the event set as 'Pull Request'.

B. Configure a Cloud Build trigger with the event set as 'Push to a branch'.

C. Configure a Cloud Function that builds the repository each time there is a code change.

D. Configure a Cloud Function that builds the repository each time a new branch is created.
Suggested answer: B

Explanation:

According to the web search results, Cloud Build [1] is a service that executes your builds on Google Cloud infrastructure. Cloud Build can import source code from Cloud Source Repositories [2], Cloud Storage, GitHub, Bitbucket, or any publicly hosted Git repository. Cloud Build lets you create and manage build triggers, which are automated workflows that run whenever a code change is pushed to your source repository, and you can use such a trigger to automatically retrain your ML models whenever the code is modified. Therefore, option B is the right first step for setting up the CI pipeline: a Cloud Build trigger with the event set to ''Push to a branch'' runs whenever a new commit is pushed to a specific branch of your source repository. The other options are not relevant or optimal for this scenario.

Reference:

Cloud Build

Cloud Source Repositories
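
The equivalent of the console's ''Push to a branch'' event can also be created programmatically. Below is a rough sketch using the Cloud Build Python client; the project, repository, trigger name, and build config filename are placeholders, and the client surface may differ between library versions:

```python
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

# Run the build (whose steps in cloudbuild.yaml retrain the models)
# on every push to the main branch of the Cloud Source Repositories repo.
trigger = cloudbuild_v1.BuildTrigger(
    name="retrain-on-push",                      # placeholder
    description="Retrain ML models on every push to main",
    trigger_template=cloudbuild_v1.RepoSource(
        project_id="my-project",                 # placeholder
        repo_name="ml-experiments",              # placeholder
        branch_name="main",
    ),
    filename="cloudbuild.yaml",                  # build config stored in the repo
)

client.create_build_trigger(project_id="my-project", trigger=trigger)
```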

You have built a custom model that performs several memory-intensive preprocessing tasks before it makes a prediction. You deployed the model to a Vertex AI endpoint and validated that results were received in a reasonable amount of time. After routing user traffic to the endpoint, you discover that the endpoint does not autoscale as expected when receiving multiple requests. What should you do?

A. Use a machine type with more memory.

B. Decrease the number of workers per machine.

C. Increase the CPU utilization target in the autoscaling configurations.

D. Decrease the CPU utilization target in the autoscaling configurations.
Suggested answer: D

Explanation:

According to the web search results, Vertex AI is a unified platform for machine learning development and deployment that offers services and tools for building, managing, and serving machine learning models [1]. Vertex AI allows you to deploy models to endpoints for online prediction and to configure the compute resources and autoscaling options for each deployed model [2]. By default, autoscaling for Vertex AI endpoints is based on CPU utilization across all cores of the machine type you specified; the default target of 60% means 60% on all cores, so for a 4-core machine you would need 240% aggregate utilization to trigger autoscaling [3]. Because this model's preprocessing is memory-intensive rather than CPU-intensive, CPU utilization may never reach that target even when the endpoint is overloaded. Decreasing the CPU utilization target in the autoscaling configuration lowers the threshold for triggering autoscaling, so additional replicas are allocated to handle the prediction requests. Therefore, option D is the best way to solve the problem for the given use case. The other options are not relevant or optimal for this scenario.

Reference:

Vertex AI

Deploy a model to an endpoint

Vertex AI endpoint doesn't scale up / down
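
For reference, the CPU utilization target can be set when (re)deploying the model with the Vertex AI SDK for Python. In this sketch the model resource name, machine type, replica counts, and target value are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/123"     # placeholder resource name
)

model.deploy(
    machine_type="n1-highmem-8",            # memory-heavy preprocessing
    min_replica_count=1,
    max_replica_count=10,
    # Lower target: scale out before CPU saturates, since the workload
    # is memory-bound and CPU utilization stays relatively low.
    autoscaling_target_cpu_utilization=40,  # placeholder value (default is 60)
)
```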

Your company manages an ecommerce website. You developed an ML model that recommends additional products to users in near real time based on items currently in the user's cart. The workflow will include the following processes:

1. The website will send a Pub/Sub message with the relevant data and then receive a message with the prediction from Pub/Sub.

2. Predictions will be stored in BigQuery.

3. The model will be stored in a Cloud Storage bucket and will be updated frequently.

You want to minimize prediction latency and the effort required to update the model. How should you reconfigure the architecture?

A. Write a Cloud Function that loads the model into memory for prediction. Configure the function to be triggered when messages are sent to Pub/Sub.

B. Create a pipeline in Vertex AI Pipelines that performs preprocessing, prediction, and postprocessing. Configure the pipeline to be triggered by a Cloud Function when messages are sent to Pub/Sub.

C. Expose the model as a Vertex AI endpoint. Write a custom DoFn in a Dataflow job that calls the endpoint for prediction.

D. Use the RunInference API with WatchFilePattern in a Dataflow job that wraps around the model and serves predictions.
Suggested answer: D

Explanation:

According to the web search results, the RunInference API [1] is a feature of Apache Beam that lets you run models as part of your pipeline in a way that is optimized for machine learning inference, with support for batching, caching, and model reloading. RunInference can be used with various frameworks, such as TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX, and TensorRT [1]. Dataflow [2] is a fully managed service for running Apache Beam pipelines on Google Cloud; it handles the provisioning and management of compute resources as well as the optimization and execution of the pipelines. Therefore, option D is the best way to reconfigure the architecture for the given use case: a Dataflow job that wraps the model with RunInference and a WatchFilePattern side input minimizes prediction latency and the effort required to update the model, because RunInference automatically reloads the model from the Cloud Storage bucket whenever the model file changes [1]. The other options are not relevant or optimal for this scenario.

Reference:

RunInference API

Dataflow
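
A rough sketch of that Dataflow architecture is shown below. The bucket paths, Pub/Sub topics, message payload, and model handler are placeholders, and the exact handler class depends on the model framework and the Apache Beam version in use:

```python
import json

import apache_beam as beam
import tensorflow as tf
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor
from apache_beam.ml.inference.utils import WatchFilePattern
from apache_beam.options.pipeline_options import PipelineOptions

MODEL_PATTERN = "gs://my-bucket/recommender/*"                    # placeholder
REQUEST_TOPIC = "projects/my-project/topics/cart-requests"        # placeholder
RESPONSE_TOPIC = "projects/my-project/topics/cart-predictions"    # placeholder

model_handler = TFModelHandlerTensor(
    model_uri="gs://my-bucket/recommender/initial"                # placeholder initial model
)

options = PipelineOptions(streaming=True)  # plus the usual Dataflow runner options

with beam.Pipeline(options=options) as p:
    # Emits updated model metadata whenever a new model file matching the
    # pattern lands in Cloud Storage, so RunInference hot-swaps the model.
    model_updates = p | "WatchModel" >> WatchFilePattern(file_pattern=MODEL_PATTERN)

    (
        p
        | "ReadRequests" >> beam.io.ReadFromPubSub(topic=REQUEST_TOPIC)
        # Hypothetical payload: {"item_ids": [...]} encoded as JSON.
        | "ToTensors" >> beam.Map(lambda msg: tf.constant(json.loads(msg)["item_ids"]))
        | "Predict" >> RunInference(model_handler, model_metadata_pcoll=model_updates)
        | "ToMessages" >> beam.Map(
            lambda result: json.dumps(
                {"scores": result.inference.numpy().tolist()}
            ).encode("utf-8")
        )
        | "Publish" >> beam.io.WriteToPubSub(topic=RESPONSE_TOPIC)
    )
```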

You are collaborating on a model prototype with your team. You need to create a Vertex AI Workbench environment for the members of your team and also limit access to other employees in your project. What should you do?

A. 1. Create a new service account and grant it the Notebook Viewer role. 2. Grant the Service Account User role to each team member on the service account. 3. Grant the Vertex AI User role to each team member. 4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account.

B. 1. Grant the Vertex AI User role to the default Compute Engine service account. 2. Grant the Service Account User role to each team member on the default Compute Engine service account. 3. Provision a Vertex AI Workbench user-managed notebook instance that uses the default Compute Engine service account.

C. 1. Create a new service account and grant it the Vertex AI User role. 2. Grant the Service Account User role to each team member on the service account. 3. Grant the Notebook Viewer role to each team member. 4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account.

D. 1. Grant the Vertex AI User role to the primary team member. 2. Grant the Notebook Viewer role to the other team members. 3. Provision a Vertex AI Workbench user-managed notebook instance that uses the primary user's account.
Suggested answer: C

Explanation:

To create a Vertex AI Workbench environment for your team and limit access to other employees in your project, you should follow these steps:

Create a new service account and grant it the Vertex AI User role. This role grants full access to all resources in Vertex AI, including creating and managing notebook instances [1].

Grant the Service Account User role to each team member on the service account. This role allows the team members to impersonate the service account and use its permissions [2].

Grant the Notebook Viewer role to each team member. This role allows the team members to view and connect to the notebook instance, but not to modify or delete it [3].

Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account. This way, the notebook instance will run as the service account and only the team members who have the Service Account User and Notebook Viewer roles will be able to access it.

[1]: Vertex AI access control with IAM | Google Cloud

[2]: Understanding service accounts | Cloud IAM Documentation

[3]: Manage access to a Vertex AI Workbench instance | Google Cloud

[4]: Create and manage Vertex AI Workbench instances | Google Cloud
