Google Professional Machine Learning Engineer Practice Test - Questions Answers, Page 25

You work at a gaming startup that has several terabytes of structured data in Cloud Storage. This data includes gameplay time data, user metadata, and game metadata. You want to build a model that recommends new games to users, using the approach that requires the least amount of coding. What should you do?

A. Load the data in BigQuery. Use BigQuery ML to train an Autoencoder model.

B. Load the data in BigQuery. Use BigQuery ML to train a matrix factorization model.

C. Read data to a Vertex AI Workbench notebook. Use TensorFlow to train a two-tower model.

D. Read data to a Vertex AI Workbench notebook. Use TensorFlow to train a matrix factorization model.
Suggested answer: B

Explanation:

BigQuery is a serverless data warehouse that allows you to perform SQL queries on large-scale data. BigQuery ML is a feature of BigQuery that enables you to create and execute machine learning models using standard SQL queries. You can use BigQuery ML to train a matrix factorization model, a common technique for recommender systems. Matrix factorization models learn the latent factors that represent the preferences of users and the characteristics of items, and use them to predict the ratings or interactions between users and items. You can use the CREATE MODEL statement to create a matrix factorization model in BigQuery ML, specifying 'matrix_factorization' as the model type, and then use the ML.RECOMMEND function to generate recommendations for new games from the trained model. This solution requires the least amount of coding, because you only need to write SQL queries to train and use the model.

Reference: see the official BigQuery and BigQuery ML documentation linked below.
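To make this concrete, here is a minimal sketch of the two SQL statements involved, submitted through the BigQuery Python client. The dataset, table, and column names are hypothetical, and gameplay time is treated as implicit feedback:

from google.cloud import bigquery

client = bigquery.Client()

# Train a matrix factorization model; gameplay time serves as implicit feedback.
client.query("""
CREATE OR REPLACE MODEL `games_ds.game_recommender`
OPTIONS(
  model_type = 'matrix_factorization',
  feedback_type = 'implicit',
  user_col = 'user_id',
  item_col = 'game_id',
  rating_col = 'gameplay_hours')
AS SELECT user_id, game_id, gameplay_hours FROM `games_ds.gameplay`
""").result()

# Generate per-user game recommendations from the trained model.
rows = client.query(
    "SELECT * FROM ML.RECOMMEND(MODEL `games_ds.game_recommender`)").result()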

BigQuery ML | Google Cloud

Using matrix factorization | BigQuery ML

ML.RECOMMEND function | BigQuery ML

You are developing a model to help your company create more targeted online advertising campaigns. You need to create a dataset that you will use to train the model. You want to avoid creating or reinforcing unfair bias in the model. What should you do?

Choose 2 answers

A. Include a comprehensive set of demographic features.

B. Include only the demographic groups that most frequently interact with advertisements.

C. Collect a random sample of production traffic to build the training dataset.

D. Collect a stratified sample of production traffic to build the training dataset.

E. Conduct fairness tests across sensitive categories and demographics on the trained model.
Suggested answer: C, E

Explanation:

To avoid creating or reinforcing unfair bias in the model, you should collect a representative sample of production traffic to build the training dataset, and conduct fairness tests across sensitive categories and demographics on the trained model. A representative sample reflects the true distribution of the population and does not over- or under-represent any group. A random sample is a simple way to obtain one, because it gives every data point an equal chance of being selected. A stratified sample is another way to obtain a representative sample, because it gives every subgroup proportional representation; however, it requires prior knowledge of the subgroups and their sizes, which may not be available or easy to obtain. A random sample is therefore the more feasible option in this case.

A fairness test measures and evaluates the potential bias and discrimination of the model across categories and demographics such as age, gender, and race. It can help you identify and mitigate unfair outcomes or impacts of the model, and ensure that the model treats all groups fairly and equitably. Fairness tests can be conducted with various methods and tools, such as confusion matrices, ROC curves, and fairness indicators.

Reference: see the official documentation on data sampling and fairness testing linked below.
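As one hedged illustration of such a test (a hypothetical helper, not a specific library API), you can slice standard confusion-matrix metrics by demographic group and compare the rates:

import numpy as np
from sklearn.metrics import confusion_matrix

def per_group_rates(y_true, y_pred, groups):
    # Compute true-positive and false-positive rates per demographic group.
    # Inputs are numpy arrays; assumes each group contains both classes.
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        tn, fp, fn, tp = confusion_matrix(
            y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
        rates[g] = {"tpr": tp / (tp + fn), "fpr": fp / (fp + tn)}
    return rates

# Large gaps in TPR or FPR between groups signal potential unfair bias.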

Sampling data | BigQuery

Fairness Indicators | TensorFlow

What-if Tool | TensorFlow

You are developing an ML model in a Vertex AI Workbench notebook. You want to track artifacts and compare models during experimentation using different approaches. You need to rapidly and easily transition successful experiments to production as you iterate on your model implementation. What should you do?

A. 1. Initialize the Vertex SDK with the name of your experiment. Log parameters and metrics for each experiment, and attach dataset and model artifacts as inputs and outputs to each execution. 2. After a successful experiment, create a Vertex AI pipeline.

B. 1. Initialize the Vertex SDK with the name of your experiment. Log parameters and metrics for each experiment, save your dataset to a Cloud Storage bucket, and upload the models to Vertex AI Model Registry. 2. After a successful experiment, create a Vertex AI pipeline.

C. 1. Create a Vertex AI pipeline with parameters you want to track as arguments to your PipelineJob. Use the Metrics, Model, and Dataset artifact types from the Kubeflow Pipelines DSL as the inputs and outputs of the components in your pipeline. 2. Associate the pipeline with your experiment when you submit the job.

D. 1. Create a Vertex AI pipeline. Use the Dataset and Model artifact types from the Kubeflow Pipelines DSL as the inputs and outputs of the components in your pipeline. 2. In your training component, use the Vertex AI SDK to create an experiment run. Configure the log_params and log_metrics functions to track parameters and metrics of your experiment.
Suggested answer: A

Explanation:

Vertex AI is a unified platform for building and managing machine learning solutions on Google Cloud. It provides services and tools for different stages of the machine learning lifecycle, such as data preparation, model training, deployment, monitoring, and experimentation. Vertex AI Workbench is an integrated development environment (IDE) that allows you to create and run Jupyter notebooks on Google Cloud, where you can develop your ML model in Python using libraries such as TensorFlow, PyTorch, or scikit-learn.

You can use the Vertex SDK, the Python client library for Vertex AI, to track artifacts and compare models during experimentation. The aiplatform.init function initializes the SDK with the name of your experiment, and the aiplatform.start_run and aiplatform.end_run functions open and close an experiment run. Within a run, the aiplatform.log_params and aiplatform.log_metrics functions log the parameters and metrics, and you can attach dataset and model artifacts as inputs and outputs of each execution so that lineage is recorded. These functions record and store the metadata and artifacts of your experiments, which you can then compare in the Vertex AI Experiments UI.

After a successful experiment, you can create a Vertex AI pipeline, which automates and orchestrates your ML workflow. The aiplatform.PipelineJob class creates a pipeline job from your pipeline definition; training can run as a pipeline component (for example via a custom container training job), the trained model can be deployed with aiplatform.Model.deploy, and the deployed model can be watched with Vertex AI Model Monitoring. By creating a Vertex AI pipeline, you can rapidly and easily transition successful experiments to production, and reuse and share your ML workflows. This solution requires minimal changes to your code and leverages Vertex AI services to streamline your ML development process.

Reference: see the official documentation on Vertex AI, Vertex AI Workbench, the Vertex SDK, and Vertex AI Pipelines linked below.
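A minimal sketch of the experiment-tracking flow, assuming hypothetical project, experiment, and run names:

from google.cloud import aiplatform

# Initialize the SDK against a named experiment (names are placeholders).
aiplatform.init(project="my-project", location="us-central1",
                experiment="recsys-experiments")

aiplatform.start_run("run-001")
aiplatform.log_params({"learning_rate": 0.01, "embedding_dim": 64})
# ... train and evaluate the model here ...
aiplatform.log_metrics({"val_rmse": 0.87})
aiplatform.end_run()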

Vertex AI | Google Cloud

Vertex AI Workbench | Google Cloud

Vertex SDK for Python | Google Cloud

Vertex AI pipelines | Google Cloud

You recently created a new Google Cloud project. After testing that you can submit a Vertex AI pipeline job from Cloud Shell, you want to use a Vertex AI Workbench user-managed notebook instance to run your code from that instance. You created the instance and ran the code, but this time the job fails with an insufficient permissions error. What should you do?

A. Ensure that the Workbench instance that you created is in the same region as the Vertex AI Pipelines resources you will use.

B. Ensure that the Vertex AI Workbench instance is on the same subnetwork as the Vertex AI Pipelines resources that you will use.

C. Ensure that the Vertex AI Workbench instance is assigned the Identity and Access Management (IAM) Vertex AI User role.

D. Ensure that the Vertex AI Workbench instance is assigned the Identity and Access Management (IAM) Notebooks Runner role.
Suggested answer: C

Explanation:

Vertex AI Workbench is an integrated development environment (IDE) that allows you to create and run Jupyter notebooks on Google Cloud. Vertex AI Pipelines is a service that allows you to create and manage machine learning workflows using Vertex AI components. To submit a Vertex AI pipeline job from a Vertex AI Workbench instance, you need the appropriate permissions to access the Vertex AI resources. The Identity and Access Management (IAM) Vertex AI User role is a predefined role that grants the minimum permissions required to use Vertex AI services, such as creating and deploying models, endpoints, and pipelines. By assigning the Vertex AI User role to the Vertex AI Workbench instance, you ensure that the instance has sufficient permissions to submit a Vertex AI pipeline job. You can assign the role to the instance by using the Cloud Console, the gcloud command-line tool, or the Cloud IAM API.

Reference: see the official documentation on Vertex AI Workbench, Vertex AI Pipelines, and IAM linked below.
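For example, assuming the instance runs as a dedicated service account (the project ID and service account email below are placeholders), you can grant the role from the command line:

gcloud projects add-iam-policy-binding my-project --member='serviceAccount:notebook-sa@my-project.iam.gserviceaccount.com' --role='roles/aiplatform.user'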

Vertex AI Workbench | Google Cloud

Vertex AI Pipelines | Google Cloud

Vertex AI roles | Google Cloud

Granting, changing, and revoking access to resources | Google Cloud

You work for a semiconductor manufacturing company. You need to create a real-time application that automates the quality control process. High-definition images of each semiconductor are taken at the end of the assembly line in real time. The photos are uploaded to a Cloud Storage bucket along with tabular data that includes each semiconductor's batch number, serial number, dimensions, and weight. You need to configure model training and serving while maximizing model accuracy. What should you do?

A. Use Vertex AI Data Labeling Service to label the images, and train an AutoML image classification model. Deploy the model, and configure Pub/Sub to publish a message when an image is categorized into the failing class.

B. Use Vertex AI Data Labeling Service to label the images, and train an AutoML image classification model. Schedule a daily batch prediction job that publishes a Pub/Sub message when the job completes.

C. Convert the images into an embedding representation. Import this data into BigQuery, and train a BigQuery ML K-means clustering model with two clusters. Deploy the model, and configure Pub/Sub to publish a message when a semiconductor's data is categorized into the failing cluster.

D. Import the tabular data into BigQuery. Use Vertex AI Data Labeling Service to label the data, and train an AutoML tabular classification model. Deploy the model, and configure Pub/Sub to publish a message when a semiconductor's data is categorized into the failing class.
Suggested answer: A

Explanation:

Vertex AI is a unified platform for building and managing machine learning solutions on Google Cloud. Vertex AI Data Labeling Service allows you to create and manage human-labeled datasets for machine learning; you can use it to label the images of semiconductors with binary labels such as "pass" or "fail" based on the quality criteria. Vertex AI AutoML Image Classification then lets you create and train a custom image classification model on the labeled images without writing any code, optimizing the model for accuracy. You can deploy the model to a Vertex AI endpoint to serve online predictions, and configure Pub/Sub, a messaging service for publishing and subscribing to messages, to publish a message whenever the model categorizes an image into the failing class. That message can trigger an action, such as alerting the quality control team or stopping the production line. This solution creates a real-time application that automates the quality control process of semiconductors while maximizing model accuracy.

Reference: see the official documentation on Vertex AI, Vertex AI Data Labeling Service, AutoML image classification, and Pub/Sub linked below.
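A hedged sketch of the serving side follows. The endpoint resource name, topic name, and the exact shape of the prediction response are assumptions that depend on your deployment:

from google.cloud import aiplatform, pubsub_v1

endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890")
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "failed-semiconductors")

def check_image(instance, serial_number):
    # Classify one image and alert downstream consumers if it is failing.
    prediction = endpoint.predict(instances=[instance]).predictions[0]
    # Assumes the deployed model returns class display names ordered by confidence.
    if prediction["displayNames"][0] == "fail":
        publisher.publish(topic_path, data=serial_number.encode("utf-8"))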

Vertex AI | Google Cloud

Vertex AI Data Labeling Service | Google Cloud

Vertex AI AutoML Image Classification | Google Cloud

Pub/Sub | Google Cloud

You work at a gaming startup that has several terabytes of structured data in Cloud Storage. This data includes gameplay time data, user metadata, and game metadata. You want to build a model that recommends new games to users, using the approach that requires the least amount of coding. What should you do?

A. Load the data in BigQuery. Use BigQuery ML to train an Autoencoder model.

B. Load the data in BigQuery. Use BigQuery ML to train a matrix factorization model.

C. Read data to a Vertex AI Workbench notebook. Use TensorFlow to train a two-tower model.

D. Read data to a Vertex AI Workbench notebook. Use TensorFlow to train a matrix factorization model.
Suggested answer: B

Explanation:

The best option to build a game recommendation model with the least amount of coding is to use BigQuery ML, which allows you to create and execute machine learning models using standard SQL queries. BigQuery ML supports several types of models, including matrix factorization, a common technique for collaborative-filtering recommendation systems. Matrix factorization models learn latent factors for users and items from the observed ratings, and then use them to predict the ratings for new user-item pairs. BigQuery ML provides a built-in function called ML.RECOMMEND that can generate recommendations for a given user based on a trained matrix factorization model. To use BigQuery ML, you need to load the data into BigQuery, a serverless, scalable, and cost-effective data warehouse. You can use the bq command-line tool, the BigQuery API, or the Cloud Console to load data from Cloud Storage to BigQuery. Alternatively, you can use federated queries to query data directly from Cloud Storage without loading it into BigQuery, but this may incur additional costs and performance overhead.

Option A is incorrect because BigQuery ML does not support Autoencoder models, which are a type of neural network that learns compressed representations of the input data. Autoencoder models are not suitable for recommendation systems, as they do not capture the interactions between users and items.

Option C is incorrect because using TensorFlow to train a two-tower model requires more coding than using BigQuery ML. A two-tower model is a type of neural network that learns embeddings for users and items separately, and then combines them with a dot product or cosine similarity to compute the rating. TensorFlow is a low-level framework that requires you to define the model architecture, the loss function, the optimizer, the training loop, and the evaluation metrics. Moreover, you need to read the data from Cloud Storage into a Vertex AI Workbench notebook, which is an instance of JupyterLab running on a Google Cloud virtual machine; this may involve additional steps such as authentication, authorization, and data preprocessing.

Option D is incorrect because using TensorFlow to train a matrix factorization model also requires more coding than using BigQuery ML. Although TensorFlow provides high-level APIs such as Keras and TensorFlow Recommenders that simplify model development, you still need to handle the data loading, training, and evaluation yourself, and read the data from Cloud Storage into a Vertex AI Workbench notebook, which adds complexity and cost.

Reference: see the documentation linked below.
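For the loading step mentioned above, a single bq command is typically enough; the bucket, dataset, and table names below are hypothetical, and Parquet needs no explicit schema:

bq load --source_format=PARQUET games_ds.gameplay gs://my-games-bucket/gameplay/*.parquet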

BigQuery ML documentation

Using matrix factorization with BigQuery ML

Recommendations AI documentation

Loading data into BigQuery

Querying data in Cloud Storage from BigQuery

Vertex AI Workbench documentation

TensorFlow documentation

TensorFlow Recommenders documentation

While running a model training pipeline on Vertex AI, you discover that the evaluation step is failing because of an out-of-memory error. You are currently using TensorFlow Model Analysis (TFMA) with the standard Evaluator TensorFlow Extended (TFX) pipeline component for the evaluation step. You want to stabilize the pipeline without downgrading the evaluation quality while minimizing infrastructure overhead. What should you do?

A. Add tfma.MetricsSpec() to limit the number of metrics in the evaluation step.

B. Migrate your pipeline to Kubeflow hosted on Google Kubernetes Engine, and specify the appropriate node parameters for the evaluation step.

C. Include the flag --runner=DataflowRunner in beam_pipeline_args to run the evaluation step on Dataflow.

D. Move the evaluation step out of your pipeline and run it on custom Compute Engine VMs with sufficient memory.
Suggested answer: C

Explanation:

The best option to stabilize the pipeline without downgrading the evaluation quality while minimizing infrastructure overhead is to use Dataflow as the runner for the evaluation step. Dataflow is a fully managed service for executing Apache Beam pipelines that scales up and down with the workload. It can handle large-scale, distributed data processing tasks such as model evaluation, and it integrates with Vertex AI Pipelines and TensorFlow Extended (TFX). By including the flag --runner=DataflowRunner in beam_pipeline_args, you instruct the Evaluator component to run the evaluation step on Dataflow instead of the default DirectRunner, which runs locally and can cause out-of-memory errors.

Option A is incorrect because adding tfma.MetricsSpec() to limit the number of metrics may downgrade the evaluation quality, as important metrics may be omitted. Moreover, reducing the number of metrics may not resolve the out-of-memory error, because the evaluation step can still consume substantial memory depending on the size and complexity of the data and the model.

Option B is incorrect because migrating the pipeline to Kubeflow hosted on Google Kubernetes Engine (GKE) increases infrastructure overhead: you must provision, manage, and monitor the GKE cluster yourself, and finding the appropriate node parameters for the evaluation step may require trial and error.

Option D is incorrect because moving the evaluation step out of the pipeline and running it on custom Compute Engine VMs also increases infrastructure overhead: you must create, configure, and delete the VMs yourself, and ensuring sufficient memory may require trial and error to find the optimal machine type.

Reference: see the Dataflow and Evaluator documentation linked below.
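A minimal sketch of passing Beam arguments at the pipeline level; the project, region, and bucket names are placeholders, and TFX also lets you scope Beam args to individual components:

from tfx.orchestration import pipeline

beam_pipeline_args = [
    "--runner=DataflowRunner",   # run Beam-based steps such as Evaluator on Dataflow
    "--project=my-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/tmp",
]

my_pipeline = pipeline.Pipeline(
    pipeline_name="training-pipeline",
    pipeline_root="gs://my-bucket/pipeline-root",
    components=[],  # add ExampleGen, Trainer, Evaluator, and other components here
    beam_pipeline_args=beam_pipeline_args,
)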

Dataflow documentation

Using DataflowRunner

Evaluator component documentation

Configuring the Evaluator component

You developed a BigQuery ML linear regressor model by using a training dataset stored in a BigQuery table. New data is added to the table every minute. You are using Cloud Scheduler and Vertex AI Pipelines to automate hourly model training, and you use the model for direct inference. The feature preprocessing logic includes quantile bucketization and MinMax scaling on data received in the last hour. You want to minimize storage and computational overhead. What should you do?

A. Create a component in the Vertex AI Pipelines directed acyclic graph (DAG) to calculate the required statistics, and pass the statistics on to subsequent components.

B. Preprocess and stage the data in BigQuery prior to feeding it to the model during training and inference.

C. Create SQL queries to calculate and store the required statistics in separate BigQuery tables that are referenced in the CREATE MODEL statement.

D. Use the TRANSFORM clause in the CREATE MODEL statement in the SQL query to calculate the required statistics.
Suggested answer: D

Explanation:

The best option to minimize storage and computational overhead is to use the TRANSFORM clause in the CREATE MODEL statement to calculate the required statistics. The TRANSFORM clause lets you specify feature preprocessing logic that applies to both training and prediction. The preprocessing logic is executed in the same query as the model creation, which avoids the need to create and store intermediate tables, and it supports quantile bucketization and MinMax scaling, the preprocessing steps required for this scenario.

Option A is incorrect because creating a component in the Vertex AI Pipelines DAG to calculate the required statistics increases computational overhead, since the component runs separately from the model creation, and increases storage overhead, since the statistics must be passed along to subsequent components.

Option B is incorrect because preprocessing and staging the data in BigQuery before feeding it to the model also increases storage and computational overhead: you must create and maintain additional tables for the preprocessed data, and you must keep the preprocessing logic consistent between training and inference.

Option C is incorrect because calculating and storing the required statistics in separate BigQuery tables likewise increases storage and computational overhead: you must create and maintain additional tables and keep the statistics updated as new data arrives.

Reference: see the BigQuery ML documentation linked below.
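A hedged sketch of the TRANSFORM clause follows; the dataset, table, and column names are hypothetical. Because the preprocessing is stored with the model, the same bucketization and scaling are applied automatically at prediction time:

from google.cloud import bigquery

client = bigquery.Client()
client.query("""
CREATE OR REPLACE MODEL `ads_ds.ctr_regressor`
TRANSFORM(
  ML.QUANTILE_BUCKETIZE(feature_1, 10) OVER() AS bucketized_feature_1,
  ML.MIN_MAX_SCALER(feature_2) OVER() AS scaled_feature_2,
  label)
OPTIONS(model_type = 'linear_reg', input_label_cols = ['label'])
AS SELECT feature_1, feature_2, label
FROM `ads_ds.training_data`
WHERE ingestion_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
""").result()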

BigQuery ML documentation

Using the TRANSFORM clause

Feature preprocessing with BigQuery ML

You are creating a social media app where pet owners can post images of their pets. You have one million user-uploaded images with hashtags. You want to build a comprehensive system that recommends images to users that are similar in appearance to their own uploaded images.

What should you do?

A. Download a pretrained convolutional neural network, and fine-tune the model to predict hashtags based on the input images. Use the predicted hashtags to make recommendations.

B. Retrieve image labels and dominant colors from the input images using the Vision API. Use these properties and the hashtags to make recommendations.

C. Use the provided hashtags to create a collaborative filtering algorithm to make recommendations.

D. Download a pretrained convolutional neural network, and use the model to generate embeddings of the input images. Measure similarity between embeddings to make recommendations.
Suggested answer: D

Explanation:

The best option to build a comprehensive system that recommends images similar in appearance to a user's own uploads is to download a pretrained convolutional neural network (CNN) and use it to generate embeddings of the input images. Embeddings are low-dimensional representations of high-dimensional data that capture the essential features and semantics of the data. A pretrained CNN leverages knowledge learned from large-scale image datasets, such as ImageNet, and can be used as a feature extractor: the output of the last hidden layer (or any intermediate layer) serves as the embedding vector for the input image. You can then measure the similarity between embeddings using a distance metric, such as cosine similarity or Euclidean distance, and recommend the images with the highest similarity scores to the user's uploaded image.

Option A is incorrect because fine-tuning a pretrained CNN to predict hashtags may not capture the visual similarity of the images, as hashtags may not reflect their appearance accurately. For example, two images of different dog breeds may share the hashtag #dog yet look nothing alike. Fine-tuning also requires additional data and compute, and may not generalize to new images with different or missing hashtags.

Option B is incorrect because image labels and dominant colors from the Vision API may not capture fine-grained visual similarity. For example, two images of the same dog breed may receive different labels and colors depending on background, lighting, and angle. Using the Vision API also incurs additional cost and latency, and it cannot handle custom or domain-specific labels.

Option C is incorrect because collaborative filtering relies on the ratings or preferences of users, not the visual features of the images, so two dissimilar-looking images could still be recommended together. Collaborative filtering also suffers from the cold-start problem, where new images or users with no ratings cannot be recommended.

Reference: see the documentation linked below.
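A minimal sketch with a pretrained Keras model; the input size and the choice of ResNet50 are illustrative assumptions:

import numpy as np
import tensorflow as tf

# ResNet50 with its classification head removed; global average pooling
# turns the final convolutional features into a 2048-dim embedding per image.
model = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    # images: float array of shape (n, 224, 224, 3) in RGB order.
    x = tf.keras.applications.resnet50.preprocess_input(images.astype("float32"))
    return model.predict(x)

def cosine_similarity(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T  # higher values mean more visually similar images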

Image similarity search with TensorFlow

Image embeddings documentation

Pretrained models documentation

Similarity metrics documentation

You are training a deep learning model for semantic image segmentation with reduced training time. While using a Deep Learning VM Image, you receive the following error: The resource 'projects/deeplearning-platforn/zones/europe-west4-c/acceleratorTypes/nvidia-tesla-k80' was not found. What should you do?

A. Ensure that you have GPU quota in the selected region.

B. Ensure that the required GPU is available in the selected region.

C. Ensure that you have preemptible GPU quota in the selected region.

D. Ensure that the selected GPU has enough GPU memory for the workload.
Suggested answer: B

Explanation:

The error message indicates that the selected GPU type (nvidia-tesla-k80) is not available in the selected zone (europe-west4-c). This can happen when the GPU type is not supported in that zone, or when the GPU quota is exhausted in the corresponding region. To avoid this error, you should ensure that the required GPU is available in the selected zone before creating a Deep Learning VM Image. You can use the following steps to check the GPU availability and quota:

To check the GPU availability, you can use the gcloud compute accelerator-types list command with the --filter flag to specify the GPU type and the zone. For example, to check the availability of nvidia-tesla-k80 in europe-west4-c, you can run:

gcloud compute accelerator-types list --filter='name=nvidia-tesla-k80 AND zone:europe-west4-c'

If the command returns an empty result, the GPU type is not supported in that zone, and you should choose either a different GPU type or a different zone that supports it. You can also relax the filter to list every GPU type available in a zone. For example, to list all the available GPU types in europe-west4-c, you can run:

gcloud compute accelerator-types list --filter='zone:europe-west4-c'

To check the GPU quota, you can use the gcloud compute regions describe command. Note that GPU quotas are tracked per region, so you pass the region name (europe-west4) rather than the zone. For example:

gcloud compute regions describe europe-west4

If the NVIDIA_K80_GPUS entry in the returned quotas list shows a limit of 0, the GPU quota is exhausted in the region. You can either request more quota from Google Cloud or choose a different region that has enough quota for the GPU type.

Troubleshooting | Deep Learning VM Images | Google Cloud

Checking GPU availability

Checking GPU quota

