
Google Professional Machine Learning Engineer Practice Test - Questions Answers, Page 5


You have been asked to develop an input pipeline for an ML training model that processes images from disparate sources at a low latency. You discover that your input data does not fit in memory. How should you create a dataset following Google-recommended best practices?

A. Create a tf.data.Dataset.prefetch transformation.

B. Convert the images to tf.Tensor objects, and then run Dataset.from_tensor_slices().

C. Convert the images to tf.Tensor objects, and then run tf.data.Dataset.from_tensors().

D. Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training.
Suggested answer: D

Explanation:

An input pipeline is a way to prepare and feed data to a machine learning model for training or inference. An input pipeline typically consists of several steps, such as reading, parsing, transforming, batching, and prefetching the data. An input pipeline can improve the performance and efficiency of the model, as it can handle large and complex datasets, optimize the data processing, and reduce the latency and memory usage [1].

For the use case of developing an input pipeline for an ML training model that processes images from disparate sources at a low latency, the best option is to convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training. This option involves using the following components and techniques:

TFRecords: TFRecords is a binary file format that can store a sequence of data records, such as images, text, or audio. TFRecords can help to compress, serialize, and store the data efficiently, and reduce the data loading and parsing time. TFRecords can also support data sharding and interleaving, which can improve the data throughput and parallelism [2].

Cloud Storage: Cloud Storage is a service that allows you to store and access data on Google Cloud. Cloud Storage can help to store and manage large and distributed datasets, such as images from different sources, and provide high availability, durability, and scalability. Cloud Storage can also integrate with other Google Cloud services, such as Compute Engine, AI Platform, and Dataflow [3].

tf.data API: tf.data API is a set of tools and methods that allow you to create and manipulate data pipelines in TensorFlow. tf.data API can help to read, transform, batch, and prefetch the data efficiently, and optimize the data processing for performance and memory. tf.data API can also support various data sources and formats, such as TFRecords, CSV, JSON, and images.

By using these components and techniques, the input pipeline can process large datasets of images from disparate sources that do not fit in memory, and provide low latency and high performance for the ML training model. Therefore, converting the images into TFRecords, storing the images in Cloud Storage, and using the tf.data API to read the images for training is the best option for this use case.
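
To make this concrete, here is a minimal sketch of such a pipeline in Python; the Cloud Storage path, feature names, image size, and batch size are placeholders rather than values taken from the question.

import tensorflow as tf

# Placeholder Cloud Storage location and feature names -- adjust to your dataset.
FILE_PATTERN = "gs://my-bucket/images/train-*.tfrecord"
BATCH_SIZE = 64

def parse_example(serialized):
    # Each tf.train.Example is assumed to hold an encoded image and an integer label.
    features = {
        "image_raw": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(serialized, features)
    image = tf.io.decode_jpeg(parsed["image_raw"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, parsed["label"]

files = tf.data.Dataset.list_files(FILE_PATTERN)
dataset = (
    files.interleave(tf.data.TFRecordDataset, num_parallel_calls=tf.data.AUTOTUNE)  # read shards in parallel
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(10_000)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)  # overlap input preparation with training
)

Because the records are streamed from Cloud Storage in batches, the full dataset never has to fit in memory.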

Build TensorFlow input pipelines | TensorFlow Core

TFRecord and tf.Example | TensorFlow Core

Cloud Storage documentation | Google Cloud

tf.data: Build TensorFlow input pipelines | TensorFlow Core

You are building an ML model to detect anomalies in real-time sensor data. You will use Pub/Sub to handle incoming requests. You want to store the results for analytics and visualization. How should you configure the pipeline?

A. 1 = Dataflow, 2 = AI Platform, 3 = BigQuery

B. 1 = Dataproc, 2 = AutoML, 3 = Cloud Bigtable

C. 1 = BigQuery, 2 = AutoML, 3 = Cloud Functions

D. 1 = BigQuery, 2 = AI Platform, 3 = Cloud Storage
Suggested answer: A

Explanation:

Dataflow is a fully managed service for executing Apache Beam pipelines that can process streaming or batch data [1].

AI Platform is a unified platform that enables you to build and run machine learning applications across Google Cloud [2].

BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse designed for business agility [3].

These services are suitable for building an ML model to detect anomalies in real-time sensor data, as they can handle large-scale data ingestion, preprocessing, training, serving, storage, and visualization. The other options are not as suitable because:

Dataproc is a service for running Apache Spark and Apache Hadoop clusters, which are not optimized for streaming data processing [4].

AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their business needs [5]. However, it does not support custom models or real-time predictions.

Cloud Bigtable is a scalable, fully managed NoSQL database service for large analytical and operational workloads. However, it is not designed for ad hoc queries or interactive analysis.

Cloud Functions is a serverless execution environment for building and connecting cloud services. However, it is not suitable for storing or visualizing data.

Cloud Storage is a service for storing and accessing data on Google Cloud. However, it is not a data warehouse and does not support SQL queries or visualization tools.
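
To make the suggested answer more concrete, the following is a minimal Apache Beam sketch of such a streaming pipeline, which Dataflow would run; the Pub/Sub topic, BigQuery table, and the stubbed scoring function are placeholder assumptions, and the actual call to the model deployed on AI Platform is not shown.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

TOPIC = "projects/my-project/topics/sensor-readings"   # placeholder topic
TABLE = "my-project:analytics.anomaly_scores"          # placeholder table (assumed to exist)

def score(record: bytes) -> dict:
    # Parse one sensor reading and attach an anomaly score. In a real pipeline this
    # would call the model deployed on AI Platform; here it is stubbed out.
    reading = json.loads(record.decode("utf-8"))
    reading["anomaly_score"] = 0.0  # replace with a prediction call
    return reading

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
        | "ScoreReadings" >> beam.Map(score)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            TABLE,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )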

You have a functioning end-to-end ML pipeline that involves tuning the hyperparameters of your ML model using AI Platform, and then using the best-tuned parameters for training. Hyperparameter tuning is taking longer than expected and is delaying the downstream processes. You want to speed up the tuning job without significantly compromising its effectiveness. Which actions should you take?

Choose 2 answers

A. Decrease the number of parallel trials.

B. Decrease the range of floating-point values.

C. Set the early stopping parameter to TRUE.

D. Change the search algorithm from Bayesian search to random search.

E. Decrease the maximum number of trials during subsequent training phases.
Suggested answer: C, E

Explanation:

Hyperparameter tuning is the process of finding the optimal values for the parameters of a machine learning model that affect its performance. AI Platform provides a service for hyperparameter tuning that can run multiple trials in parallel and use different search algorithms to find the best combination of hyperparameters. However, hyperparameter tuning can be time-consuming and costly, especially if the search space is large and the model training is complex. Therefore, it is important to optimize the tuning job to reduce the time and resources required.

One way to speed up the tuning job is to set the early stopping parameter to TRUE. This means that the tuning service will automatically stop trials that are unlikely to perform well based on the intermediate results. This can save time and resources by avoiding unnecessary computations for trials that are not promising. The early stopping parameter can be set in the trainingInput.hyperparameters field of the training job request [1].

Another way to speed up the tuning job is to decrease the maximum number of trials during subsequent training phases. This means that the tuning service will use fewer trials to refine the search space after the initial phase. This can reduce the time required for the tuning job to converge to the optimal solution. The maximum number of trials can be set in the trainingInput.hyperparameters.maxTrials field of the training job request [1].
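
As an illustration of where these fields live, below is a sketch of the hyperparameters section of a training job request, expressed as a Python dict; the metric tag, parameter name, value ranges, and trial counts are placeholders, not recommended values.

training_input = {
    "scaleTier": "STANDARD_1",
    "hyperparameters": {
        "goal": "MAXIMIZE",
        "hyperparameterMetricTag": "accuracy",       # placeholder metric name
        "enableTrialEarlyStopping": True,            # option C: stop unpromising trials early
        "maxTrials": 20,                             # option E: keep this modest in later phases
        "maxParallelTrials": 5,
        "params": [
            {
                "parameterName": "learning_rate",    # placeholder hyperparameter
                "type": "DOUBLE",
                "minValue": 0.0001,
                "maxValue": 0.1,
                "scaleType": "UNIT_LOG_SCALE",
            }
        ],
    },
}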

The other options are not effective ways to speed up the tuning job. Decreasing the number of parallel trials will reduce the concurrency of the tuning job and increase the overall time required. Decreasing the range of floating-point values will reduce the diversity of the search space and may miss some optimal solutions. Changing the search algorithm from Bayesian search to random search will reduce the efficiency of the tuning job and may require more trials to find the best solution [1].

You have written unit tests for a Kubeflow Pipeline that require custom libraries. You want to automate the execution of unit tests with each new push to your development branch in Cloud Source Repositories. What should you do?

A. Write a script that sequentially performs the push to your development branch and executes the unit tests on Cloud Run.

B. Using Cloud Build, set an automated trigger to execute the unit tests when changes are pushed to your development branch.

C. Set up a Cloud Logging sink to a Pub/Sub topic that captures interactions with Cloud Source Repositories. Configure a Pub/Sub trigger for Cloud Run, and execute the unit tests on Cloud Run.

D. Set up a Cloud Logging sink to a Pub/Sub topic that captures interactions with Cloud Source Repositories. Execute the unit tests using a Cloud Function that is triggered when messages are sent to the Pub/Sub topic.
Suggested answer: B

Explanation:

Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Source Repositories, Cloud Storage, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives [1].

Cloud Build allows you to set up automated triggers that start a build when changes are pushed to a source code repository. You can configure triggers to filter the changes based on the branch, tag, or file path [2].

To automate the execution of unit tests for a Kubeflow Pipeline that require custom libraries, you can use Cloud Build to set an automated trigger to execute the unit tests when changes are pushed to your development branch in Cloud Source Repositories. You can specify the steps of the build in a YAML or JSON file, such as installing the custom libraries, running the unit tests, and reporting the results. You can also use Cloud Build to build and deploy the Kubeflow Pipeline components if the unit tests pass [3].
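
For example, a build step that installs the custom libraries and runs pytest could execute a test such as the sketch below; the component helper is inlined only to keep the example self-contained, whereas in a real repository it would be imported from the module that backs the pipeline component.

# test_preprocess.py -- executed by a Cloud Build step (for example, one that runs `pytest`)
import pandas as pd

def drop_missing_rows(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical component helper under test: remove rows containing null values.
    return df.dropna().reset_index(drop=True)

def test_drop_missing_rows_removes_nulls():
    df = pd.DataFrame({"sensor": [1.0, None, 3.0]})
    cleaned = drop_missing_rows(df)
    assert cleaned["sensor"].isna().sum() == 0
    assert len(cleaned) == 2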

The other options are not recommended or feasible. Writing a script that sequentially performs the push to your development branch and executes the unit tests on Cloud Run is not a good practice, as it does not leverage the benefits of Cloud Build and its integration with Cloud Source Repositories. Setting up a Cloud Logging sink to a Pub/Sub topic that captures interactions with Cloud Source Repositories and using a Pub/Sub trigger for Cloud Run or a Cloud Function to execute the unit tests is unnecessarily complex and inefficient, as it adds extra steps and latency to the process. Cloud Run and Cloud Functions are also not designed for executing unit tests, as they have limitations on memory, CPU, and execution time [4][5].

You have trained a deep neural network model on Google Cloud. The model has low loss on the training data, but is performing worse on the validation data. You want the model to be resilient to overfitting. Which strategy should you use when retraining the model?

A. Apply a dropout parameter of 0.2, and decrease the learning rate by a factor of 10.

B. Apply an L2 regularization parameter of 0.4, and decrease the learning rate by a factor of 10.

C. Run a hyperparameter tuning job on AI Platform to optimize for the L2 regularization and dropout parameters.

D. Run a hyperparameter tuning job on AI Platform to optimize for the learning rate, and increase the number of neurons by a factor of 2.
Suggested answer: C

Explanation:

Overfitting occurs when a model tries to fit the training data so closely that it does not generalize well to new data. Overfitting can be caused by having a model that is too complex for the data, such as having too many parameters or layers. Overfitting can lead to poor performance on the validation data, which reflects how the model will perform on unseen data [1].

To prevent overfitting, one strategy is to use regularization techniques that penalize the complexity of the model and encourage it to learn simpler patterns. Two common regularization techniques for deep neural networks are L2 regularization and dropout. L2 regularization adds a term to the loss function that is proportional to the squared magnitude of the model's weights. This term penalizes large weights and encourages the model to use smaller weights. Dropout randomly drops out some units in the network during training, which prevents co-adaptation of features and reduces the effective number of parameters. Both L2 regularization and dropout have hyperparameters that control the strength of the regularization effect [2][3].
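
As a rough sketch of how the two techniques appear in a Keras model, consider the snippet below; the layer sizes, the 0.2 dropout rate, and the 1e-4 L2 factor are arbitrary illustrations, and these are exactly the kinds of values a hyperparameter tuning job would search over.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_model(l2_factor: float = 1e-4, dropout_rate: float = 0.2) -> tf.keras.Model:
    # Dense layers carry an L2 weight penalty; Dropout layers randomly drop units during training.
    model = tf.keras.Sequential([
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(l2_factor)),
        layers.Dropout(dropout_rate),
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(l2_factor)),
        layers.Dropout(dropout_rate),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model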

Another strategy to prevent overfitting is to use hyperparameter tuning, which is the process of finding the optimal values for the parameters of the model that affect its performance. Hyperparameter tuning can help find the best combination of hyperparameters that minimize the validation loss and improve the generalization ability of the model. AI Platform provides a service for hyperparameter tuning that can run multiple trials in parallel and use different search algorithms to find the best solution.

Therefore, the best strategy to use when retraining the model is to run a hyperparameter tuning job on AI Platform to optimize for the L2 regularization and dropout parameters. This will allow the model to find the optimal balance between fitting the training data and generalizing to new data. The other options are not as effective, as they either use fixed values for the regularization parameters, which may not be optimal, or they do not address the issue of overfitting at all.

You are training a ResNet model on AI Platform using TPUs to visually categorize types of defects in automobile engines. You capture the training profile using the Cloud TPU profiler plugin and observe that it is highly input-bound. You want to reduce the bottleneck and speed up your model training process. Which modifications should you make to the tf.data dataset?

Choose 2 answers

A. Use the interleave option for reading data.

B. Reduce the value of the repeat parameter.

C. Increase the buffer size for the shuffle option.

D. Set the prefetch option equal to the training batch size.

E. Decrease the batch size argument in your transformation.
Suggested answer: A, D

Explanation:

The tf.data dataset is a TensorFlow API that provides a way to create and manipulate data pipelines for machine learning. The tf.data dataset allows you to apply various transformations to the data, such as reading, shuffling, batching, prefetching, and interleaving. These transformations can affect the performance and efficiency of the model training process [1].

One of the common performance issues in model training is being input-bound, which means that the model is waiting for the input data to be ready and is not fully utilizing the computational resources. An input-bound job can be caused by slow data loading, insufficient parallelism, or large data size. It can be detected by using the Cloud TPU profiler plugin, which is a tool that helps you analyze the performance of your model on Cloud TPUs. The Cloud TPU profiler plugin can show you the percentage of time that the TPU cores are idle, which indicates an input-bound workload [2].

To reduce the input-bound bottleneck and speed up the model training process, you can make some modifications to the tf.data dataset. Two of the modifications that can help are:

Use the interleave option for reading data. The interleave option allows you to read data from multiple files in parallel and interleave their records. This can improve the data loading speed and reduce the idle time of the TPU cores. The interleave option can be applied by using the tf.data.Dataset.interleave method, which takes a function that returns a dataset for each input element, and a number of parallel calls [3].

Set the prefetch option equal to the training batch size. The prefetch option allows you to prefetch the next batch of data while the current batch is being processed by the model. This can reduce the latency between batches and improve the throughput of the model training. The prefetch option can be applied by using the tf.data.Dataset.prefetch method, which takes a buffer size argument. The buffer size should be equal to the training batch size, which is the number of examples per batch [4].
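
A minimal sketch of both modifications applied to a tf.data pipeline is shown below; the file pattern, cycle length, and batch size are placeholders, and in practice tf.data.AUTOTUNE is often used for the prefetch buffer instead of a fixed value.

import tensorflow as tf

BATCH_SIZE = 128                                             # illustrative training batch size
file_pattern = "gs://my-bucket/defects/train-*.tfrecord"     # placeholder shard location

dataset = (
    tf.data.Dataset.list_files(file_pattern)
    # Option A: read several shard files in parallel and interleave their records.
    .interleave(tf.data.TFRecordDataset,
                cycle_length=8,
                num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(2_048)
    .batch(BATCH_SIZE, drop_remainder=True)                  # TPUs require static batch shapes
    # Option D: prefetch upcoming data while the TPU processes the current batch.
    .prefetch(buffer_size=BATCH_SIZE)
)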

The other options are either ineffective or counterproductive. Reducing the value of the repeat parameter will reduce the number of epochs, which is the number of times the model sees the entire dataset; this can affect the model's accuracy and convergence. Increasing the buffer size for the shuffle option will increase the randomness of the data, but also increase the memory usage and the data loading time. Decreasing the batch size argument in your transformation will reduce the number of examples per batch, which can affect the model's stability and performance.

You work for a public transportation company and need to build a model to estimate delay times for multiple transportation routes. Predictions are served directly to users in an app in real time. Because different seasons and population increases impact the data relevance, you will retrain the model every month. You want to follow Google-recommended best practices. How should you configure the end-to-end architecture of the predictive model?

A. Configure Kubeflow Pipelines to schedule your multi-step workflow from training to deploying your model.

B. Use a model trained and deployed on BigQuery ML, and trigger retraining with the scheduled query feature in BigQuery.

C. Write a Cloud Functions script that launches a training and deploying job on AI Platform that is triggered by Cloud Scheduler.

D. Use Cloud Composer to programmatically schedule a Dataflow job that executes the workflow from training to deploying your model.
Suggested answer: A

Explanation:

The end-to-end architecture of the predictive model for estimating delay times for multiple transportation routes should be configured using Kubeflow Pipelines. Kubeflow Pipelines is a platform for building and deploying scalable, portable, and reusable machine learning pipelines on Kubernetes. Kubeflow Pipelines allows you to orchestrate your multi-step workflow from data preparation, model training, model evaluation, model deployment, and model serving. Kubeflow Pipelines also provides a user interface for managing and tracking your pipeline runs, experiments, and artifacts [1].

Using Kubeflow Pipelines has several advantages for this use case:

Full automation: You can define your pipeline as a Python script that specifies the steps and dependencies of your workflow, and use the Kubeflow Pipelines SDK to compile and upload your pipeline to the Kubeflow Pipelines service. You can also use the Kubeflow Pipelines UI to create, run, and monitor your pipeline [2].

Scalability: You can leverage the power of Kubernetes to scale your pipeline components horizontally and vertically, and use distributed training frameworks such as TensorFlow or PyTorch to train your model on multiple nodes or GPUs [3].

Portability: You can package your pipeline components as Docker containers that can run on any Kubernetes cluster, and use the Kubeflow Pipelines SDK to export and import your pipeline packages across different environments [4].

Reusability: You can reuse your pipeline components across different pipelines, and share your components with other users through the Kubeflow Pipelines Component Store. You can also use pre-built components from the Kubeflow Pipelines library or other sources [5].

Schedulability: You can use the Kubeflow Pipelines UI or the Kubeflow Pipelines SDK to schedule recurring pipeline runs based on cron expressions or intervals. For example, you can schedule your pipeline to run every month to retrain your model on the latest data.
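
A condensed sketch of such a pipeline and its monthly schedule, using the Kubeflow Pipelines v1 SDK, is shown below; the container images, Cloud Storage paths, KFP endpoint, and cron expression are all placeholders.

import kfp
from kfp import dsl

@dsl.pipeline(name="delay-model-monthly-retrain",
              description="Retrain and redeploy the delay-prediction model.")
def retrain_pipeline(data_path: str = "gs://my-bucket/delay-data/"):
    # Two placeholder containerized steps standing in for real training and deployment components.
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/delay-trainer:latest",
        arguments=["--data-path", data_path],
    )
    deploy = dsl.ContainerOp(
        name="deploy",
        image="gcr.io/my-project/delay-deployer:latest",
        arguments=["--model-dir", "gs://my-bucket/models/latest/"],
    )
    deploy.after(train)

if __name__ == "__main__":
    kfp.compiler.Compiler().compile(retrain_pipeline, "retrain_pipeline.yaml")
    client = kfp.Client(host="https://my-kfp-endpoint")  # placeholder endpoint
    experiment = client.create_experiment("delay-model")
    client.create_recurring_run(
        experiment_id=experiment.id,
        job_name="monthly-retrain",
        cron_expression="0 0 2 1 * *",  # 02:00 on the first day of every month
        pipeline_package_path="retrain_pipeline.yaml",
    )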

The other options are not as suitable for this use case. Using a model trained and deployed on BigQuery ML is not recommended, as BigQuery ML is mainly designed for simple and quick machine learning tasks on large-scale data, and does not support complex models or custom code. Writing a Cloud Functions script that launches a training and deploying job on AI Platform is not ideal, as Cloud Functions has limitations on the memory, CPU, and execution time, and does not provide a user interface for managing and tracking your pipeline. Using Cloud Composer to programmatically schedule a Dataflow job that executes the workflow from training to deploying your model is not optimal, as Dataflow is mainly designed for data processing and streaming analytics, and does not support model serving or monitoring.

You are an ML engineer at a global shoe store. You manage the ML models for the company's website. You are asked to build a model that will recommend new products to the user based on their purchase behavior and similarity with other users. What should you do?

A. Build a classification model.

B. Build a knowledge-based filtering model.

C. Build a collaborative-based filtering model.

D. Build a regression model using the features as predictors.
Suggested answer: C

Explanation:

A recommender system is a type of machine learning system that suggests relevant items to users based on their preferences and behavior. Recommender systems are widely used in e-commerce, media, and entertainment industries to enhance user experience and increase revenue [1].

There are different types of recommender systems that use different filtering methods to generate recommendations. The most common types are:

Content-based filtering: This method uses the features of the items and the users to find the similarity between them. For example, a content-based recommender system for movies may use the genre, director, cast, and ratings of the movies, and the preferences, demographics, and history of the users, to recommend movies that are similar to the ones the user liked before [2].

Collaborative filtering: This method uses the feedback and ratings of the users to find the similarity between them and the items. For example, a collaborative filtering recommender system for books may use the ratings of the users for different books, and recommend books that are liked by other users who have similar ratings to the target user [3].

Hybrid method: This method combines content-based and collaborative filtering methods to overcome the limitations of each method and improve the accuracy and diversity of the recommendations. For example, a hybrid recommender system for music may use both the features of the songs and the artists, and the ratings and listening habits of the users, to recommend songs that match the user's taste and preferences [4].

Deep learning-based: This method uses deep neural networks to learn complex and non-linear patterns from the data and generate recommendations. Deep learning-based recommender systems can handle large-scale and high-dimensional data, and incorporate various types of information, such as text, images, audio, and video. For example, a deep learning-based recommender system for fashion may use the images and descriptions of the products, and the profiles and feedback of the users, to recommend products that suit the user's style and preferences.

For the use case of building a model that will recommend new products to the user based on their purchase behavior and similarity with other users, the best option is to build a collaborative-based filtering model. This is because collaborative filtering can leverage the implicit feedback and ratings of the users to find the items that are most likely to interest them. Collaborative filtering can also help discover new products that the user may not be aware of, and increase the diversity and serendipity of the recommendations [3].
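
As a rough illustration of the collaborative approach, the sketch below learns user and product embeddings from past purchase interactions; the vocabulary sizes, embedding dimension, and the implicit purchase_signal labels are assumptions made for the example.

import tensorflow as tf
from tensorflow.keras import layers

NUM_USERS, NUM_PRODUCTS, EMBED_DIM = 10_000, 5_000, 32   # placeholder sizes

user_in = layers.Input(shape=(), dtype=tf.int32, name="user_id")
item_in = layers.Input(shape=(), dtype=tf.int32, name="product_id")

# Each user and product gets a learned embedding; users with similar behavior end up nearby.
user_vec = layers.Embedding(NUM_USERS, EMBED_DIM)(user_in)
item_vec = layers.Embedding(NUM_PRODUCTS, EMBED_DIM)(item_in)

# The affinity score is the dot product of the two embeddings.
score = layers.Dot(axes=-1)([user_vec, item_vec])

model = tf.keras.Model([user_in, item_in], score)
model.compile(optimizer="adam", loss="mse")
# model.fit([user_ids, product_ids], purchase_signal, epochs=5)  # implicit-feedback labels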

The other options are not as suitable for this use case. Building a classification model or a regression model using the features as predictors is not a good idea, as these models are not designed for recommendation tasks, and may not capture the preferences and behavior of the users. Building a knowledge-based filtering model is not relevant, as this method uses the explicit knowledge and requirements of the users to find the items that meet their criteria, and does not rely on the purchase behavior or similarity with other users.

You are training an LSTM-based model on AI Platform to summarize text using the following job submission script:

You want to ensure that training time is minimized without significantly compromising the accuracy of your model. What should you do?

A. Modify the 'epochs' parameter.

B. Modify the 'scale-tier' parameter.

C. Modify the 'batch size' parameter.

D. Modify the 'learning rate' parameter.
Suggested answer: B

Explanation:

The training time of a machine learning model depends on several factors, such as the complexity of the model, the size of the data, the hardware resources, and the hyperparameters. To minimize the training time without significantly compromising the accuracy of the model, one should optimize these factors as much as possible.

One of the factors that can have a significant impact on the training time is the scale-tier parameter, which specifies the type and number of machines to use for the training job on AI Platform. The scale-tier parameter can be one of the predefined values, such as BASIC, STANDARD_1, PREMIUM_1, or BASIC_GPU, or a custom value that allows you to configure the machine type, the number of workers, and the number of parameter servers [1].

To speed up the training of an LSTM-based model on AI Platform, one should modify the scale-tier parameter to use a higher tier or a custom configuration that provides more computational resources, such as more CPUs, GPUs, or TPUs. This can reduce the training time by increasing the parallelism and throughput of the model training. However, one should also consider the trade-off between the training time and the cost, as higher tiers or custom configurations may incur higher charges [2].
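
For reference, the sketch below shows the part of a training job request where the scale tier is set, expressed as a Python dict; the machine types, worker counts, and package path are placeholders to be sized against your model and budget.

training_input = {
    # Either a predefined tier, e.g. "scaleTier": "BASIC_GPU", or a custom cluster shape:
    "scaleTier": "CUSTOM",
    "masterType": "n1-standard-8",
    "workerType": "n1-standard-8",
    "workerCount": 4,
    "parameterServerType": "n1-standard-4",
    "parameterServerCount": 2,
    "region": "us-central1",
    "pythonModule": "trainer.task",                          # placeholder trainer module
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],    # placeholder package
}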

The other options are not as effective or may have adverse effects on the model accuracy. Modifying the epochs parameter, which specifies the number of times the model sees the entire dataset, may reduce the training time, but also affect the model's convergence and performance. Modifying the batch size parameter, which specifies the number of examples per batch, may affect the model's stability and generalization ability, as well as the memory usage and the gradient update frequency. Modifying the learning rate parameter, which specifies the step size of the gradient descent optimization, may affect the model's convergence and performance, as well as the risk of overshooting or getting stuck in local minima [3].

You are designing an ML recommendation model for shoppers on your company's ecommerce website. You will use Recommendations AI to build, test, and deploy your system. How should you develop recommendations that increase revenue while following best practices?

A. Use the 'Other Products You May Like' recommendation type to increase the click-through rate.

B. Use the 'Frequently Bought Together' recommendation type to increase the shopping cart size for each order.

C. Import your user events and then your product catalog to make sure you have the highest quality event stream.

D. Because it will take time to collect and record product data, use placeholder values for the product catalog to test the viability of the model.
Suggested answer: B

Explanation:

Recommendations AI is a service that allows users to build, test, and deploy personalized product recommendations for their ecommerce websites. It uses Google's deep learning models to learn from user behavior and product data, and generates high-quality recommendations that can increase revenue, click-through rate, and customer satisfaction. One of the best practices for using Recommendations AI is to choose the right recommendation type for the business objective. The 'Frequently Bought Together' recommendation type shows products that are often purchased together with the current product, and encourages users to add more items to their shopping cart. This can increase the average order value and the revenue for each transaction.

The other options are not as effective or feasible for this objective. The 'Other Products You May Like' recommendation type shows products that are similar to the current product, and may increase the click-through rate, but not necessarily the shopping cart size. Importing the user events and then the product catalog is not the recommended order, as it may cause data inconsistency and missing recommendations; the product catalog should be imported first, and then the user events. Using placeholder values for the product catalog is not a viable option, as it will not produce meaningful recommendations or reflect the real performance of the model.

Reference:

Recommendations AI documentation

Choosing a recommendation type

Importing data to Recommendations AI
