Question 252 - Professional Machine Learning Engineer discussion
You work for a rapidly growing social media company. Your team builds TensorFlow recommender models on an on-premises CPU cluster. The data contains billions of historical user events and 100,000 categorical features. You notice that as the data grows, model training time increases. You plan to move the models to Google Cloud. You want to use the most scalable approach that also minimizes training time. What should you do?
A.
Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API.
B.
Deploy the training jobs in an autoscaling Google Kubernetes Engine cluster with CPUs.
C.
Deploy a matrix factorization model training job by using BigQuery ML.
D.
Deploy the training jobs by using Compute Engine instances with A100 GPUs, and use the tf.nn.embedding_lookup API.
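Options A and D both hinge on embedding lookups for the categorical features. As a minimal sketch of what an embedding lookup does (pure Python for illustration; the real TPUEmbedding and tf.nn.embedding_lookup APIs perform this as a hardware-accelerated, sharded gather), the table sizes here are hypothetical:

```python
# Hypothetical tiny embedding table: one learned vector per category ID.
# A production recommender would have ~100,000 such tables, many with
# millions of rows, which is why sharding them across TPU Pod slices
# (option A) scales better than single-host lookups.
table = [
    [0.1, 0.2],  # ID 0
    [0.3, 0.4],  # ID 1
    [0.5, 0.6],  # ID 2
    [0.7, 0.8],  # ID 3
]

ids = [3, 0, 3]                      # categorical IDs in one batch
vectors = [table[i] for i in ids]    # the lookup: gather rows by ID

assert vectors[0] == vectors[2]      # same ID -> same embedding vector
print(vectors)
```

The lookup itself is trivial; the scaling question in this item is about where the (huge) tables live and how the gather is distributed during training.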