Question 257 - Professional Machine Learning Engineer discussion
You work for a food product company. Your company's historical sales data is stored in BigQuery. You need to use Vertex AI's custom training service to train multiple TensorFlow models that read the data from BigQuery and predict future sales. You plan to implement a data preprocessing algorithm that performs min-max scaling and bucketing on a large number of features before you start experimenting with the models. You want to minimize preprocessing time, cost, and development effort. How should you configure this workflow?
A.
Write the transformations in Spark using the spark-bigquery-connector, and use Dataproc to preprocess the data.
B.
Write SQL queries to transform the data in-place in BigQuery.
C.
Add the transformations as a preprocessing layer in the TensorFlow models.
D.
Create a Dataflow pipeline that uses the BigQueryIO connector to ingest the data, process it, and write it back to BigQuery.
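
For reference, here is a minimal sketch of how the in-place SQL transformation described in option B could look, run through the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical placeholders, not part of the question.

```python
# A minimal sketch (assumptions: project/table/column names are made up) of
# option B: min-max scaling and bucketing done in-place with BigQuery SQL.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

sql = """
CREATE OR REPLACE TABLE `my-project.sales.preprocessed_sales` AS
SELECT
  product_id,
  -- Min-max scaling of a numeric feature to the [0, 1] range.
  SAFE_DIVIDE(
    units_sold - MIN(units_sold) OVER (),
    MAX(units_sold) OVER () - MIN(units_sold) OVER ()
  ) AS units_sold_scaled,
  -- Bucketing: map unit_price into ranges with RANGE_BUCKET.
  RANGE_BUCKET(unit_price, [10.0, 20.0, 50.0]) AS unit_price_bucket
FROM `my-project.sales.historical_sales`
"""

# Run the transformation; the preprocessed table stays in BigQuery, where the
# Vertex AI custom training jobs can read it directly.
client.query(sql).result()
```

Because the transformation runs where the data already lives, this approach avoids provisioning a separate Dataproc or Dataflow cluster, which is the kind of trade-off the question's "minimize preprocessing time, cost, and development effort" constraint is probing.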