Question 129 - Professional Machine Learning Engineer discussion
You work on a data science team at a bank and are creating an ML model to predict loan default risk. You have collected and cleaned hundreds of millions of records of training data in a BigQuery table, and you now want to develop and compare multiple models on this data using TensorFlow and Vertex AI. You want to minimize any bottlenecks during the data ingestion stage while considering scalability. What should you do?
A.
Use the BigQuery client library to load data into a dataframe, and use tf.data.Dataset.from_tensor_slices() to read it.
B.
Export data to CSV files in Cloud Storage, and use tf.data.TextLineDataset() to read them.
C.
Convert the data into TFRecords, and use tf.data.TFRecordDataset() to read them.
D.
Use TensorFlow I/O's BigQuery Reader to directly read the data.
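For context on option D, here is a minimal sketch of reading BigQuery rows directly into a tf.data pipeline with the TensorFlow I/O BigQuery reader. The project, dataset, table, and column names below are placeholders invented for illustration, and the exact argument set may vary across tensorflow-io versions.

```python
import tensorflow as tf
from tensorflow_io.bigquery import BigQueryClient

PROJECT_ID = "my-project"        # placeholder GCP project
DATASET_ID = "lending"           # placeholder BigQuery dataset
TABLE_ID = "loan_training_data"  # placeholder BigQuery table

client = BigQueryClient()
read_session = client.read_session(
    parent=f"projects/{PROJECT_ID}",
    project_id=PROJECT_ID,
    dataset_id=DATASET_ID,
    table_id=TABLE_ID,
    selected_fields=["loan_amount", "credit_score", "defaulted"],
    output_types=[tf.float64, tf.int64, tf.int64],
    requested_streams=4,  # parallel read streams; scale up for larger tables
)

# parallel_read_rows() interleaves the streams into a single tf.data.Dataset
# whose elements are dicts mapping column names to tensors.
dataset = (
    read_session.parallel_read_rows()
    .map(lambda row: (  # split feature columns from the label column
        {"loan_amount": row["loan_amount"], "credit_score": row["credit_score"]},
        row["defaulted"],
    ))
    .batch(1024)
    .prefetch(tf.data.AUTOTUNE)
)
```

The appeal of this approach for the scenario in the question is that the data streams straight from BigQuery into training, with no intermediate export to CSV or TFRecord files in Cloud Storage and no attempt to hold hundreds of millions of rows in an in-memory dataframe.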