Question 216 - MLS-C01 discussion
A machine learning (ML) specialist needs to extract embedding vectors from a text series. The goal is to provide a ready-to-ingest feature space for a data scientist to develop downstream ML predictive models. The text consists of curated sentences in English. Many sentences use similar words but in different contexts. There are questions and answers among the sentences, and the embedding space must differentiate between them.
Which options can produce the required embedding vectors that capture both word context and the sequential question-answer information? (Choose two.)
A. Amazon SageMaker seq2seq algorithm
B. Amazon SageMaker BlazingText algorithm in Skip-gram mode
C. Amazon SageMaker Object2Vec algorithm
D. Amazon SageMaker BlazingText algorithm in continuous bag-of-words (CBOW) mode
E. Combination of the Amazon SageMaker BlazingText algorithm in Batch Skip-gram mode with a custom recurrent neural network (RNN)
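To see why the BlazingText mode matters, it helps to recall how Skip-gram and CBOW differ in the training pairs they build: Skip-gram predicts each context word from the center word, while CBOW predicts the center word from its whole context window. Both operate at the word level within a fixed window, which is why neither mode by itself models sentence-level question-answer ordering. The following is a toy sketch in plain Python of the pair construction only (not the SageMaker BlazingText API; the window size is an illustrative choice):

```python
def skipgram_pairs(tokens, window=2):
    """Skip-gram: each center word predicts each word in its context window."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    """CBOW: the full context window jointly predicts the center word."""
    pairs = []
    for i, center in enumerate(tokens):
        context = tuple(
            tokens[j]
            for j in range(max(0, i - window), min(len(tokens), i + window + 1))
            if j != i
        )
        pairs.append((context, center))
    return pairs

sentence = "what time does the store open".split()
print(skipgram_pairs(sentence)[:3])  # first few (center, context) pairs
print(cbow_pairs(sentence)[0])       # (context window, center word)
```

Because both modes only relate words within a local window, capturing the sequential structure of questions and answers requires a model that consumes whole sequences, which is the role the RNN plays in option E and that Object2Vec's sequence encoders play in option C.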