
Microsoft AI-900 Practice Test - Questions Answers, Page 11


HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Question 101

Explanation:

Box 1: Yes

Custom Vision functionality can be divided into two features. Image classification applies one or more labels to an image. Object detection is similar, but it also returns the coordinates in the image where the applied label(s) can be found.

Box 2: Yes

The Custom Vision service uses a machine learning algorithm to analyze images. You, the developer, submit groups of images that feature and lack the characteristics in question. You label the images yourself at the time of submission. The algorithm then trains on this data and calculates its own accuracy by testing itself on those same images.

Box 3: No

The Custom Vision service can be used only with image files.
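The distinction in Box 1 can be sketched as plain data structures. This is an illustrative shape only, not the Azure SDK's actual response types, and the tags, probabilities, and coordinates below are made up:

```python
# Illustrative sketch (not the Azure SDK): the two Custom Vision result
# shapes. Classification returns label/confidence pairs; object detection
# additionally returns a bounding box for each label it applies.

classification_result = [
    {"tag": "apple", "probability": 0.97},
    {"tag": "banana", "probability": 0.02},
]

detection_result = [
    {
        "tag": "apple",
        "probability": 0.95,
        # Normalized coordinates (0..1) of the region containing the object.
        "bounding_box": {"left": 0.12, "top": 0.30, "width": 0.25, "height": 0.40},
    },
]

def best_label(predictions):
    """Return the tag with the highest probability."""
    return max(predictions, key=lambda p: p["probability"])["tag"]

print(best_label(classification_result))  # apple
```

Either result type answers "what is in this image?"; only the detection result also answers "where is it?".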

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/overview

Question 102

You are processing photos of runners in a race.

You need to read the numbers on the runners’ shirts to identify the runners in the photos.

Which type of computer vision should you use?

A. facial recognition
B. optical character recognition (OCR)
C. semantic segmentation
D. object detection
Suggested answer: B

Explanation:

Optical character recognition (OCR) allows you to extract printed or handwritten text from images and documents.
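As a hedged sketch of the scenario: assume an OCR service (such as the Azure Read API) has already returned the recognized text lines for a photo. Picking out the bib numbers is then ordinary text filtering; the `ocr_lines` values below are invented for the example:

```python
import re

# Mock OCR output: the lines of text an OCR service might recognize in a
# race photo. In practice these would come from the service's response.
ocr_lines = ["RUNNER", "1042", "CITY MARATHON", "87"]

def extract_bib_numbers(lines):
    """Collect every run of digits found in the recognized text lines."""
    numbers = []
    for line in lines:
        numbers.extend(re.findall(r"\d+", line))
    return numbers

print(extract_bib_numbers(ocr_lines))  # ['1042', '87']
```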

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr

DRAG DROP

Match the types of machine learning to the appropriate scenarios.

To answer, drag the appropriate machine learning type from the column on the left to its scenario on the right. Each machine learning type may be used once, more than once, or not at all.

NOTE: Each correct selection is worth one point.

Question 103

Explanation:

Box 1: Image classification

Image classification is a supervised learning problem: define a set of target classes (objects to identify in images), and train a model to recognize them using labeled example photos.

Box 2: Object detection

Object detection is a computer vision problem. While closely related to image classification, object detection performs image classification at a more granular scale. Object detection both locates and categorizes entities within images.

Box 3: Semantic segmentation

Semantic segmentation achieves fine-grained inference by making dense predictions: it infers a label for every pixel, so that each pixel is labeled with the class of its enclosing object or region.
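The per-pixel labeling can be illustrated with a toy mask. The 3x4 "image" and its class labels below are made up; a real segmentation model would output a mask the size of the input image:

```python
# Toy illustration: semantic segmentation assigns a class label to every
# pixel, unlike object detection, which only returns bounding boxes.
# Here each pixel of a 3x4 "image" is labeled background (0) or object (1).
segmentation_mask = [
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def class_pixel_counts(mask):
    """Count how many pixels belong to each class label."""
    counts = {}
    for row in mask:
        for label in row:
            counts[label] = counts.get(label, 0) + 1
    return counts

print(class_pixel_counts(segmentation_mask))  # {0: 8, 1: 4}
```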

Reference:

https://developers.google.com/machine-learning/practica/image-classification

https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/object-detection-model-builder

https://nanonets.com/blog/how-to-do-semantic-segmentation-using-deep-learning/

Question 104

You use drones to identify where weeds grow between rows of crops to send an instruction for the removal of the weeds.

This is an example of which type of computer vision?

A. object detection
B. optical character recognition (OCR)
C. scene segmentation
Suggested answer: A

Explanation:

Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image.

Incorrect Answers:

B: Optical character recognition (OCR) allows you to extract printed or handwritten text from images and documents.

C: Scene segmentation determines when a scene changes in video based on visual cues. A scene depicts a single event, and it is composed of a series of consecutive shots, which are semantically related.
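A hedged sketch of the drone workflow: given object-detection output in the shape the explanation describes (tags plus bounding-box coordinates), keeping only the weed detections and turning each one into a removal target is simple filtering. The `detections` values and tag names are assumptions for the example:

```python
# Mock object-detection output: each detected object has a tag, a
# confidence score, and normalized bounding-box coordinates.
detections = [
    {"tag": "crop", "confidence": 0.98,
     "box": {"left": 0.10, "top": 0.20, "width": 0.10, "height": 0.10}},
    {"tag": "weed", "confidence": 0.91,
     "box": {"left": 0.55, "top": 0.40, "width": 0.05, "height": 0.05}},
]

def removal_targets(detections, min_confidence=0.5):
    """Return the bounding boxes of confidently detected weeds."""
    return [
        d["box"]
        for d in detections
        if d["tag"] == "weed" and d["confidence"] >= min_confidence
    ]

print(len(removal_targets(detections)))  # 1
```

The bounding boxes are what makes the removal instruction possible; image classification alone would only say "this photo contains weeds" without saying where.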

Reference:

https://docs.microsoft.com/en-us/ai-builder/object-detection-overview

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr

https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview

Question 105

In which two scenarios can you use a speech synthesis solution? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. an automated voice that reads back a credit card number entered into a telephone by using a numeric keypad
B. generating live captions for a news broadcast
C. extracting key phrases from the audio recording of a meeting
D. an AI character in a computer game that speaks audibly to a player
Suggested answer: A, D

Explanation:

Azure Text to Speech is a Speech service feature that converts text to lifelike speech.

Incorrect Answers:

B: Generating live captions from a broadcast's audio is speech recognition (speech-to-text), not speech synthesis.

C: Extracting key phrases is a text analytics task, not speech synthesis.
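Speech synthesis services such as Azure Text to Speech accept plain text or SSML markup. Building the SSML request body is ordinary string handling; this is a minimal sketch, and the voice name is just an example value:

```python
def build_ssml(text, voice="en-US-JennyNeural"):
    """Wrap text in a minimal SSML document for a text-to-speech request."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">{text}</voice>'
        "</speak>"
    )

# The credit-card read-back scenario: synthesize the digits as speech.
ssml = build_ssml("Your card number is 1 2 3 4.")
print(ssml.startswith("<speak"))  # True
```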

Reference:

https://azure.microsoft.com/en-in/services/cognitive-services/text-to-speech/

HOTSPOT

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Question 106

Explanation:

The Translator service provides multi-language support for text translation, transliteration, language detection, and dictionaries.

Speech-to-Text, also known as automatic speech recognition (ASR), is a feature of Speech Services that provides transcription.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/Translator/translator-info-overview

https://docs.microsoft.com/en-us/legal/cognitive-services/speech-service/speech-to-text/transparency-note

DRAG DROP

You need to scan the news for articles about your customers and alert employees when there is a negative article. Positive articles must be added to a press book.

Which natural language processing tasks should you use to complete the process? To answer, drag the appropriate tasks to the correct locations. Each task may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Question 107

Explanation:

Box 1: Entity recognition

Use the Named Entity Recognition module in Machine Learning Studio (classic) to identify the names of things, such as people, companies, or locations, in a column of text.

Named entity recognition is an important area of research in machine learning and natural language processing (NLP), because it can be used to answer many real-world questions, such as:

Which companies were mentioned in a news article?

Does a tweet contain the name of a person? Does the tweet also provide the person's current location?

Were specified products mentioned in complaints or reviews?

Box 2: Sentiment Analysis

The Text Analytics API's Sentiment Analysis feature provides two ways for detecting positive and negative sentiment. If you send a Sentiment Analysis request, the API will return sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level.
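A hedged sketch of the press-book workflow: assume sentiment analysis has already returned a label in the shape described above; routing the article to an alert or to the press book is then a simple decision. The article text and scores below are invented:

```python
def route_article(article, sentiment):
    """Send negative articles to an alert, positive ones to the press book."""
    if sentiment["label"] == "negative":
        return ("alert", article)
    if sentiment["label"] == "positive":
        return ("press_book", article)
    return ("ignore", article)

# Mock sentiment-analysis result for one scanned news article.
result = route_article(
    "Contoso shares fall after recall",
    {"label": "negative", "scores": {"negative": 0.92, "positive": 0.03}},
)
print(result[0])  # alert
```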

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/named-entity-recognition

https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis

Question 108

You are building a knowledge base by using QnA Maker.

Which file format can you use to populate the knowledge base?

A. PPTX
B. XML
C. ZIP
D. PDF
Suggested answer: D

Explanation:

D: Content types of documents you can add to a knowledge base include many standard structured documents, such as PDF, DOC, and TXT.

Note: The tool supports the following file formats for ingestion:

.tsv: QnA contained in the format Question(tab)Answer.

.txt, .docx, .pdf: QnA contained as regular FAQ content, that is, a sequence of questions and answers.

Incorrect Answers:

A: PPTX is the default presentation file format for new PowerPoint presentations.

B: It is not possible to ingest an XML file directly.
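The .tsv format described in the note, Question(tab)Answer per line, is simple to work with. A minimal sketch of parsing it (the sample content is made up):

```python
def parse_qna_tsv(text):
    """Parse lines in Question<TAB>Answer format into (question, answer) pairs."""
    pairs = []
    for line in text.splitlines():
        if "\t" in line:
            # Split on the first tab only, in case the answer contains tabs.
            question, answer = line.split("\t", 1)
            pairs.append((question.strip(), answer.strip()))
    return pairs

sample = "What is QnA Maker?\tA service for building knowledge bases.\n"
print(parse_qna_tsv(sample))
```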

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/data-sources-and-content

Question 109

In which scenario should you use key phrase extraction?

A. identifying whether reviews of a restaurant are positive or negative
B. generating captions for a video based on the audio track
C. identifying which documents provide information about the same topics
D. translating a set of documents from English to German
Suggested answer: C

Explanation:

Key phrase extraction identifies the main talking points in a text. Comparing the key phrases extracted from different documents makes it possible to identify documents that cover the same topics. The other scenarios call for sentiment analysis (A), speech-to-text (B), and translation (D).
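As an illustrative sketch: once key phrases have been extracted from each document, documents about the same topics can be grouped by overlapping phrases. The document names and phrase lists here are invented for the example:

```python
# Mock key-phrase extraction output: one phrase set per document.
doc_phrases = {
    "doc_001": {"water damage", "kitchen", "insurance claim"},
    "doc_002": {"water damage", "basement"},
    "doc_003": {"car accident", "windshield"},
}

def same_topic(doc_a, doc_b, phrases):
    """Treat two documents as same-topic if any key phrase overlaps."""
    return bool(phrases[doc_a] & phrases[doc_b])

print(same_topic("doc_001", "doc_002", doc_phrases))  # True
print(same_topic("doc_001", "doc_003", doc_phrases))  # False
```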

Question 110

You have insurance claim reports that are stored as text.

You need to extract key terms from the reports to generate summaries.

Which type of AI workload should you use?

A. natural language processing
B. conversational AI
C. anomaly detection
D. computer vision
Suggested answer: A

Explanation:

Reference:

https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/natural-language-processing

Total 268 questions