ExamGecko
Microsoft AI-900 Practice Test - Questions Answers, Page 3
You need to predict the income range of a given customer by using the following dataset.

Which two fields should you use as features? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Education Level
B. Last Name
C. Age
D. Income Range
E. First Name
Suggested answer: A, C

Explanation:

First Name, Last Name, Age, and Education Level are candidate features; Income Range is the label (the value you want to predict), so it cannot also be a feature. First Name and Last Name are irrelevant because they have no bearing on income, which leaves Age and Education Level as the two features to use.
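The feature/label split can be sketched in a few lines of Python; the record below is invented for illustration, with field names taken from the question.

```python
# Minimal sketch of separating features from the label in one customer record.
# The values are made up; only the field names come from the question.
def split_features_label(record, feature_fields, label_field):
    """Return (features, label) for a single training example."""
    features = {name: record[name] for name in feature_fields}
    return features, record[label_field]

customer = {
    "First Name": "Ada", "Last Name": "Lovelace",
    "Age": 36, "Education Level": "Masters",
    "Income Range": "50k-75k",
}

# Age and Education Level are the predictive features; Income Range is the label.
X, y = split_features_label(customer, ["Age", "Education Level"], "Income Range")
print(X)  # {'Age': 36, 'Education Level': 'Masters'}
print(y)  # 50k-75k
```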

You need to develop a mobile app for employees to scan and store their expenses while travelling.

Which type of computer vision should you use?

A. semantic segmentation
B. image classification
C. object detection
D. optical character recognition (OCR)
Suggested answer: D

Explanation:

Azure's Computer Vision API includes Optical Character Recognition (OCR) capabilities that extract printed or handwritten text from images. You can extract text from images such as photos of license plates or containers with serial numbers, as well as from documents such as invoices, bills, financial reports, and articles.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text
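As a rough illustration of what an expense-scanning app would consume, the Read API returns recognized text as pages of lines. A sketch of pulling the text out of a v3.x-style result follows; the sample payload itself is invented, but the field names mirror the documented response shape.

```python
# Hedged sketch: extracting text lines from a Computer Vision Read API result.
# The "analyzeResult" / "readResults" / "lines" structure follows the v3.x
# response format; the receipt contents below are made up.
def extract_text(read_result):
    lines = []
    for page in read_result["analyzeResult"]["readResults"]:
        for line in page["lines"]:
            lines.append(line["text"])
    return lines

sample = {
    "status": "succeeded",
    "analyzeResult": {
        "readResults": [
            {"page": 1, "lines": [
                {"text": "Hotel Invoice"},
                {"text": "Total: $120.00"},
            ]},
        ]
    },
}

print(extract_text(sample))  # ['Hotel Invoice', 'Total: $120.00']
```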

You need to determine the location of cars in an image so that you can estimate the distance between the cars. Which type of computer vision should you use?

A. optical character recognition (OCR)
B. object detection
C. image classification
D. face detection
Suggested answer: B

Explanation:

Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, a cat, and a person, the Detect operation lists those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image and to determine whether there are multiple instances of the same tag.

The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy: at a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized with bounding boxes.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection
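Once the detector returns pixel bounding boxes, estimating the distance between two cars reduces to geometry on the box centers. A toy sketch, assuming an (x, y, width, height) box format chosen purely for illustration:

```python
# Hedged sketch: estimating the pixel distance between two detected cars from
# their bounding boxes. The (x, y, w, h) format and the box values are assumed
# for illustration; a real API response would supply the coordinates.
import math

def box_center(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def pixel_distance(box_a, box_b):
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    return math.hypot(bx - ax, by - ay)

car_1 = (10, 20, 100, 50)    # x, y, width, height in pixels
car_2 = (310, 20, 100, 50)

print(pixel_distance(car_1, car_2))  # 300.0
```

Converting pixel distance to real-world distance would additionally require the camera geometry, which object detection alone does not provide.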

You send an image to a Computer Vision API and receive back the annotated image shown in the exhibit.

Which type of computer vision was used?

A. object detection
B. face detection
C. optical character recognition (OCR)
D. image classification
Suggested answer: A

Explanation:

Object detection returns the bounding box coordinates (in pixels) of each object found, which is exactly what the annotated boxes in the exhibit show.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection

What are two tasks that can be performed by using the Computer Vision service? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. Train a custom image classification model.
B. Detect faces in an image.
C. Recognize handwritten text.
D. Translate the text in an image between languages.
Suggested answer: B, C

Explanation:

B: Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.

C: Computer Vision includes Optical Character Recognition (OCR) capabilities. You can use the new Read API to extract printed and handwritten text from images and documents.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home

What is a use case for classification?

A. predicting how many cups of coffee a person will drink based on how many hours the person slept the previous night
B. analyzing the contents of images and grouping images that have similar colors
C. predicting whether someone uses a bicycle to travel to work based on the distance from home to work
D. predicting how many minutes it will take someone to run a race based on past race times
Suggested answer: C

Explanation:

Classification predicts which discrete category an item belongs to. Predicting whether someone uses a bicycle to travel to work is a binary (yes/no) classification task. Predicting a number of cups of coffee or a race time in minutes is regression (a continuous value), and grouping images by similar colors is clustering.
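The distinction between classification (predicting a category) and regression (predicting a number) can be sketched with toy predictors; the 10 km threshold and the simple mean are invented for illustration, not trained models.

```python
# Toy illustration only: classification returns a category, regression a number.
def predicts_bicycle(distance_km, threshold_km=10.0):
    """Binary classification: the output is one of two categories."""
    return "bicycle" if distance_km <= threshold_km else "other"

def predicted_race_minutes(past_times):
    """Regression-style output: a continuous number (here, a simple mean)."""
    return sum(past_times) / len(past_times)

print(predicts_bicycle(4.2))                 # bicycle
print(predicted_race_minutes([42, 44, 40]))  # 42.0
```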

What are two tasks that can be performed by using computer vision? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. Predict stock prices.
B. Detect brands in an image.
C. Detect the color scheme in an image.
D. Translate text between languages.
E. Extract key phrases.
Suggested answer: B, C

Explanation:

B: Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.

C: Analyze color usage within an image. Computer Vision can determine whether an image is black & white or color and, for color images, identify the dominant and accent colors.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview
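For a concrete picture of both features, a sketch of reading brand and color information out of a Computer Vision analyze-style response; the field names follow the documented v3.x response shape, but the payload itself is invented.

```python
# Hedged sketch: the "brands" and "color" sections of an analyze response.
# Field names mirror the documented v3.x shape; the values are made up.
analysis = {
    "brands": [
        {"name": "Contoso", "confidence": 0.92,
         "rectangle": {"x": 5, "y": 10, "w": 120, "h": 60}},
    ],
    "color": {
        "dominantColorForeground": "Red",
        "dominantColorBackground": "White",
        "dominantColors": ["Red", "White"],
        "accentColor": "C03A2B",
        "isBWImg": False,
    },
}

# Keep only confidently detected brands (0.5 cutoff is an assumption).
brands = [b["name"] for b in analysis["brands"] if b["confidence"] > 0.5]
print(brands)                               # ['Contoso']
print(analysis["color"]["dominantColors"])  # ['Red', 'White']
```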

Your company wants to build a recycling machine for bottles. The recycling machine must automatically identify bottles of the correct shape and reject all other items. Which type of AI workload should the company use?

A. anomaly detection
B. conversational AI
C. computer vision
D. natural language processing
Suggested answer: C

Explanation:

Azure's Computer Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview

In which two scenarios can you use the Form Recognizer service? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. Extract the invoice number from an invoice.
B. Translate a form from French to English.
C. Find an image of a product in a catalog.
D. Identify the retailer from a receipt.
Suggested answer: A, D

Explanation:

Form Recognizer extracts text, key-value pairs, and tables from documents. Its prebuilt invoice and receipt models identify fields such as the invoice number and the merchant (retailer) name. Translating text and searching a catalog for product images are not Form Recognizer capabilities.

Reference:

https://azure.microsoft.com/en-gb/services/cognitive-services/form-recognizer/#features
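A sketch of how the two correct scenarios look in a Form Recognizer-style result; the field names (MerchantName, InvoiceId) follow the prebuilt receipt and invoice models, while both payloads are invented.

```python
# Hedged sketch: reading the retailer name and invoice number out of
# Form Recognizer-style results. Values are made up for illustration.
receipt = {"documents": [{"docType": "prebuilt:receipt", "fields": {
    "MerchantName": {"type": "string", "valueString": "Contoso Mart"},
}}]}
invoice = {"documents": [{"docType": "prebuilt:invoice", "fields": {
    "InvoiceId": {"type": "string", "valueString": "INV-1042"},
}}]}

retailer = receipt["documents"][0]["fields"]["MerchantName"]["valueString"]
invoice_number = invoice["documents"][0]["fields"]["InvoiceId"]["valueString"]
print(retailer, invoice_number)  # Contoso Mart INV-1042
```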

Your website has a chatbot to assist customers.

You need to detect when a customer is upset based on what the customer types in the chatbot.

Which type of AI workload should you use?

A. anomaly detection
B. semantic segmentation
C. regression
D. natural language processing
Suggested answer: D

Explanation:

Natural language processing (NLP) is used for tasks such as sentiment analysis, topic detection, language detection, key phrase extraction, and document categorization.

Sentiment Analysis is the process of determining whether a piece of writing is positive, negative or neutral.

Reference:

https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/natural-language-processing
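In a chatbot, "detecting an upset customer" typically means checking the sentiment result for each message. A sketch follows, with the response shape mirroring the v3 sentiment API; the scores are invented and the 0.7 threshold is an assumption.

```python
# Hedged sketch: flagging an upset customer from a Text Analytics-style
# sentiment result. The 0.7 negative-confidence threshold is an assumption.
def is_upset(sentiment_doc, threshold=0.7):
    scores = sentiment_doc["confidenceScores"]
    return sentiment_doc["sentiment"] == "negative" and scores["negative"] >= threshold

doc = {
    "sentiment": "negative",
    "confidenceScores": {"positive": 0.02, "neutral": 0.08, "negative": 0.90},
}

print(is_upset(doc))  # True
```

A real chatbot would route flagged conversations to a human agent rather than act on the score alone.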

Total 268 questions