Microsoft AI-102 Practice Test - Questions Answers

You have a Video Indexer service that is used to provide a search interface over company videos on your company's website. You need to be able to search for videos based on who is present in the video.

What should you do?

A. Create a person model and associate the model to the videos.
B. Create person objects and provide face images for each object.
C. Invite the entire staff of the company to Video Indexer.
D. Edit the faces in the videos.
E. Upload names to a language model.
Suggested answer: A

Explanation:

Video Indexer supports multiple Person models per account. Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.

Note: Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. Once you label a face with a name, the face and name get added to your account's Person model. Video Indexer will then recognize this face in your future and past videos.
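
As a rough illustration of that workflow (not taken from the question), the REST calls described in the linked article look approximately like the sketch below; the location, account ID, access token, video URL, and model ID values are placeholders, and an HttpClient in an async context is assumed:

// Sketch only: create a custom Person model, then index a video against it.
var http = new HttpClient();

// Create the Person model; its ID is returned in the JSON response body.
HttpResponseMessage createModel = await http.PostAsync(
    "https://api.videoindexer.ai/<location>/Accounts/<accountId>/Customization/PersonModels" +
    "?name=Employees&accessToken=<accessToken>",
    new StringContent(string.Empty));

// Upload/index a video and associate it with that model via the personModelId parameter.
HttpResponseMessage indexVideo = await http.PostAsync(
    "https://api.videoindexer.ai/<location>/Accounts/<accountId>/Videos" +
    "?name=TeamMeeting&videoUrl=<videoUrl>&personModelId=<personModelId>&accessToken=<accessToken>",
    new StringContent(string.Empty));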

Reference:

https://docs.microsoft.com/en-us/azure/media-services/video-indexer/customize-person-model-with-api

You use the Custom Vision service to build a classifier.

After training is complete, you need to evaluate the classifier.

Which two metrics are available for review? Each correct answer presents a complete solution. (Choose two.) NOTE: Each correct selection is worth one point.

A. recall
B. F-score
C. weighted accuracy
D. precision
E. area under the curve (AUC)
Suggested answer: A, D

Explanation:

Custom Vision provides three metrics regarding the performance of your model: precision, recall, and AP (average precision).
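
For instance, these metrics can be read back through the Custom Vision training SDK; the following is a minimal sketch that assumes an authenticated CustomVisionTrainingClient named trainingClient plus existing projectId and iterationId values:

// Retrieve the evaluation of a trained iteration at a 50% probability threshold.
IterationPerformance performance = await trainingClient.GetIterationPerformanceAsync(
    projectId, iterationId, threshold: 0.5);

Console.WriteLine($"Precision: {performance.Precision}");
Console.WriteLine($"Recall: {performance.Recall}");
Console.WriteLine($"Average precision (AP): {performance.AveragePrecision}");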

Reference:

https://www.tallan.com/blog/2020/05/19/azure-custom-vision/

You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code.

During testing, you discover that the call to the GetReadResultAsync method occurs before the read operation is complete.

You need to prevent the GetReadResultAsync method from proceeding until the read operation is complete.

Which two actions should you perform? Each correct answer presents part of the solution. (Choose two.)

NOTE: Each correct selection is worth one point.

A. Remove the Guid.Parse(operationId) parameter.
B. Add code to verify the results.Status value.
C. Add code to verify the status of the txtHeaders.status value.
D. Wrap the call to GetReadResultAsync within a loop that contains a delay.
Suggested answer: B, D

Explanation:

Example code:

do
{
    results = await client.GetReadResultAsync(Guid.Parse(operationId));
    await Task.Delay(1000); // Wait briefly before polling again (the delay from answer D).
}
while (results.Status == OperationStatusCodes.Running ||
       results.Status == OperationStatusCodes.NotStarted);
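
For context, the operation ID polled above is normally taken from the Operation-Location header returned by the initial Read call; a minimal sketch of that step (imageUrl is a placeholder) looks like this:

// Submit the image for OCR; the response headers carry the Operation-Location URL.
var txtHeaders = await client.ReadAsync(imageUrl);
string operationLocation = txtHeaders.OperationLocation;

// The operation ID is the trailing 36-character GUID of the Operation-Location URL.
string operationId = operationLocation.Substring(operationLocation.Length - 36);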

Reference:

https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ComputerVisionQuickstart.cs

DRAG DROP

You are developing a call to the Face API. The call must find similar faces from an existing list named employeefaces. The employeefaces list contains 60,000 images.

How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Question 4
Correct answer: Question 4

Explanation:

Box 1: LargeFaceListID

LargeFaceList: Add a face to a specified large face list, up to 1,000,000 faces.

Note: Given a query face's faceId, Find Similar searches for similar-looking faces in a faceId array, a face list, or a large face list. A "faceListId" is created by FaceList - Create and contains persistedFaceIds that will not expire, while a "largeFaceListId" is created by LargeFaceList - Create and likewise contains persistedFaceIds that will not expire.

Incorrect Answers:

Not "faceListId": Add a face to a specified face list, up to 1,000 faces.

Box 2: matchFace

Find Similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to return faces of the same person by applying internal same-person thresholds, which is useful for finding a known person's other photos. Note that an empty list is returned if no faces pass the internal thresholds. "matchFace" mode ignores the same-person thresholds and returns ranked similar faces anyway, even when the similarity is low. It can be used in cases such as searching for celebrity-looking faces.
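
Put together, the request body would look roughly like the following sketch; the endpoint and key are placeholders, employeefaces comes from the question, and an HttpClient in an async context is assumed:

var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<face-resource-key>");

// The body mirrors the two boxes: a 60,000-image list needs largeFaceListId, and
// matchFace returns ranked similar faces regardless of the same-person thresholds.
string body = @"{
    ""faceId"": ""<faceId returned by Face - Detect>"",
    ""largeFaceListId"": ""employeefaces"",
    ""maxNumOfCandidatesReturned"": 10,
    ""mode"": ""matchFace""
}";

HttpResponseMessage response = await http.PostAsync(
    "https://<face-resource>.cognitiveservices.azure.com/face/v1.0/findsimilars",
    new StringContent(body, Encoding.UTF8, "application/json"));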

Reference:

https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar

DRAG DROP

You are developing a photo application that will find photos of a person based on a sample image by using the Face API.

You need to create a POST request to find the photos.

How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Question 5
Correct answer: Question 5

Explanation:

Box 1: detect

Face - Detect With Url: Detect human faces in an image and return face rectangles, optionally with faceIds, landmarks, and attributes.

POST {Endpoint}/face/v1.0/detect

Box 2: matchPerson

Find Similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to return faces of the same person by applying internal same-person thresholds, which is useful for finding a known person's other photos. Note that an empty list is returned if no faces pass the internal thresholds. "matchFace" mode ignores the same-person thresholds and returns ranked similar faces anyway, even when the similarity is low. It can be used in cases such as searching for celebrity-looking faces.
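
Expressed with the .NET Face client library instead of raw REST, the same detect-then-find-similar flow looks roughly like this sketch; faceClient, sampleImageUrl, and the large face list name photofaces are assumed placeholders:

// Detect the face in the sample image to obtain a faceId.
var detected = await faceClient.Face.DetectWithUrlAsync(
    sampleImageUrl, returnFaceId: true);

// Search the (hypothetical) large face list for other photos of the same person.
var matches = await faceClient.Face.FindSimilarAsync(
    detected[0].FaceId.Value,
    largeFaceListId: "photofaces",
    mode: FindSimilarMatchMode.MatchPerson);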

Reference:

https://docs.microsoft.com/en-us/rest/api/faceapi/face/detectwithurl

https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar

HOTSPOT

You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands.

You have the following code segment.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Question 6
Correct answer: Question 6

Explanation:

Box 1: Yes

Box 2: Yes

Coordinates of a rectangle in the API refer to the top-left corner.

Box 3: No
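
For reference, a sketch of reading brand results and their bounding rectangles with the .NET SDK; an authenticated ComputerVisionClient named client and an imageUrl placeholder are assumed:

// Request only the Brands feature for the image.
ImageAnalysis analysis = await client.AnalyzeImageAsync(
    imageUrl,
    visualFeatures: new List<VisualFeatureTypes?> { VisualFeatureTypes.Brands });

foreach (var brand in analysis.Brands)
{
    // Rectangle.X and Rectangle.Y give the top-left corner; W and H give the size in pixels.
    Console.WriteLine($"{brand.Name} ({brand.Confidence:P1}) at " +
        $"x={brand.Rectangle.X}, y={brand.Rectangle.Y}, w={brand.Rectangle.W}, h={brand.Rectangle.H}");
}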

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-brand-detection

HOTSPOT

You develop an application that uses the Face API.

You need to add multiple images to a person group.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 7
Correct answer: Question 7

Explanation:

Box 1: Stream

The File.OpenRead(String) method opens an existing file for reading.

Example: Open the stream and read it back.

using (FileStream fs = File.OpenRead(path))

Box 2: CreateAsync

Create the persons for the PersonGroup. Persons are created concurrently.

Example:

await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName);
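
A slightly fuller sketch of that pattern, assuming an authenticated FaceClient named faceClient, an existing personGroupId, and a placeholder folder of face images:

// Create a person in the group, then add each local image as a persisted face.
Person person = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "Employee Name");

foreach (string imagePath in Directory.GetFiles(@"C:\faces\employee", "*.jpg"))
{
    using (Stream imageStream = File.OpenRead(imagePath))
    {
        await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(
            personGroupId, person.PersonId, imageStream);
    }
}

// Train the group so the newly added faces can be identified.
await faceClient.PersonGroup.TrainAsync(personGroupId);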

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-add-faces

HOTSPOT

You are developing an application that will use the Computer Vision client library. The application has the following code.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Question 8
Correct answer: Question 8

Explanation:

Box 1: No

Box 2: Yes

The ComputerVision.analyzeImageInStreamAsync operation extracts a rich set of visual features based on the image content.

Box 3: No

Images will be read from a stream.
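
The .NET client library exposes the same operation as AnalyzeImageInStreamAsync; a minimal sketch assuming an authenticated ComputerVisionClient named client and a placeholder local path:

using (Stream imageStream = File.OpenRead(@"C:\images\sample.jpg"))
{
    // Analyze the image from the stream rather than from a URL.
    ImageAnalysis analysis = await client.AnalyzeImageInStreamAsync(
        imageStream,
        visualFeatures: new List<VisualFeatureTypes?>
        {
            VisualFeatureTypes.Description,
            VisualFeatureTypes.Tags
        });

    Console.WriteLine(analysis.Description?.Captions?.FirstOrDefault()?.Text);
}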

Reference:

https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision.analyzeimageinstreamasync

HOTSPOT

You have a Computer Vision resource named contoso1 that is hosted in the West US Azure region.

You need to use contoso1 to make a different size of a product photo by using the smart cropping feature.

How should you complete the API URL? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 9
Correct answer: Question 9

Explanation:

Reference:

https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-generating-thumbnails#examples
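
The same smart cropping operation is also exposed through the .NET SDK; a minimal sketch assuming an authenticated ComputerVisionClient named client, a placeholder productPhotoUrl, and an illustrative target size:

// Generate a 200x200 thumbnail with smart cropping enabled, then save it locally.
using (Stream thumbnail = await client.GenerateThumbnailAsync(
    width: 200, height: 200, url: productPhotoUrl, smartCropping: true))
using (FileStream output = File.Create("product-thumbnail.jpg"))
{
    await thumbnail.CopyToAsync(output);
}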

DRAG DROP

You are developing a webpage that will use the Video Indexer service to display videos of internal company meetings.

You embed the Player widget and the Cognitive Insights widget into the page.

You need to configure the widgets to meet the following requirements:

Ensure that users can search for keywords.

Display the names and faces of people in the video.

Show captions in the video in English (United States).

How should you complete the URL for each widget? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Question 10
Correct answer: Question 10

Explanation:

Reference:

https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-embed-widgets
