Microsoft AI-102 Practice Test - Questions Answers, Page 11

You are developing the knowledge base by using Azure Cognitive Search.

You need to meet the knowledge base requirements for searching equivalent terms.

What should you include in the solution?

A. synonym map

B. a suggester

C. a custom analyzer

D. a built-in key phrase extraction skill
Suggested answer: A

Explanation:

Within a search service, synonym maps are a global resource that associate equivalent terms, expanding the scope of a query without the user having to actually provide the term. For example, assuming "dog", "canine", and "puppy" are mapped synonyms, a query on "canine" will match on a document containing "dog".

Create synonyms: A synonym map is an asset that can be created once and used by many indexes.

Reference: https://docs.microsoft.com/en-us/azure/search/search-synonyms
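For illustration, the synonym map from the explanation could be created with the Create Synonym Map REST API. The sketch below builds only the request body; the service name, map name, and api-version in the comment are placeholders, not values from the scenario:

```python
import json

# Sketch of the body for:
#   PUT https://<service>.search.windows.net/synonymmaps/<map-name>?api-version=<ver>
# "solr" is the supported rule format; each comma-separated group lists
# equivalent terms, so a query on "canine" also matches "dog" and "puppy".
synonym_map = {
    "name": "mysynonymmap",
    "format": "solr",
    "synonyms": "dog, canine, puppy",
}

payload = json.dumps(synonym_map)
print(payload)
```

Once created, the map is referenced from the `synonymMaps` property of a searchable index field, which is what makes it reusable across many indexes.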

You are developing the chatbot.

You create the following components:

A QnA Maker resource

A chatbot by using the Azure Bot Framework SDK

You need to add an additional component to meet the technical requirements and the chatbot requirements.

What should you add?

A. Microsoft Translator

B. Language Understanding

C. Dispatch

D. chatdown
Suggested answer: C

Explanation:

Scenario: All planned projects must support English, French, and Portuguese.

If a bot uses multiple LUIS models and QnA Maker knowledge bases, you can use the Dispatch tool to determine which LUIS model or QnA Maker knowledge base best matches the user input. The Dispatch tool does this by creating a single LUIS app to route user input to the correct model.

Reference:

https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-tutorial-dispatch
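The routing idea behind Dispatch can be sketched in plain Python: the parent dispatch app returns a confidence score per child service, and the bot forwards the utterance to whichever scored highest. The `l_`/`q_` prefixes follow the naming convention the Dispatch tool generates, but the intent names and scores below are illustrative only:

```python
def route(dispatch_result: dict) -> str:
    """Pick the child service whose dispatch intent scored highest.

    `dispatch_result` mimics the shape of a prediction from the parent
    dispatch LUIS app: intent name -> confidence score. Intents prefixed
    'l_' map to child LUIS models, 'q_' to QnA Maker knowledge bases.
    """
    top_intent = max(dispatch_result, key=dispatch_result.get)
    if top_intent.startswith("l_"):
        return f"LUIS model: {top_intent[2:]}"
    if top_intent.startswith("q_"):
        return f"QnA Maker KB: {top_intent[2:]}"
    return "None (fall back to default handler)"

print(route({"l_HomeAutomation": 0.27, "q_FAQ": 0.91}))  # -> QnA Maker KB: FAQ
```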

You are developing the chatbot.

You create the following components:

A QnA Maker resource

A chatbot by using the Azure Bot Framework SDK

You need to integrate the components to meet the chatbot requirements.

Which property should you use?

A. QnAMakerOptions.StrictFilters

B. QnADialogResponseOptions.CardNoMatchText

C. QnAMakerOptions.RankerType

D. QnAMakerOptions.ScoreThreshold
Suggested answer: C

Explanation:

Scenario: When the response confidence score is low, ensure that the chatbot can provide other response options to the customers.

When no good match is found by the ranker, a confidence score of 0.0 or "None" is returned, and the default response is "No good match found in the KB". You can override this default response in the bot or application code calling the endpoint.

Alternately, you can set the override response in Azure; this changes the default for all knowledge bases deployed in a particular QnA Maker service.

Choosing Ranker type: By default, QnA Maker searches through questions and answers. If you want to search through questions only, to generate an answer, use the RankerType=QuestionOnly in the POST body of the GenerateAnswer request.

Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/best-practices
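The ranker and threshold settings mentioned above appear in the body of the GenerateAnswer request. The sketch below shows only an illustrative request body; the endpoint and knowledge-base ID in the comment, and the question text, are placeholders:

```python
import json

# Sketch of a GenerateAnswer request body, sent to:
#   POST https://<resource>.azurewebsites.net/qnamaker/knowledgebases/<kb-id>/generateAnswer
generate_answer_body = {
    "question": "How do I reset my password?",
    "top": 3,                      # return up to three ranked answers
    "scoreThreshold": 30,          # drop answers with a confidence score below 30
    "rankerType": "QuestionOnly",  # rank against questions only, not answer text
}
print(json.dumps(generate_answer_body))
```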

DRAG DROP

You are developing a solution for the Management-Bookkeepers group to meet the document processing requirements. The solution must contain the following components:

A Form Recognizer resource

An Azure web app that hosts the Form Recognizer sample labeling tool

The Management-Bookkeepers group needs to create a custom table extractor by using the sample labeling tool.

Which three actions should the Management-Bookkeepers group perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Correct answer: see the three ordered steps in the explanation.

Explanation:

Step 1: Create a new project and load sample documents

Create a new project. Projects store your configurations and settings.

Step 2: Label the sample documents

When you create or open a project, the main tag editor window opens.

Step 3: Train a custom model.

Reference: https://docs.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/label-tool

You are developing the knowledge base.

You use Azure Video Analyzer for Media (previously Video Indexer) to obtain transcripts of webinars.

You need to ensure that the solution meets the knowledge base requirements.

What should you do?

A. Create a custom language model

B. Configure audio indexing for videos only

C. Enable multi-language detection for videos

D. Build a custom Person model for webinar presenters
Suggested answer: B

Explanation:

Scenario: Can search content in different formats, including video.

Audio and video insights (multi-channel). When indexing by one channel, partial results for those models will be available:

Keywords extraction: Extracts keywords from speech and visual text.

Named entities extraction: Extracts brands, locations, and people from speech and visual text via natural language processing (NLP).

Topic inference: Makes inference of main topics from transcripts. The second-level IPTC taxonomy is included.

Artifacts: Extracts a rich set of "next level of details" artifacts for each of the models.

Sentiment analysis: Identifies positive, negative, and neutral sentiments from speech and visual text.

Incorrect answers:

C: Webinar videos are in English.

Reference: https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview
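As a hedged sketch of what "audio indexing for videos only" looks like in practice, the Video Indexer upload call accepts an indexing preset; `AudioOnly` runs just the audio pipeline, which is enough to produce a transcript. The video name and language value below are placeholders:

```python
from urllib.parse import urlencode

# Illustrative query parameters for a Video Indexer upload-video request.
# "AudioOnly" asks the service to skip the visual pipeline and index
# only the audio track - sufficient when only the transcript is needed.
params = {
    "name": "webinar-recording",
    "indexingPreset": "AudioOnly",
    "language": "en-US",
}
print(urlencode(params))
```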

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1.

You plan to create a new Azure Cognitive Search service named service1.

You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet.

Solution: You deploy service1 and a private endpoint to vnet1.

Does this meet the goal?

A. Yes

B. No
Suggested answer: A

Explanation:

A private endpoint is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network.

The service could be an Azure service such as:

Azure Storage

Azure Cosmos DB

Azure SQL Database

Your own service using a Private Link Service.

Reference:

https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview

You have a Language Understanding resource named lu1.

You build and deploy an Azure bot named bot1 that uses lu1.

You need to ensure that bot1 adheres to the Microsoft responsible AI principle of inclusiveness.

How should you extend bot1?

A. Implement authentication for bot1.

B. Enable active learning for lu1.

C. Host lu1 in a container.

D. Add Direct Line Speech to bot1.
Suggested answer: D

Explanation:

Inclusiveness: AI systems should empower everyone and engage people.

Direct Line Speech is a robust, end-to-end solution for creating a flexible, extensible voice assistant. It is powered by the Bot Framework and its Direct Line Speech channel, which is optimized for voice-in, voice-out interaction with bots.

Incorrect:

Not B: The Active learning suggestions feature allows you to improve the quality of your knowledge base by suggesting alternative questions, based on user-submissions, to your question and answer pair. You review those suggestions, either adding them to existing questions or rejecting them.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/direct-line-speech


Your company uses an Azure Cognitive Services solution to detect faces in uploaded images. The method to detect the faces uses the following code.

You discover that the solution frequently fails to detect faces in blurred images and in images that contain sideways faces.

You need to increase the likelihood that the solution can detect faces in blurred images and images that contain sideways faces.

What should you do?

A. Use a different version of the Face API.

B. Use the Computer Vision service instead of the Face service.

C. Use the Identify method instead of the Detect method.

D. Change the detection model.
Suggested answer: D

Explanation:

Evaluate different models.

The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the Face - Detect API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.

The different face detection models are optimized for different tasks. See the following table for an overview of the differences.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/specify-detection-model
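In practice, "change the detection model" means passing a different `detectionModel` value to the Face - Detect call. The sketch below only assembles an illustrative query string; `detection_03` is the newer model that handles small, blurry, and rotated faces better than the default `detection_01`:

```python
from urllib.parse import urlencode

# Illustrative query parameters for a Face - Detect request.
# Switching detectionModel from the default "detection_01" to
# "detection_03" improves detection of blurry and sideways faces.
detect_params = {
    "detectionModel": "detection_03",
    "recognitionModel": "recognition_04",
    "returnFaceId": "true",
}
print(urlencode(detect_params))
```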

You have the following C# method.

You need to deploy an Azure resource to the East US Azure region. The resource will be used to perform sentiment analysis.

How should you call the method?

A. create_resource("res1", "ContentModerator", "S0", "eastus")

B. create_resource("res1", "TextAnalytics", "S0", "eastus")

C. create_resource("res1", "ContentModerator", "Standard", "East US")

D. create_resource("res1", "TextAnalytics", "Standard", "East US")
Suggested answer: B

Explanation:

To perform sentiment analysis, we specify TextAnalytics, not ContentModerator.

Possible SKU names include: 'F0', 'F1', 'S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7', 'S8'.

Possible location names include: westus, eastus.

Reference:

https://docs.microsoft.com/en-us/powershell/module/az.cognitiveservices/new-azcognitiveservicesaccount
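A hypothetical Python stand-in for the method (the original C# code is not reproduced in this dump) makes the distinction concrete: SKU names are codes like S0, and locations use the short region name, so option B passes while options C and D fail validation. The function and its checks below are illustrative, not the actual method from the question:

```python
# Hypothetical validator mirroring the constraints from the explanation.
VALID_SKUS = {"F0", "F1", "S0", "S1", "S2", "S3", "S4", "S5", "S6", "S7", "S8"}
VALID_LOCATIONS = {"westus", "eastus"}  # abbreviated; the real region list is longer


def create_resource(name: str, kind: str, sku: str, location: str) -> dict:
    """Validate the SKU code and short region name, then describe the resource."""
    if sku not in VALID_SKUS:
        raise ValueError(f"invalid SKU name: {sku}")
    if location not in VALID_LOCATIONS:
        raise ValueError(f"invalid location: {location}")
    return {"name": name, "kind": kind, "sku": sku, "location": location}


# Option B succeeds; option D would fail on both "Standard" and "East US".
print(create_resource("res1", "TextAnalytics", "S0", "eastus"))
```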

You build a Language Understanding model by using the Language Understanding portal.

You export the model as a JSON file as shown in the following sample.

To what does the Weather.Historic entity correspond in the utterance?

A. by month

B. chicago

C. rain

D. location
Suggested answer: A