Microsoft AI-102 Practice Test - Questions Answers, Page 10

HOTSPOT

You need to develop code to upload images for the product creation project. The solution must meet the accessibility requirements.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 91
Correct answer: Question 91

Explanation:

Reference:

https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/documentation-samples/quickstarts/ComputerVision/Program.cs

HOTSPOT

You are developing the shopping on-the-go project.

You are configuring access to the QnA Maker resources.

Which role should you assign to AllUsers and LeadershipTeam? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 92
Correct answer: Question 92

Explanation:

Box 1: QnA Maker Editor

Scenario: Provide all employees with the ability to edit Q&As.

The QnA Maker Editor (read/write) has the following permissions:

Create KB API

Update KB API

Replace KB API

Replace Alterations

"Train API" [in new service model v5]

Box 2: Contributor

Scenario: Only senior managers must be able to publish updates.

The Contributor role has all permissions except the ability to add new members to roles.

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/reference-role-based-access-control

HOTSPOT

You are developing the shopping on-the-go project.

You need to build the Adaptive Card for the chatbot.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 93
Correct answer: Question 93

Explanation:

Box 1: name[language]

Chatbot must support interactions in English, Spanish, and Portuguese.

Box 2: "$when:${stockLevel != 'OK'}"

Product displays must include images and warnings when stock levels are low or out of stock.

Box 3: image.altText[language]

HOTSPOT

You are developing the shopping on-the-go project.

You need to build the Adaptive Card for the chatbot.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 94
Correct answer: Question 94

Explanation:

Box 1: name.en

Box 2: "$when": "${stockLevel != 'OK'}"

Product displays must include images and warnings when stock levels are low or out of stock.

Box 3: image.altText.en
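The three answer boxes can be sketched as an Adaptive Card body built in Python. The field names (name, stockLevel, image.altText) come from the question; the warning binding and the card version are assumptions, not the exam's exact JSON.

```python
# Sketch (assumptions noted above) of the Adaptive Card template fragment
# implied by the answer boxes, using the .en bindings from this question.

def build_card_template():
    """Return an Adaptive Card template that binds English text fields and
    shows a warning block only when stock is not OK."""
    return {
        "type": "AdaptiveCard",
        "version": "1.3",  # assumed version
        "body": [
            # Box 1: product name bound to the English variant
            {"type": "TextBlock", "text": "${name.en}"},
            # Box 2: warning rendered only when stock is low/out of stock
            {
                "type": "TextBlock",
                "$when": "${stockLevel != 'OK'}",
                "text": "${warning}",  # hypothetical warning field
            },
            # Box 3: image with English alt text (accessibility requirement)
            {
                "type": "Image",
                "url": "${image.url}",
                "altText": "${image.altText.en}",
            },
        ],
    }

card = build_card_template()
```

The `$when` property is evaluated by the Adaptive Card templating engine, so the warning TextBlock is dropped from the rendered card whenever `stockLevel` equals 'OK'.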

You are developing the smart e-commerce project.

You need to implement autocompletion as part of the Cognitive Search solution.

Which three actions should you perform? Each correct answer presents part of the solution. (Choose three.)

NOTE: Each correct selection is worth one point.

A. Make API queries to the autocomplete endpoint and include suggesterName in the body.

B. Add a suggester that has the three product name fields as source fields.

C. Make API queries to the search endpoint and include the product name fields in the searchFields query parameter.

D. Add a suggester for each of the three product name fields.

E. Set the searchAnalyzer property for the three product name variants.

F. Set the analyzer property for the three product name variants.
Suggested answer: A, B, F

Explanation:

Scenario: Support autocompletion and autosuggestion based on all product name variants.

A: Call a suggester-enabled query, in the form of a Suggestion request or Autocomplete request, using an API. API usage is illustrated in the following call to the Autocomplete REST API.

POST /indexes/myxboxgames/docs/autocomplete?search&api-version=2020-06-30
{
  "search": "minecraf",
  "suggesterName": "sg"
}

B: In Azure Cognitive Search, typeahead or "search-as-you-type" is enabled through a suggester. A suggester provides a list of fields that undergo additional tokenization, generating prefix sequences to support matches on partial terms. For example, a suggester that includes a City field with a value for "Seattle" will have prefix combinations of "sea", "seat", "seatt", and "seattl" to support typeahead.

F: Use the default standard Lucene analyzer ("analyzer": null) or a language analyzer (for example, "analyzer": "en.microsoft") on the field.

Reference:

https://docs.microsoft.com/en-us/azure/search/index-add-suggesters
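The three correct answers can be sketched as the JSON payloads sent to Azure Cognitive Search. This is a sketch, not a definitive implementation: the index field names, suggester name, and analyzer choices are assumptions.

```python
import json

# B + F: index fragment with one suggester covering the three product name
# variant fields, each field using a language analyzer (the analyzer property).
# Field and suggester names are hypothetical.
index_fragment = {
    "fields": [
        {"name": "nameEn", "type": "Edm.String", "analyzer": "en.microsoft"},
        {"name": "nameEs", "type": "Edm.String", "analyzer": "es.microsoft"},
        {"name": "namePt", "type": "Edm.String", "analyzer": "pt-Pt.microsoft"},
    ],
    "suggesters": [
        {
            "name": "sg",
            "searchMode": "analyzingInfixMatching",
            "sourceFields": ["nameEn", "nameEs", "namePt"],
        }
    ],
}

# A: the autocomplete request body names the suggester, as in the POST
# example above.
autocomplete_body = {"search": "minecraf", "suggesterName": "sg"}
payload = json.dumps(autocomplete_body)
```

A single suggester can span all three name variants (answer B), which is why answer D (one suggester per field) is unnecessary.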

HOTSPOT

You are planning the product creation project.

You need to build the REST endpoint to create the multilingual product descriptions.

How should you complete the URI? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 96
Correct answer: Question 96

Explanation:

Box 1: api.cognitive.microsofttranslator.com

Translator 3.0: Translate. Send a POST request to:

https://api.cognitive.microsofttranslator.com/translate?api-version=3.0

Box 2: /translate

Reference:

https://docs.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-translate
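A minimal sketch of assembling the Translator 3.0 request described by the completed URI. The target languages follow the scenario (Spanish and Portuguese alongside English source text); the key, region, and sample text are placeholders.

```python
from urllib.parse import urlencode

# Box 1 + Box 2: global Translator endpoint plus the /translate path.
endpoint = "https://api.cognitive.microsofttranslator.com"
params = urlencode({"api-version": "3.0", "to": ["es", "pt"]}, doseq=True)
url = f"{endpoint}/translate?{params}"
# url: https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es&to=pt

# POST body: an array of objects with a Text property (sample text assumed).
body = [{"Text": "Ergonomic wireless mouse"}]
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",        # placeholder
    "Ocp-Apim-Subscription-Region": "<your-region>",  # placeholder
    "Content-Type": "application/json",
}
# A real call would be: requests.post(url, headers=headers, json=body)
```

Repeating the `to` parameter returns all requested translations in a single response, one entry per target language.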

DRAG DROP

You are developing the smart e-commerce project.

You need to design the skillset to include the contents of PDFs in searches.

How should you complete the skillset design diagram? To answer, drag the appropriate services to the correct stages. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Question 97
Correct answer: Question 97

Explanation:

Box 1: Azure Blob storage

At the start of the pipeline, you have unstructured text or non-text content (such as images, scanned documents, or JPEG files). Data must exist in an Azure data storage service that can be accessed by an indexer.

Box 2: Computer Vision API

Scenario: Provide users with the ability to search insight gained from the images, manuals, and videos associated with the products.

The Computer Vision Read API is Azure's latest OCR technology (learn what's new) that extracts printed text (in several languages), handwritten text (English only), digits, and currency symbols from images and multi-page PDF documents.

Box 3: Translator API

Scenario: Product descriptions, transcripts, and alt text must be available in English, Spanish, and Portuguese.

Box 4: Azure Files

Scenario: Store all raw insight data that was generated, so the data can be processed later.

Incorrect Answers:

The custom vision API from Microsoft Azure learns to recognize specific content in imagery and becomes smarter with training and time.

Reference:

https://docs.microsoft.com/en-us/azure/search/cognitive-search-concept-intro

https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview-ocr
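The enrichment stages in Boxes 2 and 3 can be sketched as a skillset fragment: an OCR skill (Computer Vision Read) feeding a text translation skill. The skillset name, context paths, and target names are assumptions, not the exam's exact JSON.

```python
# Hedged sketch of the skillset the diagram implies. Only the @odata.type
# values are standard; everything else is illustrative.
skillset = {
    "name": "pdf-skillset",  # hypothetical name
    "skills": [
        {
            # Box 2: extract printed/handwritten text from PDF page images
            "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
            "context": "/document/normalized_images/*",
            "inputs": [{"name": "image", "source": "/document/normalized_images/*"}],
            "outputs": [{"name": "text", "targetName": "extractedText"}],
        },
        {
            # Box 3: translate the extracted text (en/es/pt per the scenario)
            "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
            "context": "/document/normalized_images/*",
            "defaultToLanguageCode": "en",
            "inputs": [
                {"name": "text", "source": "/document/normalized_images/*/extractedText"}
            ],
            "outputs": [{"name": "translatedText"}],
        },
    ],
}
```

Chaining works by pointing the translation skill's input `source` at the OCR skill's `targetName` within the same context.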

You are developing the document processing workflow.

You need to identify which API endpoints to use to extract text from the financial documents. The solution must meet the document processing requirements. Which two API endpoints should you identify? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. /vision/v3.1/read/analyzeResults

B. /formrecognizer/v2.0/custom/models/{modelId}/analyze

C. /formrecognizer/v2.0/prebuilt/receipt/analyze

D. /vision/v3.1/describe

E. /vision/v3.1/read/analyze
Suggested answer: C, E

Explanation:

C: Analyze Receipt - Get Analyze Receipt Result.

Query the status and retrieve the result of an Analyze Receipt operation.

Request URL: https://{endpoint}/formrecognizer/v2.0-preview/prebuilt/receipt/analyzeResults/{resultId}

E: POST {Endpoint}/vision/v3.1/read/analyze

Use this interface to get the result of a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.

Scenario: Contoso plans to develop a document processing workflow to extract information automatically from PDFs and images of financial documents. The document processing solution must be able to process standardized financial documents that have the following characteristics:

- Contain fewer than 20 pages.

- Be formatted as PDF or JPEG files.

- Have a distinct standard for each office.

The document processing solution must be able to extract tables and text from the financial documents, and must be able to extract information from receipt images.

Reference:

https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2preview/operations/GetAnalyzeReceiptResult

https://docs.microsoft.com/en-us/rest/api/computervision/3.1/read/read
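The two correct endpoints can be sketched as URL construction. The resource host is a placeholder; the paths come from the answer options themselves.

```python
# Placeholder Cognitive Services resource endpoint.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"

# C: prebuilt receipt model (Form Recognizer v2.0) for receipt images.
receipt_analyze = f"{endpoint}/formrecognizer/v2.0/prebuilt/receipt/analyze"

# E: Read API (Computer Vision v3.1) for text-heavy PDFs and images.
read_analyze = f"{endpoint}/vision/v3.1/read/analyze"

# Both operations are asynchronous: the POST returns 202 Accepted with an
# Operation-Location header pointing at an analyzeResults/{resultId} URL
# that you poll until the status is "succeeded".
```

Option A (`read/analyzeResults`) is only the polling half of the Read operation, which is why it is not a correct standalone answer.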

HOTSPOT

You are developing the knowledgebase by using Azure Cognitive Search.

You need to build a skill that will be used by indexers.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question 99
Correct answer: Question 99

Explanation:

Box 1: "categories": ["Locations", "Persons", "Organizations"]

Locations, Persons, and Organizations are in the outputs.

Scenario: Contoso plans to develop a searchable knowledgebase of all the intellectual property.

Note: The categories parameter is an array of categories that should be extracted. Possible category types: "Person", "Location", "Organization", "Quantity", "Datetime", "URL", "Email". If no category is provided, all types are returned.

Box 2: {"name": "entities"}

The solution includes wikis, so the outputs should include entities.

Note: entities is an array of complex types that contains rich information about the entities extracted from text, with the following fields:

name (the actual entity name; this represents a "normalized" form)

wikipediaId

wikipediaLanguage

wikipediaUrl (a link to the Wikipedia page for the entity)

etc.

Reference: https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-entity-recognition
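The two answer boxes can be sketched as the entity recognition skill definition. The context, language code, and input source path are assumptions; the categories array and the entities output come from the answer.

```python
# Hedged reconstruction of the skill the boxes describe (v1 skill type,
# matching the referenced documentation of this era).
skill = {
    "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
    "context": "/document",            # assumed context
    "categories": ["Locations", "Persons", "Organizations"],  # Box 1
    "defaultLanguageCode": "en",       # assumed
    "inputs": [{"name": "text", "source": "/document/content"}],
    "outputs": [
        {"name": "locations"},
        {"name": "persons"},
        {"name": "organizations"},
        {"name": "entities"},          # Box 2: rich info incl. wikipediaUrl
    ],
}
```

The per-category outputs (locations, persons, organizations) give plain strings, while the entities output carries the complex records with Wikipedia links that the knowledgebase needs.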

You are developing the knowledgebase by using Azure Cognitive Search.

You need to process wiki content to meet the technical requirements.

What should you include in the solution?

A. an indexer for Azure Blob storage attached to a skillset that contains the language detection skill and the text translation skill

B. an indexer for Azure Blob storage attached to a skillset that contains the language detection skill

C. an indexer for Azure Cosmos DB attached to a skillset that contains the document extraction skill and the text translation skill

D. an indexer for Azure Cosmos DB attached to a skillset that contains the language detection skill and the text translation skill
Suggested answer: C

Explanation:

The wiki contains text in English, French and Portuguese.

Scenario: All planned projects must support English, French, and Portuguese.

The Document Extraction skill extracts content from a file within the enrichment pipeline. This allows you to take advantage of the document extraction step that normally happens before the skillset execution with files that may be generated by other skills.

Note: The Translator Text API will be used to determine the source ("from") language, so the language detection skill is not required.

Incorrect Answers:

Not A, not B: The wiki is stored in Azure Cosmos DB.

Reference:

https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction

https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-text-translation
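Answer C's two pieces can be sketched as data: a Cosmos DB data source for the wiki, paired with a skillset containing the document extraction and text translation skills. The data source name, container name, and connection string are placeholders.

```python
# Hedged sketch of the pairing answer C describes. Only the type strings
# and @odata.type values are standard; names are hypothetical.
datasource = {
    "name": "wiki-cosmosdb",
    "type": "cosmosdb",
    "credentials": {"connectionString": "<connection-string>"},  # placeholder
    "container": {"name": "wiki"},
}

skillset_skills = [
    {"@odata.type": "#Microsoft.Skills.Util.DocumentExtractionSkill"},
    {
        "@odata.type": "#Microsoft.Skills.Text.TranslationSkill",
        "defaultToLanguageCode": "en",  # scenario targets en/fr/pt
    },
]
```

An indexer then ties the data source to the skillset, which is why answers A and B (Blob storage sources) fail against a wiki stored in Cosmos DB.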
