Microsoft DP-100 Practice Test - Questions Answers, Page 3
You develop and train a machine learning model to predict fraudulent transactions for a hotel booking website.

Traffic to the site varies considerably. The site experiences heavy traffic on Monday and Friday and much lower traffic on other days. Holidays are also high web traffic days.

You need to deploy the model as an Azure Machine Learning real-time web service endpoint on compute that can dynamically scale up and down to support demand.

Which deployment compute option should you use?

A. attached Azure Databricks cluster
B. Azure Container Instance (ACI)
C. Azure Kubernetes Service (AKS) inference cluster
D. Azure Machine Learning Compute Instance
E. attached virtual machine in a different region
Suggested answer: C

Explanation:

Azure Kubernetes Service (AKS) inference clusters are the recommended deployment target for high-scale production real-time web services. An AKS deployment can automatically scale the number of container replicas up and down to match demand, which suits a site whose traffic varies considerably between weekdays, weekends, and holidays. Azure Container Instances is intended for low-scale dev/test deployments and does not autoscale, while a compute instance or an attached virtual machine provides fixed capacity.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service

Question Set 1
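For context, a real-time endpoint that scales with demand is deployed to an AKS inference cluster with autoscaling enabled. A minimal sketch with the azureml v1 SDK, assuming `ws`, `model`, and `inference_config` already exist and that an AKS cluster named 'aks-cluster' is attached (all names are illustrative, not from the question):

```python
from azureml.core.webservice import AksWebservice
from azureml.core.compute import AksCompute
from azureml.core.model import Model

# Assumes ws (Workspace), model (registered Model), and inference_config
# (InferenceConfig) already exist; 'aks-cluster' is an illustrative name.
aks_target = AksCompute(ws, 'aks-cluster')

# Autoscaling configuration: replica count scales between 1 and 10
# as request load rises and falls.
deployment_config = AksWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    autoscale_enabled=True,
    autoscale_min_replicas=1,
    autoscale_max_replicas=10,
)

service = Model.deploy(ws, 'fraud-service', [model],
                       inference_config, deployment_config, aks_target)
service.wait_for_deployment(show_output=True)
```

The autoscale parameters are the relevant difference from an ACI or fixed-VM deployment, which cannot scale replicas dynamically.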

HOTSPOT

You are a lead data scientist for a project that tracks the health and migration of birds. You create a multi-image classification deep learning model that uses a set of labeled bird photos collected by experts. You plan to use the model to develop a cross-platform mobile app that predicts the species of bird captured by app users.

You must test and deploy the trained model as a web service. The deployed model must meet the following requirements:

An authenticated connection must not be required for testing.

The deployed model must perform with low latency during inferencing.

The REST endpoints must be scalable and must have the capacity to handle a large number of requests when multiple end users are using the mobile application.

You need to verify that the web service returns predictions in the expected JSON format when a valid REST request is submitted.

Which compute resources should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Question 22 (hotspot image and answer area omitted)

Explanation:

Box 1: ds-workstation notebook VM

An authenticated connection must not be required for testing.

On a Microsoft Azure virtual machine (VM), including a Data Science Virtual Machine (DSVM), you create local user accounts while provisioning the VM. Users then authenticate to the VM by using these credentials.

Box 2: gpu-compute cluster

Image classification workloads are well suited to GPU compute clusters.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-common-identity

https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/ai/training-deep-learning
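To verify that the web service returns predictions in the expected JSON format, a REST request is submitted to the scoring URI. A stdlib-only sketch that builds such a request; the scoring URI and payload schema are assumptions for illustration, and the actual send is commented out so the sketch has no network dependency:

```python
import json
import urllib.request

# Hypothetical scoring URI for the deployed web service; with authentication
# disabled, no Authorization header is needed for testing.
scoring_uri = 'http://example.com/score'

# Build the JSON payload in the shape the service expects (schema assumed).
payload = json.dumps({'data': [[0.2, 0.5, 0.1]]}).encode('utf8')
request = urllib.request.Request(
    scoring_uri,
    data=payload,
    headers={'Content-Type': 'application/json'},
)

# Submitting the request would look like this:
# with urllib.request.urlopen(request) as response:
#     predictions = json.loads(response.read())

# Verify the request body round-trips as valid JSON.
decoded = json.loads(request.data.decode('utf8'))
print(decoded)
```

The response body would then be parsed with `json.loads` and checked against the expected prediction schema.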

HOTSPOT

You deploy a model in Azure Container Instance.

You must use the Azure Machine Learning SDK to call the model API.

You need to invoke the deployed model using native SDK classes and methods.

How should you complete the command? To answer, select the appropriate options in the answer areas.

NOTE: Each correct selection is worth one point.


Question 23 (hotspot image and answer area omitted)

Explanation:

Box 1: from azureml.core.webservice import Webservice

The following code shows how to use the SDK to update the model, environment, and entry script for a web service deployed to Azure Container Instances:

from azureml.core import Environment
from azureml.core.webservice import Webservice
from azureml.core.model import Model, InferenceConfig

Box 2: predictions = service.run(input_json)

Example: The following code demonstrates sending data to the service:

import json

test_sample = json.dumps({'data': [
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
]})
test_sample = bytes(test_sample, encoding='utf8')

prediction = service.run(input_data=test_sample)
print(prediction)

Reference:

https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/how-to-deploy-azure-container-instance

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-troubleshoot-deployment

HOTSPOT

You use Azure Machine Learning to train and register a model.

You must deploy the model into production as a real-time web service to an inference cluster named service-compute that the IT department has created in the Azure Machine Learning workspace.

Client applications consuming the deployed web service must be authenticated based on their Azure Active Directory service principal.

You need to write a script that uses the Azure Machine Learning SDK to deploy the model. The necessary modules have been imported.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Question 24 (hotspot image and answer area omitted)

Explanation:

Box 1: AksCompute

Example:

aks_target = AksCompute(ws, "myaks")
# If deploying to a cluster configured for dev/test, ensure that it was created with enough
# cores and memory to handle this deployment configuration. Note that memory is also used by
# things such as dependencies and AML components.
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config, aks_target)

Box 2: AksWebservice

Box 3: token_auth_enabled=True

token_auth_enabled (bool): Whether or not token authentication is enabled for the Webservice. Token authentication requires clients to present an Azure Active Directory token, which is how an Azure AD service principal authenticates.

Note: A Service principal defined in Azure Active Directory (Azure AD) can act as a principal on which authentication and authorization policies can be enforced in Azure Databricks.

The Azure Active Directory Authentication Library (ADAL) can be used to programmatically get an Azure AD access token for a user.

Incorrect Answers:

auth_enabled (bool): Whether or not to enable key auth for this Webservice. Defaults to True.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service

https://docs.microsoft.com/en-us/azure/databricks/dev-tools/api/latest/aad/service-prin-aad-token
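Putting the three selections together; a sketch with the azureml v1 SDK, assuming `ws`, `model`, and `inference_config` already exist and using the inference cluster name from the question (the service name is illustrative):

```python
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice
from azureml.core.model import Model

# Attach to the IT-created inference cluster named in the question.
aks_target = AksCompute(ws, 'service-compute')

# token_auth_enabled=True makes the endpoint require an Azure AD token,
# so client applications authenticate with their service principal.
# Key auth must be disabled when token auth is enabled.
deployment_config = AksWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    auth_enabled=False,
    token_auth_enabled=True,
)

service = Model.deploy(ws, 'model-service', [model],
                       inference_config, deployment_config, aks_target)
```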

DRAG DROP

You use Azure Machine Learning to deploy a model as a real-time web service.

You need to create an entry script for the service that ensures that the model is loaded when the service starts and is used to score new data as it is received.

Which functions should you include in the script? To answer, drag the appropriate functions to the correct actions. Each function may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.


Question 25 (drag-and-drop answer area omitted)

Explanation:

Box 1: init()

The entry script has only two required functions, init() and run(data). These functions are used to initialize the service at startup and run the model using request data passed in by a client. The rest of the script handles loading and running the model(s).

Box 2: run()

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-existing-model
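A minimal entry-script sketch showing the shape of init() and run(). A real script would load a registered model file in init() (for example with joblib); a stand-in scoring function is used here so the sketch runs on its own:

```python
import json

model = None

def init():
    # Runs once when the service starts: load the model into a global.
    # A real entry script would do e.g. model = joblib.load(model_path);
    # a stand-in scorer (row sums) is used here for illustration.
    global model
    model = lambda rows: [sum(row) for row in rows]

def run(raw_data):
    # Runs for every scoring request: parse the JSON payload, score it
    # with the loaded model, and return a JSON-serializable result.
    data = json.loads(raw_data)['data']
    return json.dumps({'predictions': model(data)})

init()
print(run('{"data": [[1, 2, 3], [4, 5, 6]]}'))
```

The deployment framework calls init() once at startup and run() for each request, which is exactly the split the question asks for.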

You use the designer to create a training pipeline for a classification model. The pipeline uses a dataset that includes the features and labels required for model training.

You create a real-time inference pipeline from the training pipeline. You observe that the schema for the generated web service input is based on the dataset and includes the label column that the model predicts. Client applications that use the service must not be required to submit this value.

You need to modify the inference pipeline to meet the requirement.

What should you do?

A. Add a Select Columns in Dataset module to the inference pipeline after the dataset and use it to select all columns other than the label.
B. Delete the dataset from the training pipeline and recreate the real-time inference pipeline.
C. Delete the Web Service Input module from the inference pipeline.
D. Replace the dataset in the inference pipeline with an Enter Data Manually module that includes data for the feature columns but not the label column.
Suggested answer: A

Explanation:

By default, the Web Service Input module expects the same data schema as the module output connected to the same downstream port. You can remove the target variable column in the inference pipeline by using a Select Columns in Dataset module. Make sure that the output of the Select Columns in Dataset module (with the target column removed) is connected to the same port as the output of the Web Service Input module.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-designer-automobile-price-deploy

You use the Azure Machine Learning designer to create and run a training pipeline. You then create a real-time inference pipeline.

You must deploy the real-time inference pipeline as a web service.

What must you do before you deploy the real-time inference pipeline?

A. Run the real-time inference pipeline.
B. Create a batch inference pipeline.
C. Clone the training pipeline.
D. Create an Azure Machine Learning compute cluster.
Suggested answer: D

Explanation:

You need to create an inferencing cluster.

Deploy the real-time endpoint

After your AKS service has finished provisioning, return to the real-time inferencing pipeline to complete deployment.

1. Select Deploy above the canvas.

2. Select Deploy new real-time endpoint.

3. Select the AKS cluster you created.

4. Select Deploy.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-designer-automobile-price-deploy

You create an Azure Machine Learning workspace named ML-workspace. You also create an Azure Databricks workspace named DB-workspace. DB-workspace contains a cluster named DB-cluster.

You must use DB-cluster to run experiments from notebooks that you import into DB-workspace.

You need to use ML-workspace to track MLflow metrics and artifacts generated by experiments running on DB-cluster. The solution must minimize the need for custom code.

What should you do?

A. From DB-cluster, configure the Advanced Logging option.
B. From DB-workspace, configure the Link Azure ML workspace option.
C. From ML-workspace, create an attached compute.
D. From ML-workspace, create a compute cluster.
Suggested answer: B

Explanation:

Connect your Azure Databricks and Azure Machine Learning workspaces:

Linking your ADB workspace to your Azure Machine Learning workspace enables you to track your experiment data in the Azure Machine Learning workspace.

To link your ADB workspace to a new or existing Azure Machine Learning workspace

1. Sign in to Azure portal.

2. Navigate to your ADB workspace's Overview page.

3. Select the Link Azure Machine Learning workspace button on the bottom right.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow-azure-databricks

HOTSPOT

You create an Azure Machine Learning workspace.

You need to detect data drift between a baseline dataset and a subsequent target dataset by using the DataDriftDetector class.

How should you complete the code segment? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Question 29 (hotspot image and answer area omitted)

Explanation:

Box 1: create_from_datasets

The create_from_datasets method creates a new DataDriftDetector object from a baseline tabular dataset and a target time series dataset.

Box 2: backfill

The backfill method runs a backfill job over a given specified start and end date.

Syntax: backfill(start_date, end_date, compute_target=None, create_compute_target=False)

Incorrect Answers:

List and update do not have datetime parameters.

Reference:

https://docs.microsoft.com/en-us/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector(class)
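Putting the two selections together; a sketch with the azureml-datadrift package, assuming `ws`, `baseline_dataset` (tabular), and `target_dataset` (time series) already exist. The monitor name, compute target, and dates are illustrative:

```python
from datetime import datetime
from azureml.datadrift import DataDriftDetector

# Create the drift monitor from a baseline tabular dataset and a
# target time-series dataset ('drift-monitor' and 'cpu-cluster' are
# illustrative names).
monitor = DataDriftDetector.create_from_datasets(
    ws, 'drift-monitor', baseline_dataset, target_dataset,
    compute_target='cpu-cluster',
    frequency='Week',
)

# Run a backfill job over a historical start/end date range.
backfill_run = monitor.backfill(datetime(2021, 1, 1), datetime(2021, 6, 1))
```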

You are planning to register a trained model in an Azure Machine Learning workspace.

You must store additional metadata about the model in a key-value format. You must be able to add new metadata and modify or delete metadata after creation.

You need to register the model.

Which parameter should you use?

A. description
B. model_framework
C. tags
D. properties
Suggested answer: C

Explanation:

azureml.core.Model.tags:

A dictionary of key-value tags for the model. Unlike properties, tags can be modified and removed after registration (for example, with the add_tags and remove_tags methods), so they support adding, changing, and deleting metadata as the requirement states.

Incorrect Answers:

properties: Dictionary of key-value properties for the model. These properties cannot be changed or deleted after registration; only new key-value pairs can be added, so they do not meet the requirement to modify or delete metadata.

Reference:

https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.model.model
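A sketch contrasting the two metadata parameters of Model.register, assuming `ws` and a model file exist (the paths, names, and key-value pairs are illustrative):

```python
from azureml.core import Model

# Register the model with both kinds of key-value metadata.
model = Model.register(
    workspace=ws,
    model_path='outputs/model.pkl',
    model_name='fraud-model',
    tags={'stage': 'dev'},               # mutable after registration
    properties={'train_dataset': 'v1'},  # immutable; new keys only
)

# Tags can be added, changed, and removed later.
model.add_tags({'stage': 'production'})  # overwrites 'stage'
model.remove_tags(['stage'])

# Properties can only gain new keys; existing ones cannot change.
model.add_properties({'validated': 'true'})
```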

Total 433 questions