
Microsoft DP-100 Practice Test - Questions and Answers, Page 22

List of questions

Question 211


HOTSPOT

You create an Azure Databricks workspace and a linked Azure Machine Learning workspace.

You have the following Python code segment in the Azure Machine Learning workspace:

import mlflow
import mlflow.azureml
import azureml.mlflow
import azureml.core
from azureml.core import Workspace

subscription_id = 'subscription_id'
resource_group = 'resource_group_name'
workspace_name = 'workspace_name'

ws = Workspace.get(name=workspace_name,
                   subscription_id=subscription_id,
                   resource_group=resource_group)

experimentName = "/Users/{user_name}/{experiment_folder}/{experiment_name}"
mlflow.set_experiment(experimentName)

uri = ws.get_mlflow_tracking_uri()
mlflow.set_tracking_uri(uri)

Instructions: For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.
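
For reference, the documented pattern for sending MLflow runs to an Azure Machine Learning workspace sets the tracking URI before selecting the experiment. A minimal sketch of that ordering, using placeholder names, is shown below; it is an illustration of the general pattern, not a reproduction of the code in the question.

import mlflow
from azureml.core import Workspace

# Placeholder identifiers -- substitute real subscription, resource group, and workspace values.
ws = Workspace.get(name='workspace_name',
                   subscription_id='subscription_id',
                   resource_group='resource_group_name')

# Point MLflow at the workspace's tracking store first ...
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())

# ... then choose the experiment that subsequent runs are logged to.
mlflow.set_experiment('example-experiment')  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_metric('sample_metric', 1.0)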



Question 212


You create and register a model in an Azure Machine Learning workspace.

You must use the Azure Machine Learning SDK to implement a batch inference pipeline that uses a ParallelRunStep to score input data using the model. You must specify a value for the ParallelRunConfig compute_target setting of the pipeline step.

You need to create the compute target.

Which class should you use?
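
As context, ParallelRunStep executes on a multi-node Azure Machine Learning compute cluster, and the cluster object is what gets supplied as the compute_target of the ParallelRunConfig. A hedged sketch with hypothetical names (cluster name, script folder, entry script, and environment are all placeholders):

from azureml.core import Workspace, Environment
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.pipeline.steps import ParallelRunConfig

ws = Workspace.from_config()

# Provision (or reuse) a multi-node compute cluster for the batch scoring step.
cluster_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS3_V2',
                                                       max_nodes=4)
compute_target = ComputeTarget.create(ws, 'batch-cluster', cluster_config)
compute_target.wait_for_completion(show_output=True)

# The cluster is then passed as the compute_target of the ParallelRunConfig.
parallel_run_config = ParallelRunConfig(
    source_directory='scripts',                # hypothetical folder holding the scoring script
    entry_script='batch_score.py',             # hypothetical entry script
    mini_batch_size='5',
    error_threshold=10,
    output_action='append_row',
    environment=Environment.get(ws, 'scoring-env'),  # hypothetical registered environment
    compute_target=compute_target,
    node_count=4)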


Question 213


DRAG DROP

You previously deployed a model that was trained using a tabular dataset named training-dataset, which is based on a folder of CSV files.

Over time, you have collected the features and predicted labels generated by the model in a folder containing a CSV file for each month. You have created two tabular datasets based on the folder containing the inference data: one named predictions-dataset with a schema that matches the training data exactly, including the predicted label; and another named features-dataset with a schema containing all of the feature columns and a timestamp column based on the filename, which includes the day, month, and year.

You need to create a data drift monitor to identify any changing trends in the feature data since the model was trained. To accomplish this, you must define the required datasets for the data drift monitor.

Which datasets should you use to configure the data drift monitor? To answer, drag the appropriate datasets to the correct data drift monitor options. Each source may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
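
For background, the azureml-datadrift SDK builds a monitor from a baseline tabular dataset and a target tabular dataset that carries a timestamp column. The sketch below uses placeholder dataset names; the item itself asks which of the registered datasets fills each role.

from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()

# Placeholder names -- the question asks which registered datasets belong in each role.
baseline_ds = Dataset.get_by_name(ws, 'baseline-dataset')   # data representing the training distribution
target_ds = Dataset.get_by_name(ws, 'target-dataset')       # inference data with a timestamp column

monitor = DataDriftDetector.create_from_datasets(
    ws,
    name='feature-drift-monitor',
    baseline_data_set=baseline_ds,
    target_data_set=target_ds,       # must be a tabular dataset with a timestamp column set
    compute_target='cpu-cluster',    # hypothetical compute cluster name
    frequency='Month',
    drift_threshold=0.3)

monitor.enable_schedule()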



Question 214


You plan to run a Python script as an Azure Machine Learning experiment.

The script contains the following code:

import os, argparse, glob
from azureml.core import Run

parser = argparse.ArgumentParser()
parser.add_argument('--input-data', type=str, dest='data_folder')
args = parser.parse_args()

data_path = args.data_folder
file_paths = glob.glob(data_path + "/*.jpg")

You must specify a file dataset as an input to the script. The dataset consists of multiple large image files and must be streamed directly from its source.

You need to write code to define a ScriptRunConfig object for the experiment and pass the ds dataset as an argument.

Which code segment should you use?
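
For context, a registered FileDataset can be passed on the script's command line as a mount so that the image files stream from the underlying store rather than being downloaded first. A minimal sketch with hypothetical names (dataset, folder, script, compute target, and environment are all placeholders):

from azureml.core import Workspace, Dataset, Environment, Experiment, ScriptRunConfig

ws = Workspace.from_config()
ds = Dataset.get_by_name(ws, 'image-files')      # hypothetical registered file dataset

src = ScriptRunConfig(
    source_directory='scripts',                  # hypothetical folder containing the script
    script='process_images.py',                  # hypothetical script name
    arguments=['--input-data', ds.as_mount()],   # mount so large files stream from the source
    compute_target='gpu-cluster',                # hypothetical compute target name
    environment=Environment.get(ws, 'image-env'))  # hypothetical registered environment

run = Experiment(ws, 'image-experiment').submit(src)

Inside the script, args.data_folder then resolves to the mount point that glob scans for *.jpg files.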


Question 215


DRAG DROP

You need to correct the model fit issue.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.



Question 216


You need to implement a scaling strategy for the local penalty detection data.

Which normalization type should you use?


Question 217


You need to implement a feature engineering strategy for the crowd sentiment local models.

What should you do?


Question 218


You need to implement a model development strategy to determine a user's tendency to respond to an ad.

Which technique should you use?


Question 219


You need to implement a new cost factor scenario for the ad response models as illustrated in the performance curve exhibit.

Which technique should you use?


Question 220


HOTSPOT

You need to use the Python language to build a sampling strategy for the global penalty detection models.

How should you complete the code segment? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

