ExamGecko

Microsoft DP-100 Practice Test - Questions Answers, Page 12

List of questions

Question 111


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using Azure Machine Learning to run an experiment that trains a classification model.

You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:

[Exhibit: HyperDriveConfig code]

You plan to use this configuration to run a script that trains a random forest model and then tests it with validation data. The label values for the validation data are stored in a variable named y_test, and the predicted probabilities from the model are stored in a variable named y_predicted.

You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric.

Solution: Run the following code:

[Exhibit: proposed logging code]

Does the solution meet the goal?

A. Yes
B. No
Suggested answer: A
Explanation:

Hyperdrive reads the primary metric from values logged on the Run object. The solution logs the AUC value with run.log, using the same metric name that is specified as primary_metric_name in the HyperDriveConfig, so Hyperdrive can compare child runs on that metric. By contrast, output written with Python's logging.info(message) goes only to the driver logs and is not recorded as a run metric.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines
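The logging pattern this series of questions tests can be sketched without an Azure connection. The snippet below computes AUC with a small stdlib helper (a real script would typically use sklearn.metrics.roc_auc_score) and shows the run.log call that Hyperdrive actually reads as comments, since it requires azureml-core and a submitted run. The sample labels and probabilities are made up for illustration.

```python
# Minimal sketch (assumed sample data; no Azure connection needed).
def auc_score(y_true, y_score):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_test = [0, 1, 1, 0]                # validation labels (sample values)
y_predicted = [0.4, 0.9, 0.2, 0.3]   # predicted probabilities (sample values)
auc = auc_score(y_test, y_predicted)
print(auc)  # 0.5

# In the real training script (requires azureml-core and a submitted run):
# from azureml.core import Run
# run = Run.get_context()
# run.log('AUC', float(auc))   # 'AUC' must match primary_metric_name
```

The key point is the last commented line: the metric name passed to run.log must match primary_metric_name exactly, or Hyperdrive has nothing to optimize on.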

asked 02/10/2024
Cornel Sasu
41 questions

Question 112


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using Azure Machine Learning to run an experiment that trains a classification model.

You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:

[Exhibit: HyperDriveConfig code]

You plan to use this configuration to run a script that trains a random forest model and then tests it with validation data. The label values for the validation data are stored in a variable named y_test, and the predicted probabilities from the model are stored in a variable named y_predicted.

You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric.

Solution: Run the following code:

[Exhibit: proposed logging code]

Does the solution meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

The proposed code does not log the AUC value as a metric on the Run object, so Hyperdrive cannot read it. To optimize on AUC, the script must call run.log with the same metric name that is specified as primary_metric_name in the HyperDriveConfig; output written with Python's logging.info(message) goes only to the driver logs and is not recorded as a run metric.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines


Question 113


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using Azure Machine Learning to run an experiment that trains a classification model.

You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:

[Exhibit: HyperDriveConfig code]

You plan to use this configuration to run a script that trains a random forest model and then tests it with validation data. The label values for the validation data are stored in a variable named y_test, and the predicted probabilities from the model are stored in a variable named y_predicted.

You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric.

Solution: Run the following code:

[Exhibit: proposed logging code]

Does the solution meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

The proposed code does not log the AUC value as a metric on the Run object, so Hyperdrive cannot read it. To optimize on AUC, the script must call run.log with the same metric name that is specified as primary_metric_name in the HyperDriveConfig; output written with Python's logging.info(message) goes only to the driver logs and is not recorded as a run metric.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines


Question 114


You use the following code to run a script as an experiment in Azure Machine Learning:

[Exhibit: experiment submission code]

You must identify the output files that are generated by the experiment run.

You need to add code to retrieve the output file names.

Which code segment should you add to the script?

A. files = run.get_properties()
B. files = run.get_file_names()
C. files = run.get_details_with_logs()
D. files = run.get_metrics()
E. files = run.get_details()
Suggested answer: B
Explanation:

You can list all of the files that are associated with a run record by calling run.get_file_names().

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-track-experiments
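The convention behind run.get_file_names() is that files the script writes under ./outputs are uploaded to the run record and listed with an outputs/ prefix (for example, 'outputs/model.pkl'). The real call needs azureml-core and a live run, so this local stand-in (directory and file name are made up) only mimics the naming convention:

```python
# Illustration only: mimic the outputs/ naming that run.get_file_names() returns.
import os
import tempfile

outputs = tempfile.mkdtemp()                       # stand-in for ./outputs
open(os.path.join(outputs, 'model.pkl'), 'w').close()

file_names = ['outputs/' + name for name in sorted(os.listdir(outputs))]
print(file_names)  # ['outputs/model.pkl']

# With the SDK (inside or after a submitted run):
# files = run.get_file_names()
```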


Question 115


You write five Python scripts that must be processed in the order specified in Exhibit A, which allows independent modules to run in parallel but waits for modules that have dependencies.

You must create an Azure Machine Learning pipeline using the Python SDK, because you want the script that creates the pipeline to be tracked in your version control system. You have created five PythonScriptSteps and have named the variables to match the module names.

[Exhibit A: module dependency diagram]

You need to create the pipeline shown. Assume all relevant imports have been done.

Which Python code segment should you use?

Suggested answer: A
Explanation:

The steps parameter is an array of steps. To build pipelines that have multiple steps, place the steps in order in this array.

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-run-step


Question 116


You create a datastore named training_data that references a blob container in an Azure Storage account. The blob container contains a folder named csv_files in which multiple comma-separated values (CSV) files are stored.

You have a script named train.py in a local folder named ./script that you plan to run as an experiment using an estimator. The script includes the following code to read data from the csv_files folder:

[Exhibit: train.py data-loading code]

You have the following script.

[Exhibit: estimator configuration options]

You need to configure the estimator for the experiment so that the script can read the data from a data reference named data_ref that references the csv_files folder in the training_data datastore.

Which code should you use to configure the estimator?

Suggested answer: B
Explanation:

Besides passing the dataset through the input parameters of the estimator, you can also pass the dataset through script_params and get the data path (mounting point) in your training script via arguments. This way, you can keep your training script independent of azureml-sdk. In other words, you will be able to use the same training script for local debugging and remote training on any cloud platform.

Example:

from azureml.train.sklearn import SKLearn

script_params = {
    # mount the dataset on the remote compute and pass the mounted path
    # as an argument to the training script
    '--data-folder': mnist_ds.as_named_input('mnist').as_mount(),
    '--regularization': 0.5
}

est = SKLearn(source_directory=script_folder,
              script_params=script_params,
              compute_target=compute_target,
              environment_definition=env,
              entry_script='train_mnist.py')

# Run the experiment
run = experiment.submit(est)
run.wait_for_completion(show_output=True)
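A sketch of the train.py side of this pattern (argument names assumed from the example above): the mounted dataset path arrives as an ordinary command-line argument, so the script needs no azureml imports to read it.

```python
# Parse the arguments the estimator passes to the entry script.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--data-folder', dest='data_folder')
parser.add_argument('--regularization', type=float, default=0.01)

# Simulate the arguments the estimator would pass on the remote compute:
args = parser.parse_args(['--data-folder', '/mnt/mnist', '--regularization', '0.5'])
print(args.data_folder, args.regularization)  # /mnt/mnist 0.5
```

This is what keeps the training script portable: locally you pass a local folder, remotely Azure ML substitutes the mount point.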

Incorrect Answers:

A: A pandas DataFrame is not used in this approach.

Reference:

https://docs.microsoft.com/es-es/azure/machine-learning/how-to-train-with-datasets


Question 117


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

An IT department creates the following Azure resource groups and resources:

[Exhibit: Azure resource groups and resources]

The IT department creates an Azure Kubernetes Service (AKS)-based inference compute target named aks-cluster in the Azure Machine Learning workspace.

You have a Microsoft Surface Book computer with a GPU. Python 3.6 and Visual Studio Code are installed.

You need to run a script that trains a deep neural network (DNN) model and logs the loss and accuracy metrics.

Solution: Attach the mlvm virtual machine as a compute target in the Azure Machine Learning workspace. Install the Azure ML SDK on the Surface Book and run Python code to connect to the workspace. Run the training script as an experiment on the mlvm remote compute resource.

Does the solution meet the goal?

A. Yes
B. No
Suggested answer: A
Explanation:

Use the VM as a compute target.

Note: A compute target is a designated compute resource/environment where you run your training script or host your service deployment. This location may be your local machine or a cloud-based compute resource.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target


Question 118


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

An IT department creates the following Azure resource groups and resources:

[Exhibit: Azure resource groups and resources]

The IT department creates an Azure Kubernetes Service (AKS)-based inference compute target named aks-cluster in the Azure Machine Learning workspace.

You have a Microsoft Surface Book computer with a GPU. Python 3.6 and Visual Studio Code are installed.

You need to run a script that trains a deep neural network (DNN) model and logs the loss and accuracy metrics.

Solution: Install the Azure ML SDK on the Surface Book. Run Python code to connect to the workspace and then run the training script as an experiment on local compute.

Does the solution meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

You need to attach the mlvm virtual machine as a compute target in the Azure Machine Learning workspace and run the experiment there.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target


Question 119


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

An IT department creates the following Azure resource groups and resources:

[Exhibit: Azure resource groups and resources]

The IT department creates an Azure Kubernetes Service (AKS)-based inference compute target named aks-cluster in the Azure Machine Learning workspace.

You have a Microsoft Surface Book computer with a GPU. Python 3.6 and Visual Studio Code are installed.

You need to run a script that trains a deep neural network (DNN) model and logs the loss and accuracy metrics.

Solution: Install the Azure ML SDK on the Surface Book. Run Python code to connect to the workspace. Run the training script as an experiment on the aks-cluster compute target.

Does the solution meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

You need to attach the mlvm virtual machine as a compute target in the Azure Machine Learning workspace; aks-cluster is an inference compute target, not a training target.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target


Question 120


You create a batch inference pipeline by using the Azure ML SDK. You configure the pipeline parameters by executing the following code:

[Exhibit: pipeline parameter configuration code]

You need to obtain the output from the pipeline execution.

Where will you find the output?

A. the digit_identification.py script
B. the debug log
C. the Activity Log in the Azure portal for the Machine Learning workspace
D. the Inference Clusters tab in Machine Learning studio
E. a file named parallel_run_step.txt located in the output folder
Suggested answer: E
Explanation:

output_action (str): how the output is to be organized. Currently supported values are 'append_row' and 'summary_only'.

'append_row': all values output by run() method invocations are aggregated into one unique file named parallel_run_step.txt that is created in the output location.

'summary_only': the user script is expected to store the output itself; only summary information is written to the output location.

Reference:

https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallelrunconfig
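What output_action='append_row' produces can be sketched locally. The file name parallel_run_step.txt comes from the behavior described above; the row contents and temp directory here are made up for illustration, standing in for the step's real output folder.

```python
# Simulate the aggregated output file a batch inference step writes.
import os
import tempfile

output_dir = tempfile.mkdtemp()                    # stand-in for the output folder
rows = ['img_001.png 7', 'img_002.png 3']          # e.g. input file + predicted digit
with open(os.path.join(output_dir, 'parallel_run_step.txt'), 'w') as f:
    f.write('\n'.join(rows))                       # each run() return value is one row

# Consumers read all predictions back from that single file:
with open(os.path.join(output_dir, 'parallel_run_step.txt')) as f:
    predictions = f.read().splitlines()
print(predictions)  # ['img_001.png 7', 'img_002.png 3']
```

Because every mini-batch appends to the same file, this one file is where the pipeline's output ends up, which is why answer E is correct.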

Total 433 questions