Microsoft DP-100 Practice Test - Questions & Answers

List of questions

Question 1


You create a deep learning model for image recognition on Azure Machine Learning service using GPU-based training.

You must deploy the model to a context that allows for real-time GPU-based inferencing.

You need to configure compute resources for model inferencing.

Which compute type should you use?

A. Azure Container Instance
B. Azure Kubernetes Service
C. Field Programmable Gate Array
D. Machine Learning Compute

Suggested answer: B
Explanation:

You can use Azure Machine Learning to deploy a GPU-enabled model as a web service. Deploying a model on Azure Kubernetes Service (AKS) is one option. The AKS cluster provides a GPU resource that is used by the model for inference.

Inference, or model scoring, is the phase where the deployed model is used to make predictions. Using GPUs instead of CPUs offers performance advantages on highly parallelizable computation.
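For context, a minimal deployment sketch using the v1 Python SDK; the cluster name, model name, entry script, and environment file are illustrative assumptions, not part of the question:

from azureml.core import Workspace, Model, Environment
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice

ws = Workspace.from_config()

# Provision an AKS inference cluster with GPU nodes (NC-series VM size).
prov_config = AksCompute.provisioning_configuration(vm_size="Standard_NC6")
aks_target = ComputeTarget.create(ws, "gpu-aks", prov_config)
aks_target.wait_for_completion(show_output=True)

# score.py and environment.yml are hypothetical scoring assets.
env = Environment.from_conda_specification("gpu-inference-env", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy the registered model as a real-time web service on the GPU cluster.
deploy_config = AksWebservice.deploy_configuration(gpu_cores=1, memory_gb=4)
service = Model.deploy(ws, "gpu-inference-svc", [Model(ws, "image-model")],
                       inference_config, deploy_config, deployment_target=aks_target)
service.wait_for_deployment(show_output=True)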

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-inferencing-gpus


Question 2


You create a batch inference pipeline by using the Azure ML SDK. You run the pipeline by using the following code:

from azureml.pipeline.core import Pipeline
from azureml.core.experiment import Experiment

pipeline = Pipeline(workspace=ws, steps=[parallelrun_step])
pipeline_run = Experiment(ws, 'batch_pipeline').submit(pipeline)

You need to monitor the progress of the pipeline execution.

What are two possible ways to achieve this goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Run the following code in a notebook: [code shown in an image]
B. Use the Inference Clusters tab in Machine Learning Studio.
C. Use the Activity log in the Azure portal for the Machine Learning workspace.
D. Run the following code in a notebook: [code shown in an image]
E. Run the following code and monitor the console output from the PipelineRun object: [code shown in an image]

Suggested answer: D, E
Explanation:

A batch inference job can take a long time to finish. You can monitor its progress with a Jupyter widget, through Azure Machine Learning studio, or through the console output from the PipelineRun object:

from azureml.widgets import RunDetails
# Show a Jupyter widget that tracks the run's progress (answer D).
RunDetails(pipeline_run).show()

# Block until the run completes, streaming console output (answer E).
pipeline_run.wait_for_completion(show_output=True)

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-run-step#monitor-the-parallel-run-job


Question 3


You train and register a model in your Azure Machine Learning workspace.

You must publish a pipeline that enables client applications to use the model for batch inferencing. You must use a pipeline with a single ParallelRunStep step that runs a Python inferencing script to get predictions from the input data.

You need to create the inferencing script for the ParallelRunStep pipeline step.

Which two functions should you include? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. run(mini_batch)
B. main()
C. batch()
D. init()
E. score(mini_batch)

Suggested answer: A, D
Explanation:

The entry script for a ParallelRunStep must implement an init() function, which runs once per worker process to perform expensive setup such as loading the model, and a run(mini_batch) function, which is called for each mini-batch of input data and returns the results. A sketch of such an entry script appears below the reference.

Reference: https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines/parallel-run
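A minimal sketch, assuming the model was registered under an illustrative name and saved with joblib:

import os
import joblib
from azureml.core.model import Model

def init():
    # Called once per worker process: load the registered model.
    global model
    model_path = Model.get_model_path("my-model")  # hypothetical model name
    model = joblib.load(model_path)

def run(mini_batch):
    # Called once per mini-batch; for a FileDataset, mini_batch is a list of file paths.
    results = []
    for file_path in mini_batch:
        # Score each file with the loaded model (scoring logic omitted).
        results.append(os.path.basename(file_path))
    return results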


Question 4


You deploy a model as an Azure Machine Learning real-time web service using the following code.

[The deployment code is shown in an image.]

The deployment fails.

You need to troubleshoot the deployment failure by determining the actions that were performed during deployment and identifying the specific action that failed.

Which code segment should you run?

A. service.get_logs()
B. service.state
C. service.serialize()
D. service.update_deployment_state()

Suggested answer: A
Explanation:

You can print out detailed Docker engine log messages from the service object. You can view the log for ACI, AKS, and local deployments. The following example demonstrates how to print the logs.

# if you already have the service object handy
print(service.get_logs())

# if you only know the name of the service (note there might be multiple services with the same name but different version number)
print(ws.webservices['mysvc'].get_logs())

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-troubleshoot-deployment


Question 5


You create a multi-class image classification deep learning model.

You train the model by using PyTorch version 1.2.

You need to ensure that the correct version of PyTorch can be identified for the inferencing environment when the model is deployed.

What should you do?

A. Save the model locally as a .pt file, and deploy the model as a local web service.
B. Deploy the model on compute that is configured to use the default Azure Machine Learning conda environment.
C. Register the model with a .pt file extension and the default version property.
D. Register the model, specifying the model_framework and model_framework_version properties.

Suggested answer: D
Explanation:

Registering the model with the model_framework and model_framework_version properties records the framework (PyTorch) and its version (1.2) as model metadata, so Azure Machine Learning can identify the correct PyTorch version for the inferencing environment.
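A minimal registration sketch (the model name and path are illustrative):

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()
# Framework metadata lets Azure ML identify the right PyTorch version
# for the inferencing environment.
model = Model.register(workspace=ws,
                       model_name="image-classifier",  # illustrative name
                       model_path="outputs/model.pt",  # illustrative path
                       model_framework=Model.Framework.PYTORCH,
                       model_framework_version="1.2")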

Reference: https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.pytorch?view=azure-ml-py


Question 6


You train a machine learning model.

You must deploy the model as a real-time inference service for testing. The service requires low CPU utilization and less than 48 MB of RAM. The compute target for the deployed service must initialize automatically while minimizing cost and administrative overhead.

Which compute target should you use?

A. Azure Container Instance (ACI)
B. attached Azure Databricks cluster
C. Azure Kubernetes Service (AKS) inference cluster
D. Azure Machine Learning compute cluster

Suggested answer: A
Explanation:

Azure Container Instances (ACI) are suitable only for small models less than 1 GB in size. Use ACI for low-scale, CPU-based workloads that require less than 48 GB of RAM. Because there is no cluster to create or manage and the container initializes automatically on deployment, ACI minimizes both cost and administrative overhead.

Note: Microsoft recommends using single-node Azure Kubernetes Service (AKS) clusters for dev-test of larger models.
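For illustration, a minimal ACI deployment configuration sized for such a small service (the values are assumptions):

from azureml.core.webservice import AciWebservice

# ACI provisions on demand with no cluster to create or manage; fractional
# CPU cores and sub-gigabyte memory keep cost low for a small test service.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=0.5, memory_gb=0.5)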

Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where


Question 7


You register a model that you plan to use in a batch inference pipeline.

The batch inference pipeline must use a ParallelRunStep step to process files in a file dataset. The script that the ParallelRunStep step runs must process six input files each time the inferencing function is called.

You need to configure the pipeline.

Which configuration setting should you specify in the ParallelRunConfig object for the ParallelRunStep step?

A. process_count_per_node="6"
B. node_count="6"
C. mini_batch_size="6"
D. error_threshold="6"

Suggested answer: C
Explanation:

mini_batch_size: For FileDataset input, this field is the number of files the user script can process in one run() call. For TabularDataset input, it is the approximate size of data the user script can process in one run() call (example values are 1024, 1024KB, 10MB, and 1GB). Setting mini_batch_size to 6 therefore ensures that each call to the inferencing function receives six input files. A configuration sketch follows.

Incorrect Answers:

A: process_count_per_node is the number of processes executed on each node (optional; the default is the number of cores on the node).

B: node_count is the number of nodes in the compute target used for running the ParallelRunStep.

D: error_threshold is the number of record failures (for TabularDataset) or file failures (for FileDataset) that should be ignored during processing; if the error count goes above this value, the job is aborted.
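For illustration, a sketch of the configuration (the entry script, environment, and compute target are assumptions; later SDK versions expose the class from azureml.pipeline.steps rather than the contrib module):

from azureml.pipeline.steps import ParallelRunConfig

parallel_run_config = ParallelRunConfig(
    source_directory="scripts",
    entry_script="batch_scoring.py",  # hypothetical inferencing script
    mini_batch_size="6",              # six files per run() call for a FileDataset
    error_threshold=10,
    output_action="append_row",
    environment=batch_env,            # assumed Environment object
    compute_target=compute_target,    # assumed compute target
    node_count=2)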

Reference:

https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallelrunconfig?view=azure-ml-py


Question 8


You deploy a real-time inference service for a trained model.

The deployed model supports a business-critical application, and it is important to be able to monitor the data submitted to the web service and the predictions the data generates.

You need to implement a monitoring solution for the deployed model using minimal administrative effort.

What should you do?

A. View the explanations for the registered model in Azure ML studio.
B. Enable Azure Application Insights for the service endpoint and view logged data in the Azure portal.
C. View the log files generated by the experiment used to train the model.
D. Create an MLflow tracking URI that references the endpoint, and view the data logged by MLflow.
Suggested answer: B
Explanation:

Configure logging with Azure Machine Learning studio

You can also enable Azure Application Insights from Azure Machine Learning studio. When you're ready to deploy your model as a web service, use the following steps to enable Application Insights:

1. Sign in to the studio at https://ml.azure.com.

2. Go to Models and select the model you want to deploy.

3. Select +Deploy.

4. Populate the Deploy model form.

5. Expand the Advanced menu.

6. Select Enable Application Insights diagnostics and data collection.

[Screenshot: the Enable Application Insights diagnostics and data collection option on the Deploy model form.]
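The same setting can also be turned on from the SDK after deployment; a minimal sketch (the service name is illustrative):

from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()
service = Webservice(ws, "my-service")  # hypothetical deployed service name
# Enable Application Insights telemetry for the existing endpoint.
service.update(enable_app_insights=True)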

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-enable-app-insights


Question 9


An organization creates and deploys a multi-class image classification deep learning model that uses a set of labeled photographs.

The software engineering team reports there is a heavy inferencing load for the prediction web services during the summer. The production web service for the model fails to meet demand despite having a fully-utilized compute cluster where the web service is deployed.

You need to improve performance of the image classification web service with minimal downtime and minimal administrative effort.

What should you advise the IT Operations team to do?

A. Create a new compute cluster by using larger VM sizes for the nodes, redeploy the web service to that cluster, and update the DNS registration for the service endpoint to point to the new cluster.
B. Increase the node count of the compute cluster where the web service is deployed.
C. Increase the minimum node count of the compute cluster where the web service is deployed.
D. Increase the VM size of nodes in the compute cluster where the web service is deployed.

Suggested answer: B
Explanation:

The Azure Machine Learning SDK does not provide support for scaling an AKS cluster. To scale the nodes in the cluster, use the UI for your AKS cluster in Azure Machine Learning studio. You can only change the node count, not the VM size of the cluster, so increasing the node count scales out the existing deployment without redeploying the service or creating a new cluster, which minimizes downtime and administrative effort.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-attach-kubernetes


Question 10


You use Azure Machine Learning designer to create a real-time service endpoint. You have a single Azure Machine Learning service compute resource.

You train the model and prepare the real-time pipeline for deployment.

You need to publish the inference pipeline as a web service.

Which compute type should you use?

A. a new Machine Learning Compute resource
B. Azure Kubernetes Service
C. HDInsight
D. the existing Machine Learning Compute resource
E. Azure Databricks

Suggested answer: B
Explanation:

Azure Kubernetes Service (AKS) can be used for real-time inference. In the designer, a real-time inference pipeline must be deployed to an AKS cluster; Machine Learning Compute resources are used for training and batch workloads, not for publishing real-time endpoints.

Reference:

https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-target
