Microsoft DP-600 Practice Test - Questions Answers, Page 2

You have a Fabric warehouse that contains a table named Staging.Sales. Staging.Sales contains the following columns.

You need to write a T-SQL query that will return data for the year 2023 that displays ProductID and ProductName and has a summarized Amount that is higher than 10,000. Which query should you use?

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: B

Explanation:

Option B is the correct query to return data for the year 2023 that displays ProductID and ProductName with a summarized Amount greater than 10,000. It uses the GROUP BY clause to group the rows by ProductID and ProductName, filters the groups with a HAVING clause so that only those whose SUM(Amount) exceeds 10,000 are returned, and restricts the rows to the year 2023 by filtering on DATEPART(YEAR, SaleDate) = 2023. Reference: T-SQL GROUP BY clause documentation.
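The GROUP BY/HAVING pattern the explanation describes can be sketched against an in-memory SQLite database. The Staging.Sales columns and sample rows below are assumptions for illustration, and SQLite's strftime stands in for T-SQL's DATEPART, which SQLite does not support.

```python
# Minimal sketch of the suggested answer's pattern, assuming a simplified
# Staging.Sales schema. In a Fabric warehouse this would be T-SQL with
# DATEPART(YEAR, SaleDate) = 2023 instead of strftime.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Sales (
        ProductID   INTEGER,
        ProductName TEXT,
        SaleDate    TEXT,
        Amount      REAL
    )
""")
con.executemany(
    "INSERT INTO Sales VALUES (?, ?, ?, ?)",
    [
        (1, "Widget", "2023-03-01", 8000),
        (1, "Widget", "2023-07-15", 4000),   # Widget sums to 12,000 in 2023
        (2, "Gadget", "2023-05-20", 9000),   # Gadget stays at 9,000
        (1, "Widget", "2022-11-02", 50000),  # outside 2023, must be ignored
    ],
)

rows = con.execute("""
    SELECT ProductID, ProductName, SUM(Amount) AS TotalAmount
    FROM Sales
    WHERE strftime('%Y', SaleDate) = '2023'   -- keep only 2023 rows
    GROUP BY ProductID, ProductName           -- one group per product
    HAVING SUM(Amount) > 10000                -- keep only groups over 10,000
""").fetchall()
print(rows)  # [(1, 'Widget', 12000.0)]
```

Only Widget survives: its 2023 rows sum to 12,000, while Gadget's 9,000 fails the HAVING filter and the 2022 row is excluded before grouping.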

HOTSPOT

You have a data warehouse that contains a table named Stage.Customers. Stage.Customers contains all the customer record updates from a customer relationship management (CRM) system. There can be multiple updates per customer.

You need to write a T-SQL query that will return the customer ID, name, postal code, and the last updated time of the most recent row for each customer ID.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
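The answer exhibit is not reproduced here, but the standard T-SQL pattern for returning the most recent row per key is ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ... DESC) filtered to row number 1. The sketch below runs the same window function against in-memory SQLite; the column names and sample rows are assumptions.

```python
# Assumed simplified Stage.Customers schema; the window-function pattern is
# the same in T-SQL and SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Customers (
        CustomerID  INTEGER,
        Name        TEXT,
        PostalCode  TEXT,
        LastUpdated TEXT
    )
""")
con.executemany(
    "INSERT INTO Customers VALUES (?, ?, ?, ?)",
    [
        (1, "Ana",  "98052", "2024-01-05 09:00"),
        (1, "Ana",  "98053", "2024-02-10 14:30"),  # newer update for customer 1
        (2, "Brit", "10001", "2024-01-20 08:15"),
    ],
)

rows = con.execute("""
    SELECT CustomerID, Name, PostalCode, LastUpdated
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY CustomerID      -- restart numbering per customer
                   ORDER BY LastUpdated DESC    -- newest update gets row number 1
               ) AS rn
        FROM Customers
    )
    WHERE rn = 1                                -- keep only the newest row
    ORDER BY CustomerID
""").fetchall()
print(rows)
# [(1, 'Ana', '98053', '2024-02-10 14:30'), (2, 'Brit', '10001', '2024-01-20 08:15')]
```

Customer 1's older row (postal code 98052) is dropped because its row number within the partition is 2.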



Question 12
Correct answer: (answer image)

HOTSPOT

You have a Fabric tenant.

You plan to create a Fabric notebook that will use Spark DataFrames to generate Microsoft Power BI visuals.

You run the following code.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.


Question 13
Correct answer: (answer image)

You are the administrator of a Fabric workspace that contains a lakehouse named Lakehouse1. Lakehouse1 contains the following tables:

* Table1: A Delta table created by using a shortcut

* Table2: An external table created by using Spark

* Table3: A managed table

You plan to connect to Lakehouse1 by using its SQL endpoint. What will you be able to do after connecting to Lakehouse1?

A. Read Table3.
B. Update the data in Table3.
C. Read Table2.
D. Update the data in Table1.
Suggested answer: A

Explanation:

The SQL analytics endpoint of a lakehouse is read-only, so data cannot be updated through it; this rules out options B and D. External tables created by using Spark are not exposed through the SQL analytics endpoint, so Table2 cannot be read. Managed tables such as Table3 are exposed and can be read, making option A correct.

You have a Fabric tenant that contains a warehouse.

You use a dataflow to load a new dataset from OneLake to the warehouse.

You need to add a Power Query step to identify the maximum values for the numeric columns.

Which function should you include in the step?

A. Table.MaxN
B. Table.Max
C. Table.Range
D. Table.Profile
Suggested answer: D

Explanation:

The Table.Profile function should be used in a Power Query step to identify the maximum values for the numeric columns. Table.Profile returns a profile of the columns in a table, including the minimum, maximum, average, standard deviation, count, null count, and distinct count of each column, so a single step surfaces the maximum value of every numeric column. Table.Max, by contrast, returns only the single largest row of a table based on the supplied comparison criteria. Reference: Power Query M function reference.
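Power Query M is not reproduced in this dump; as a language-neutral sketch of the per-column maximum that a profiling step surfaces, here is a small Python illustration over assumed sample data.

```python
# Assumed sample table as a column -> values mapping; the dict comprehension
# mirrors the "Max per numeric column" statistic of a column profile.
table = {
    "Quantity": [3, 12, 7],
    "UnitPrice": [19.99, 4.50, 8.75],
}

profile_max = {col: max(values) for col, values in table.items()}
print(profile_max)  # {'Quantity': 12, 'UnitPrice': 19.99}
```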

You have a Fabric tenant that contains a machine learning model registered in a Fabric workspace. You need to use the model to generate predictions by using the predict function in a Fabric notebook. Which two languages can you use to perform model scoring? Each correct answer presents a complete solution. NOTE: Each correct answer is worth one point.

A. T-SQL
B. DAX
C. Spark SQL
D. PySpark
Suggested answer: C, D

Explanation:

The two languages you can use to perform model scoring in a Fabric notebook using the predict function are Spark SQL (option C) and PySpark (option D). These are both part of the Apache Spark ecosystem and are supported for machine learning tasks in a Fabric environment. Reference: Microsoft Fabric documentation on machine learning model scoring with PREDICT.

You are analyzing the data in a Fabric notebook.

You have a Spark DataFrame assigned to a variable named df.

You need to use the Chart view in the notebook to explore the data manually.

Which function should you run to make the data available in the Chart view?

A. displayHTML
B. show
C. write
D. display
Suggested answer: D

Explanation:

The display function is the correct choice to make the data available in the Chart view within a Fabric notebook. This function is used to visualize Spark DataFrames in various formats, including charts and graphs, directly within the notebook environment. Reference: Microsoft Fabric notebook visualization documentation.

You have a Fabric tenant that contains a Microsoft Power BI report named Report1. Report1 includes a Python visual. Data displayed by the visual is grouped automatically and duplicate rows are NOT displayed. You need all rows to appear in the visual. What should you do?

A. Reference the columns in the Python code by index.
B. Modify the Sort Column By property for all columns.
C. Add a unique field to each row.
D. Modify the Summarize By property for all columns.
Suggested answer: C

Explanation:

To ensure all rows appear in the Python visual within a Power BI report, option C, adding a unique field to each row, is the correct solution. This prevents automatic grouping of identical rows and allows every instance of the data to be represented in the visual. Reference: Power BI documentation on Python visuals.

DRAG DROP

You have a Fabric tenant that contains a semantic model. The model contains data about retail stores.

You need to write a DAX query that will be executed by using the XMLA endpoint. The query must return a table of stores that have opened since December 1, 2023.

How should you complete the DAX expression? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.


Question 19
Correct answer: (answer image)

Explanation:

Reference: DAX FILTER function and DAX SUMMARIZE function documentation.

You have a Fabric workspace named Workspace1 that contains a dataflow named Dataflow1. Dataflow1 has a query that returns 2,000 rows. You view the query in Power Query as shown in the following exhibit.

What can you identify about the pickupLongitude column?

A. The column has duplicate values.
B. All the table rows are profiled.
C. The column has missing values.
D. There are 935 values that occur only once.
Suggested answer: A

Explanation:

The pickupLongitude column has duplicate values. This can be inferred because the Distinct count is 935 while the Count is 1,000, so some values must occur more than once. Note also that only 1,000 of the query's 2,000 rows are counted, because column profiling in Power Query is based on the top 1,000 rows by default, which rules out option B. Reference: Microsoft documentation on the data profiling tools in Power Query.

Total 102 questions