Microsoft DP-600 Practice Test - Questions Answers, Page 4

You have a Fabric tenant that contains a semantic model. The model uses Direct Lake mode.

You suspect that some DAX queries load unnecessary columns into memory.

You need to identify the frequently used columns that are loaded into memory.

What are two ways to achieve the goal? Each correct answer presents a complete solution.

NOTE: Each correct answer is worth one point.

A. Use the Analyze in Excel feature.

B. Use the VertiPaq Analyzer tool.

C. Query the $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS dynamic management view (DMV).

D. Query the DISCOVER_MEMORYGRANT dynamic management view (DMV).
Suggested answer: B, C

Explanation:

The VertiPaq Analyzer tool (B) and the $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS dynamic management view (C) both expose column-level storage statistics for the model. For a Direct Lake model, the DMV includes a temperature value per column segment that indicates how recently and how often a column has been queried, so the frequently used columns that are resident in memory can be identified directly; VertiPaq Analyzer surfaces the same segment statistics in a readable report. Reference = The Power BI documentation on VertiPaq Analyzer and DMV queries offers detailed guidance on how to use these tools for performance analysis.
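As a hedged illustration, the DMV can be run from a Fabric notebook through the semantic-link (sempy) package. The sketch below assumes a dataset named Model1 (a placeholder) and assumes evaluate_dax can pass the DMV text through; TEMPERATURE and LAST_ACCESSED are the Direct Lake additions to this DMV, so treat the exact column projection as an assumption.

```python
# Sketch: query the column-segment DMV for a Direct Lake model from a
# Fabric notebook. "Model1" is a placeholder dataset name.
import sempy.fabric as fabric

dmv = """
SELECT COLUMN_ID, SEGMENT_NUMBER, TEMPERATURE, LAST_ACCESSED
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
"""

# evaluate_dax is used here to pass the DMV text through (assumption
# noted above); the result comes back as a DataFrame.
segments = fabric.evaluate_dax(dataset="Model1", dax_string=dmv)

# The hottest segments belong to the most frequently queried columns.
print(segments.sort_values("TEMPERATURE", ascending=False).head(10))
```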

You have a Fabric tenant that contains a semantic model named Model1. Model1 uses Import mode. Model1 contains a table named Orders. Orders has 100 million rows and the following fields.

You need to reduce the memory used by Model1 and the time it takes to refresh the model.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct answer is worth one point.

A. Split OrderDateTime into separate date and time columns.

B. Replace TotalQuantity with a calculated column.

C. Convert Quantity into the Text data type.

D. Replace TotalSalesAmount with a measure.
Suggested answer: A, D

Explanation:

To reduce memory usage and refresh time, split OrderDateTime into separate date and time columns (A): a combined datetime column has extremely high cardinality (potentially one distinct value per row), which compresses poorly in the VertiPaq engine, whereas a date column has one distinct value per calendar date and a time column at most 86,400 (one per second), so both compress far better. Also replace TotalSalesAmount with a measure (D): a measure is evaluated at query time rather than stored, so the model no longer has to persist and refresh 100 million precomputed values. Reference = The best practices for optimizing Power BI models are detailed in the Power BI documentation, which recommends using measures for calculations that don't need to be stored and adjusting data types to improve performance.
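A small self-contained sketch with synthetic data (not the exam's Orders table) makes the cardinality effect concrete:

```python
# Sketch with synthetic data: splitting a datetime column into date and
# time parts slashes the distinct-value counts the engine must encode.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# Random order timestamps across one year, at 1-second resolution.
ts = pd.Timestamp("2024-01-01") + pd.to_timedelta(
    rng.integers(0, 365 * 86400, n), unit="s"
)
orders = pd.DataFrame({"OrderDateTime": ts})

orders["OrderDate"] = orders["OrderDateTime"].dt.date
orders["OrderTime"] = orders["OrderDateTime"].dt.time

print("distinct OrderDateTime:", orders["OrderDateTime"].nunique())  # ~1,000,000
print("distinct OrderDate:    ", orders["OrderDate"].nunique())      # 365
print("distinct OrderTime:    ", orders["OrderTime"].nunique())      # <= 86,400
```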

You have a Fabric tenant that contains a semantic model.

You need to prevent report creators from populating visuals by using implicit measures.

What are two tools that you can use to achieve the goal? Each correct answer presents a complete solution.

NOTE: Each correct answer is worth one point.

A. Microsoft Power BI Desktop

B. Tabular Editor

C. Microsoft SQL Server Management Studio (SSMS)

D. DAX Studio
Suggested answer: A, B

Explanation:

Microsoft Power BI Desktop (A) and Tabular Editor (B) are the tools to use. Both can set the semantic model's DiscourageImplicitMeasures property to true, which prevents report creators from populating visuals by dragging in a numeric column as an implicit measure; only explicit DAX measures defined in the model can then be used. SSMS and DAX Studio are query and administration tools and are not used to author this model setting. Reference = Guidance on using explicit measures and preventing implicit measures in reports can be found in the Power BI and Tabular Editor official documentation.

You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a table named Table1.

You are creating a new data pipeline.

You plan to copy external data to Table1. The schema of the external data changes regularly.

You need the copy operation to meet the following requirements:

* Replace Table1 with the schema of the external data.

* Replace all the data in Table1 with the rows in the external data.

You add a Copy data activity to the pipeline. What should you do for the Copy data activity?

A. From the Source tab, add additional columns.

B. From the Destination tab, set Table action to Overwrite.

C. From the Settings tab, select Enable staging.

D. From the Source tab, select Enable partition discovery.

E. From the Source tab, select Recursively.
Suggested answer: B

Explanation:

From the Destination tab, set Table action to Overwrite (B). With Overwrite, the Copy data activity re-creates Table1 using the schema of the incoming external data and replaces all existing rows with the copied rows, satisfying both requirements even as the external schema changes. Reference = Information about the Copy data activity and table actions in Azure Data Factory, which also applies to data pipelines in Fabric, is available in the Azure Data Factory documentation.

You have a Fabric tenant that contains a lakehouse.

You plan to query sales data files by using the SQL endpoint. The files will be in an Amazon Simple Storage Service (Amazon S3) storage bucket.

You need to recommend which file format to use and where to create a shortcut.

Which two actions should you include in the recommendation? Each correct answer presents part of the solution.

NOTE: Each correct answer is worth one point.

A. Create a shortcut in the Files section.

B. Use the Parquet format.

C. Use the CSV format.

D. Create a shortcut in the Tables section.

E. Use the Delta format.
Suggested answer: D, E

Explanation:

Use the Delta format (E) and create the shortcut in the Tables section (D). The SQL analytics endpoint of a lakehouse can query only Delta tables, and a shortcut placed in the Tables section surfaces the S3 data as a table the endpoint can query directly; shortcuts in the Files section are not exposed through the SQL endpoint, and plain Parquet or CSV files are not recognized as tables. Reference = The requirements for file formats and shortcuts are covered in the Microsoft Fabric lakehouse and SQL analytics endpoint documentation.

You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a subfolder named Subfolder1 that contains CSV files.

You need to convert the CSV files into the Delta format with V-Order optimization enabled.

What should you do from Lakehouse explorer?

A. Use the Load to Tables feature.

B. Create a new shortcut in the Files section.

C. Create a new shortcut in the Tables section.

D. Use the Optimize feature.
Suggested answer: A

Explanation:

Use the Load to Tables feature (A). In Lakehouse explorer, Load to Tables converts a file or folder of CSV files into a Delta table, and Fabric writes Delta tables with V-Order optimization enabled by default. The Optimize feature applies compaction and V-Order to tables that are already in the Delta format; it does not convert CSV files, and shortcuts do not change the format of the underlying data. Reference = The process for loading files to Delta tables and the V-Order behavior are described in the Microsoft Fabric lakehouse documentation.
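As a hedged aside, the same conversion can be sketched in a Fabric notebook rather than through Lakehouse explorer. The Spark setting, the Files path, and the output table name below are assumptions based on Fabric's documented Spark defaults (V-Order writing is normally already on):

```python
# Notebook-based sketch of the CSV-to-Delta conversion (not the Lakehouse
# explorer route the question asks about). Path, config name, and table
# name are placeholders/assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# V-Order writing is on by default in Fabric; setting it explicitly here
# makes the intent visible.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# Read every CSV under the hypothetical Files/Subfolder1 path and save it
# as a Delta table in the lakehouse.
df = spark.read.option("header", "true").csv("Files/Subfolder1/*.csv")
df.write.format("delta").mode("overwrite").saveAsTable("Table_from_csv")
```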

You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains an unpartitioned table named Table1.

You plan to copy data to Table1 and partition the table based on a date column in the source data.

You create a Copy activity to copy the data to Table1.

You need to specify the partition column in the Destination settings of the Copy activity.

What should you do first?

A. From the Destination tab, set Mode to Append.

B. From the Destination tab, select the partition column.

C. From the Source tab, select Enable partition discovery.

D. From the Destination tab, set Mode to Overwrite.
Suggested answer: D

Explanation:

Before you can specify the partition column, set Mode to Overwrite on the Destination tab (D). In the Copy activity destination settings, the option to enable partitioning and pick a partition column is offered only when the table action is Overwrite; it is not available with Append. After selecting Overwrite, you can enable the partition option and choose the date column. Reference = The configuration options for Copy activity destinations and partitioning are outlined in the Microsoft Fabric data pipeline documentation.

You have source data in a folder on a local computer.

You need to create a solution that will use Fabric to populate a data store. The solution must meet the following requirements:

* Support the use of dataflows to load and append data to the data store.

* Ensure that Delta tables are V-Order optimized and compacted automatically.

Which type of data store should you use?

A. a lakehouse

B. an Azure SQL database

C. a warehouse

D. a KQL database
Suggested answer: A

Explanation:

A lakehouse (A) is the type of data store you should use. It supports dataflows that load and append data to its tables, and lakehouse Delta tables are V-Order optimized and compacted automatically. Reference = The capabilities of a lakehouse and its support for Delta tables are described in the lakehouse and Delta table documentation.

You have a Fabric workspace named Workspace1 that contains a dataflow named Dataflow1. Dataflow1 contains a query that returns the data shown in the following exhibit.

You need to transform the date columns into attribute-value pairs, where columns become rows.

You select the VendorID column.

Which transformation should you select from the context menu of the VendorID column?

A. Group by

B. Unpivot columns

C. Unpivot other columns

D. Split column

E. Remove other columns
Suggested answer: C

Explanation:

With the VendorID column selected, choose Unpivot other columns (C). This keeps VendorID in place as an identifier and turns every other (date) column into attribute-value pairs: one new column holds the original column names and another holds the cell values. Unpivot columns (B) would instead unpivot the selected VendorID column itself, and it would also require reselecting columns whenever new date columns appear in the source. Reference = Techniques for unpivoting columns are covered in the Power Query documentation, which explains how to use the transformation in data modeling.
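A small pandas sketch with synthetic data (melt standing in for Power Query's unpivot-other-columns) shows the reshape:

```python
# Sketch of the unpivot-other-columns reshape using pandas' melt as a
# stand-in for the Power Query transformation. Data is synthetic.
import pandas as pd

wide = pd.DataFrame({
    "VendorID": [1, 2],
    "2024-01-01": [10, 20],
    "2024-01-02": [30, 40],
})

# Keep VendorID fixed; every other column becomes an attribute-value pair.
long = wide.melt(id_vars=["VendorID"], var_name="Date", value_name="Amount")
print(long)
#    VendorID        Date  Amount
# 0         1  2024-01-01      10
# 1         2  2024-01-01      20
# 2         1  2024-01-02      30
# 3         2  2024-01-02      40
```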

You have a Fabric tenant that contains a data pipeline.

You need to ensure that the pipeline runs every four hours on Mondays and Fridays.

To what should you set Repeat for the schedule?

A. Daily

B. By the minute

C. Weekly

D. Hourly
Suggested answer: C

Explanation:

Set Repeat to Weekly (C). The Weekly option lets you select the specific days of the week (Monday and Friday) and add multiple run times for each selected day, such as a run every four hours. Daily and Hourly schedules cannot be restricted to particular days of the week. Reference = Scheduling options for data pipelines are described in the Microsoft Fabric documentation on pipeline run scheduling.
