Microsoft DP-700 Practice Test - Questions Answers, Page 2

Question 11

You have a Fabric workspace that contains a warehouse named Warehouse1.

You have an on-premises Microsoft SQL Server database named Database1 that is accessed by using an on-premises data gateway.

You need to copy data from Database1 to Warehouse1.

Which item should you use?

A. an Apache Spark job definition

B. a data pipeline

C. a Dataflow Gen1 dataflow

D. an eventstream

Suggested answer: B
Explanation:

To copy data from an on-premises Microsoft SQL Server database (Database1) to a warehouse (Warehouse1) in Fabric, a data pipeline is the most appropriate tool. A data pipeline in Fabric is designed to move data between various data sources and destinations, including on-premises databases like SQL Server, and cloud-based storage like Fabric warehouses. The data pipeline can handle the connection through an on-premises data gateway, which is required to access on-premises data. This solution facilitates the orchestration of data movement and transformations if needed.

Question 12

You have a Fabric F32 capacity that contains a workspace. The workspace contains a warehouse named DW1 that is modelled by using MD5 hash surrogate keys.

DW1 contains a single fact table that has grown from 200 million rows to 500 million rows during the past year.

You have Microsoft Power BI reports that are based on Direct Lake. The reports show year-over-year values.

Users report that the performance of some of the reports has degraded over time and some visuals show errors.

You need to resolve the performance issues. The solution must meet the following requirements:

Provide the best query performance.

Minimize operational costs.

What should you do?

A. Change the MD5 hash to SHA256.

B. Increase the capacity.

C. Enable V-Order.

D. Modify the surrogate keys to use a different data type.

E. Create views.

Suggested answer: D
Explanation:

In this case, the key issue causing performance degradation likely stems from the use of MD5 hash surrogate keys. MD5 hashes are 128-bit values, which can be inefficient for large datasets like the 500 million rows in your fact table. Using a more efficient data type for surrogate keys (such as integer or bigint) would reduce the storage and processing overhead, leading to better query performance. This approach will improve performance while minimizing operational costs because it reduces the complexity of querying and indexing, as smaller data types are generally faster and more efficient to process.
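
To illustrate the change, here is a minimal T-SQL sketch of replacing a wide hash key with a compact bigint key by rebuilding the tables with CTAS; all table and column names (DimProduct, FactSales, ProductHashKey, and so on) are assumptions for illustration, not taken from the question.

-- Rebuild the dimension with a narrow bigint surrogate key (hypothetical names).
CREATE TABLE dbo.DimProduct_New AS
SELECT
    CAST(ROW_NUMBER() OVER (ORDER BY ProductHashKey) AS bigint) AS ProductKey, -- new compact key
    ProductHashKey,   -- old MD5-based key, kept only for the one-time remap below
    ProductName,
    ProductColor
FROM dbo.DimProduct;

-- Remap the fact table so it carries the 8-byte key instead of the 128-bit hash.
CREATE TABLE dbo.FactSales_New AS
SELECT
    d.ProductKey,
    f.OrderDate,
    f.OrderQuantity,
    f.SalesAmount
FROM dbo.FactSales AS f
INNER JOIN dbo.DimProduct_New AS d
    ON d.ProductHashKey = f.ProductHashKey;

After the remap, the old hash column can be dropped and the rebuilt tables can replace the originals, so joins and Direct Lake scans operate on small integer keys.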

Question 13

HOTSPOT

You have a Fabric workspace that contains a warehouse named DW1. DW1 contains the following tables and columns.

[Image: tables and columns in DW1]

You need to create an output that presents the summarized values of all the order quantities by year and product. The results must include a summary of the order quantities at the year level for all the products.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


[Image: code to complete]
Correct answer: [Image: completed code]
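
The tables and the code to complete are only available in the images referenced above, but the stated requirement (order quantities summarized by year and product, plus a year-level summary across all products) matches the GROUP BY ROLLUP pattern. A minimal T-SQL sketch, assuming hypothetical table and column names:

SELECT
    YEAR(o.OrderDate) AS OrderYear,
    o.ProductID,
    SUM(o.OrderQuantity) AS TotalQuantity
FROM dbo.Orders AS o
GROUP BY ROLLUP (YEAR(o.OrderDate), o.ProductID)  -- also emits year-level subtotal rows (ProductID is NULL) and a grand total
ORDER BY OrderYear, o.ProductID;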

Question 14

You have a Fabric workspace that contains a lakehouse named Lakehouse1. Data is ingested into Lakehouse1 as one flat table. The table contains the following columns.

[Image: columns of the flat table]

You plan to load the data into a dimensional model and implement a star schema. From the original flat table, you create two tables named FactSales and DimProduct. You will track changes in DimProduct.

You need to prepare the data.

Which three columns should you include in the DimProduct table? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Date

B. ProductName

C. ProductColor

D. TransactionID

E. SalesAmount

F. ProductID

Suggested answer: B, C, F
Explanation:

In a star schema, the DimProduct table serves as a dimension table that contains descriptive attributes about products. It will provide context for the FactSales table, which contains transactional data. The following columns should be included in the DimProduct table:

ProductName: The ProductName is an important descriptive attribute of the product, which is needed for analysis and reporting in a dimensional model.

ProductColor: ProductColor is another descriptive attribute of the product. In a star schema, it makes sense to include attributes like color in the dimension table to help categorize products in the analysis.

ProductID: ProductID is the primary key for the DimProduct table, which will be used to join the FactSales table to the product dimension. It's essential for uniquely identifying each product in the model.
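
A minimal T-SQL sketch of the resulting star schema; the data types and the ValidFrom, ValidTo, and IsCurrent columns are assumptions added only to show how changes in DimProduct could be tracked:

CREATE TABLE dbo.DimProduct
(
    ProductKey    bigint        NOT NULL,  -- surrogate key referenced by FactSales
    ProductID     int           NOT NULL,  -- business key from the flat table
    ProductName   varchar(200)  NOT NULL,
    ProductColor  varchar(50)   NULL,
    ValidFrom     datetime2(6)  NOT NULL,  -- change-tracking columns (assumed SCD Type 2)
    ValidTo       datetime2(6)  NULL,
    IsCurrent     bit           NOT NULL
);

CREATE TABLE dbo.FactSales
(
    TransactionID int           NOT NULL,
    ProductKey    bigint        NOT NULL,  -- join key to DimProduct
    [Date]        date          NOT NULL,
    SalesAmount   decimal(18,2) NOT NULL
);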

Question 15

You have a Fabric workspace named Workspace1 that contains a notebook named Notebook1.

In Workspace1, you create a new notebook named Notebook2.

You need to ensure that you can attach Notebook2 to the same Apache Spark session as Notebook1.

What should you do?

A. Enable high concurrency for notebooks.

B. Enable dynamic allocation for the Spark pool.

C. Change the runtime version.

D. Increase the number of executors.

Suggested answer: A
Explanation:

To ensure that Notebook2 can attach to the same Apache Spark session as Notebook1, you need to enable high concurrency for notebooks. High concurrency allows multiple notebooks to share a Spark session, enabling them to run within the same Spark context and thus share resources like cached data, session state, and compute capabilities. This is particularly useful when you need notebooks to run in sequence or together while leveraging shared resources.

Question 16

You have a Fabric workspace named Workspace1 that contains a lakehouse named Lakehouse1. Lakehouse1 contains the following tables:

Orders

Customer

Employee

The Employee table contains Personally Identifiable Information (PII).

A data engineer is building a workflow that requires writing data to the Customer table; however, the data engineer does NOT have the elevated permissions required to view the contents of the Employee table.

You need to ensure that the data engineer can write data to the Customer table without reading data from the Employee table.

Which three actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Share Lakehouse1 with the data engineer.

B. Assign the data engineer the Contributor role for Workspace2.

C. Assign the data engineer the Viewer role for Workspace2.

D. Assign the data engineer the Contributor role for Workspace1.

E. Migrate the Employee table from Lakehouse1 to Lakehouse2.

F. Create a new workspace named Workspace2 that contains a new lakehouse named Lakehouse2.

G. Assign the data engineer the Viewer role for Workspace1.

Suggested answer: A, D, E
Explanation:

To meet the requirements of ensuring that the data engineer can write data to the Customer table without reading data from the Employee table (which contains Personally Identifiable Information, or PII), you can implement the following steps:

Share Lakehouse1 with the data engineer.

By sharing Lakehouse1 with the data engineer, you provide the necessary access to the data within the lakehouse. However, this access should be controlled through roles and permissions, which will allow writing to the Customer table but prevent reading from the Employee table.

Assign the data engineer the Contributor role for Workspace1.

Assigning the Contributor role for Workspace1 grants the data engineer the ability to perform actions such as writing to tables (e.g., the Customer table) within the workspace. This role typically allows users to modify and manage data without necessarily granting them access to view all data (e.g., PII data in the Employee table).

Migrate the Employee table from Lakehouse1 to Lakehouse2.

To prevent the data engineer from accessing the Employee table (which contains PII), you can migrate the Employee table to a separate lakehouse (Lakehouse2) or workspace (Workspace2). This separation of sensitive data ensures that the data engineer's access is restricted to the Customer table in Lakehouse1, while the Employee table can be managed separately and protected under different access controls.

Question 17

You have a Fabric warehouse named DW1. DW1 contains a table that stores sales data and is used by multiple sales representatives.

You plan to implement row-level security (RLS).

You need to ensure that the sales representatives can see only their respective data.

Which warehouse object do you require to implement RLS?

A. STORED PROCEDURE

B. CONSTRAINT

C. SCHEMA

D. FUNCTION

Suggested answer: D
Explanation:

To implement Row-Level Security (RLS) in a Fabric warehouse, you need to use a function that defines the security logic for filtering the rows of data based on the user's identity or role. This function can be used in conjunction with a security policy to control access to specific rows in a table.

In the case of sales representatives, the function would define the filtering criteria (e.g., based on a column such as SalesRepID or SalesRepName), ensuring that each representative can only see their respective data.
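
A minimal T-SQL sketch of that pattern, assuming a hypothetical dbo.Sales table with a SalesRep column that stores each representative's sign-in name:

CREATE SCHEMA Security;
GO
-- Inline table-valued function: returns a row only when the value passed in
-- matches the identity of the user running the query.
CREATE FUNCTION Security.fn_SalesRepPredicate(@SalesRep AS varchar(200))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @SalesRep = USER_NAME();
GO
-- Security policy that applies the function as a filter predicate on the table.
CREATE SECURITY POLICY SalesRepFilter
    ADD FILTER PREDICATE Security.fn_SalesRepPredicate(SalesRep)
    ON dbo.Sales
    WITH (STATE = ON);
GO

The security policy binds the function to the table, so every query that the sales representatives run is automatically filtered to their own rows.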

Question 18

HOTSPOT

You have a Fabric workspace named Workspace1_DEV that contains the following items:

[Image: items in Workspace1_DEV]
You create a deployment pipeline named Pipeline1 to move items from Workspace1_DEV to a new workspace named Workspace1_TEST.

You deploy all the items from Workspace1_DEV to Workspace1_TEST.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.


[Image: statements to evaluate]
Correct answer: [Image: answer selections]
Question 19

You have a Fabric deployment pipeline that uses three workspaces named Dev, Test, and Prod.

You need to deploy an eventhouse as part of the deployment process.

What should you use to add the eventhouse to the deployment process?

A. GitHub Actions

B. a deployment pipeline

C. an Azure DevOps pipeline

Suggested answer: B
Explanation:

A deployment pipeline in Fabric is designed to automate the process of deploying assets (such as reports, datasets, eventhouses, and other objects) between environments like Dev, Test, and Prod. Since you need to deploy an eventhouse as part of the deployment process, a deployment pipeline is the appropriate tool to move this asset through the different stages of your environment.

Question 20

You have a Fabric workspace named Workspace1 that contains a warehouse named Warehouse1.

You plan to deploy Warehouse1 to a new workspace named Workspace2.

As part of the deployment process, you need to verify whether Warehouse1 contains invalid references. The solution must minimize development effort.

What should you use?

A. a database project

B. a deployment pipeline

C. a Python script

D. a T-SQL script

Suggested answer: B
Explanation:

A deployment pipeline in Fabric allows you to deploy assets like warehouses, datasets, and reports between different workspaces (such as from Workspace1 to Workspace2). One of the key features of a deployment pipeline is the ability to check for invalid references before deployment. This can help identify issues with assets, such as broken links or dependencies, ensuring the deployment is successful without introducing errors. This is the most efficient way to verify references and manage the deployment with minimal development effort.
