
Microsoft DP-500 Practice Test - Questions Answers, Page 6


Question 51


You are building a Power BI dataset that will use two data sources.

The dataset has a query that uses a web data source. The web data source uses anonymous authentication.

You need to ensure that the query can be used by all the other queries in the dataset.

Which privacy level should you select for the data source?

A. Public
B. Organizational
C. Private
D. None
Suggested answer: A
Explanation:

A Public data source gives everyone visibility to the data contained in the data source. Only files, internet data sources, or workbook data can be marked Public. Data from a Public data source may be freely folded to other sources.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/desktop-privacy-levels


Question 52


You have a file named File1.txt that has the following characteristics:

• A header row

• Tab-delimited values

• UNIX-style line endings

You need to read File1.txt by using an Azure Synapse Analytics serverless SQL pool.

Which query should you execute?

Suggested answer: A
Explanation:

Use FIELDTERMINATOR = '\t' for tab-delimited values.

Use ROWTERMINATOR = '0x0A' for UNIX-style line endings.

Use FIRSTROW = 2 to skip the header row.

Note: Using Row Terminators

The row terminator can be the same character as the terminator for the last field. Generally, however, a distinct row terminator is useful. For example, to produce tabular output, terminate the last field in each row with the newline character (\n) and all other fields with the tab character (\t).

If you want to output only a line feed character (LF) as the row terminator, as is typical on Unix and Linux computers, use hexadecimal notation to specify the LF row terminator. For example:

bcp -r '0x0A'

FIRSTROW

FIRSTROW = first_row specifies the number of the first row to load. The default is 1, indicating the first row of the specified data file. The row numbers are determined by counting the row terminators.

FIRSTROW is 1-based.
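
Putting these options together, a query of the intended shape might look like the following sketch. The storage path and column list are hypothetical (the original answer choices are not shown), and PARSER_VERSION = '1.0' is assumed because that parser accepts an explicit ROWTERMINATOR.

SELECT *
FROM OPENROWSET(
    BULK 'https://contoso.dfs.core.windows.net/data/File1.txt', -- hypothetical path
    FORMAT = 'CSV',
    PARSER_VERSION = '1.0', -- assumed; parser 1.0 accepts an explicit ROWTERMINATOR
    FIELDTERMINATOR = '\t', -- tab-delimited values
    ROWTERMINATOR = '0x0A', -- UNIX-style (LF) line endings
    FIRSTROW = 2            -- skip the header row
) WITH (
    col1 varchar(100), -- hypothetical columns; File1.txt's schema is not given
    col2 varchar(100)
) AS rows;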

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/import-export/specify-field-and-row-terminators-sql-server

https://docs.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql


Question 53


After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.

The queries use OPENROWSET and infer the schema shown in the following table.

[Exhibit: the inferred schema, with businessName and surveyName inferred as varchar(8000)]

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.

Solution: You recommend using OPENROWSET WITH to explicitly define the collation for businessName and surveyName as Latin1_General_100_BIN2_UTF8.

Does this meet the goal?

A. Yes
B. No
Suggested answer: A
Explanation:

Query Parquet files using serverless SQL pool in Azure Synapse Analytics.

Important: Ensure you are using a UTF-8 database collation (for example, Latin1_General_100_BIN2_UTF8) because string values in Parquet files are encoded by using UTF-8. A mismatch between the text encoding in the Parquet file and the collation may cause unexpected conversion errors. You can change the default collation of the current database by using the following T-SQL statement: ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;

Note: If you use the Latin1_General_100_BIN2_UTF8 collation you will get an additional performance boost compared to the other collations. The Latin1_General_100_BIN2_UTF8 collation is compatible with parquet string sorting rules. The SQL pool is able to eliminate some parts of the parquet files that will not contain data needed in the queries (file/column-segment pruning). If you use other collations, all data from the parquet files will be loaded into Synapse SQL and the filtering is happening within the SQL process. The Latin1_General_100_BIN2_UTF8 collation has additional performance optimization that works only for parquet and CosmosDB. The downside is that you lose fine-grained comparison rules like case insensitivity.
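
As a sketch, the recommendation looks like the following, assuming a hypothetical storage path and keeping the inferred varchar(8000) lengths; only the collation is made explicit, and participantCount's int type is assumed from the question text.

SELECT businessName, surveyName, participantCount
FROM OPENROWSET(
    BULK 'https://contoso.dfs.core.windows.net/surveys/*.parquet', -- hypothetical path
    FORMAT = 'PARQUET'
) WITH (
    businessName varchar(8000) COLLATE Latin1_General_100_BIN2_UTF8,
    surveyName varchar(8000) COLLATE Latin1_General_100_BIN2_UTF8,
    participantCount int -- assumed type
) AS rows;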

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-parquet-files


Question 54


After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.

The queries use OPENROWSET and infer the schema shown in the following table.

[Exhibit: the inferred schema, with businessName and surveyName inferred as varchar(8000)]

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.

Solution: You recommend using OPENROWSET WITH to explicitly specify the maximum length for businessName and surveyName.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

Instead, the solution that meets the goal is to use OPENROWSET WITH to explicitly define the collation for businessName and surveyName as Latin1_General_100_BIN2_UTF8.

Query Parquet files using serverless SQL pool in Azure Synapse Analytics.

Important: Ensure you are using a UTF-8 database collation (for example, Latin1_General_100_BIN2_UTF8) because string values in Parquet files are encoded by using UTF-8. A mismatch between the text encoding in the Parquet file and the collation may cause unexpected conversion errors. You can change the default collation of the current database by using the following T-SQL statement: ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;

Note: If you use the Latin1_General_100_BIN2_UTF8 collation you will get an additional performance boost compared to the other collations. The Latin1_General_100_BIN2_UTF8 collation is compatible with parquet string sorting rules. The SQL pool is able to eliminate some parts of the parquet files that will not contain data needed in the queries (file/column-segment pruning). If you use other collations, all data from the parquet files will be loaded into Synapse SQL and the filtering is happening within the SQL process. The Latin1_General_100_BIN2_UTF8 collation has additional performance optimization that works only for parquet and CosmosDB. The downside is that you lose fine-grained comparison rules like case insensitivity.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-parquet-files


Question 55


After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-encoded business names, survey names, and participant counts. The database is configured to use the default collation.

The queries use OPENROWSET and infer the schema shown in the following table.

[Exhibit: the inferred schema, with businessName and surveyName inferred as varchar(8000)]

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.

Solution: You recommend defining a data source and view for the Parquet files. You recommend updating the query to use the view.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

Instead, the solution that meets the goal is to use OPENROWSET WITH to explicitly specify the maximum length for businessName and surveyName.

The inferred varchar(8000) columns are larger than necessary; explicitly specifying a smaller maximum length reduces I/O reads and tempdb usage.

A SELECT...FROM OPENROWSET(BULK...) statement queries the data in a file directly, without importing the data into a table. SELECT...FROM OPENROWSET(BULK...) statements can also list bulk-column aliases by using a format file to specify column names and data types.
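
A sketch of that recommended shape, again with a hypothetical path; the lengths shown are hypothetical and should be sized to the longest values actually present in the data.

SELECT businessName, surveyName, participantCount
FROM OPENROWSET(
    BULK 'https://contoso.dfs.core.windows.net/surveys/*.parquet', -- hypothetical path
    FORMAT = 'PARQUET'
) WITH (
    businessName varchar(200), -- hypothetical length, replacing the inferred varchar(8000)
    surveyName varchar(200),   -- hypothetical length
    participantCount int       -- assumed type
) AS rows;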

Reference: https://docs.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql


Question 56


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)

[Exhibit: the Power BI data model diagram]

Users indicate that when they build reports from the data model, the reports take a long time to load.

You need to recommend a solution to reduce the load times of the reports.

Solution: You recommend moving all the measures to a calculation group.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

Instead, denormalize the data model for performance.

Even though it might mean storing a bit of redundant data, schema denormalization can sometimes provide better query performance. The only question then becomes whether the extra space used is worth the performance benefit.

Reference: https://www.mssqltips.com/sqlservertutorial/3211/denormalize-for-performance/


Question 57


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)

[Exhibit: the Power BI data model diagram]

Users indicate that when they build reports from the data model, the reports take a long time to load.

You need to recommend a solution to reduce the load times of the reports.

Solution: You recommend denormalizing the data model.

Does this meet the goal?

A. Yes
B. No
Suggested answer: A
Explanation:

Denormalize for performance.

Even though it might mean storing a bit of redundant data, schema denormalization can sometimes provide better query performance. The only question then becomes whether the extra space used is worth the performance benefit.
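
As a rough illustration of what denormalizing can mean at the source, the sketch below flattens a snowflaked product dimension into a single view that the model imports instead of three related tables; every table and column name here is hypothetical.

CREATE OR ALTER VIEW dbo.vDimProductFlat AS
SELECT
    p.ProductKey,
    p.ProductName,
    s.SubcategoryName, -- folded in from the snowflaked subcategory table
    c.CategoryName     -- folded in from the snowflaked category table
FROM dbo.DimProduct AS p
INNER JOIN dbo.DimProductSubcategory AS s
    ON p.SubcategoryKey = s.SubcategoryKey
INNER JOIN dbo.DimProductCategory AS c
    ON s.CategoryKey = c.CategoryKey;

Importing one flattened table instead of three related ones reduces the number of relationships the model must traverse at query time.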

Reference: https://www.mssqltips.com/sqlservertutorial/3211/denormalize-for-performance/


Question 58


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)

[Exhibit: the Power BI data model diagram]

Users indicate that when they build reports from the data model, the reports take a long time to load.

You need to recommend a solution to reduce the load times of the reports.

Solution: You recommend normalizing the data model.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

Instead, denormalize the data model for performance.

Even though it might mean storing a bit of redundant data, schema denormalization can sometimes provide better query performance. The only question then becomes whether the extra space used is worth the performance benefit.

Reference: https://www.mssqltips.com/sqlservertutorial/3211/denormalize-for-performance/


Question 59


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a Power BI dataset named Dataset1.

In Dataset1, you currently have 50 measures that use the same time intelligence logic.

You need to reduce the number of measures, while maintaining the current functionality.

Solution: From Power BI Desktop, you group the measures in a display folder.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

A display folder only organizes measures in the Fields pane; it does not reduce the number of measures. To reduce the measure count while maintaining the current functionality, create a calculation group (for example, from Tabular Editor) that applies the shared time intelligence logic to the existing measures.

Reference: https://docs.microsoft.com/en-us/analysis-services/tabular-models/calculation-groups


Question 60


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a Power BI dataset named Dataset1.

In Dataset1, you currently have 50 measures that use the same time intelligence logic.

You need to reduce the number of measures, while maintaining the current functionality.

Solution: From Tabular Editor, you create a calculation group.

Does this meet the goal?

A. Yes
B. No
Suggested answer: A
Explanation:

A calculation group applies the shared time intelligence logic as calculation items over any measure (by using SELECTEDMEASURE()), so the 50 near-duplicate measures can be replaced by a small set of base measures plus a single calculation group, while maintaining the current functionality. Calculation groups can be created from external tools such as Tabular Editor.

Reference: https://docs.microsoft.com/en-us/analysis-services/tabular-models/calculation-groups
