Microsoft DP-500 Practice Test - Questions Answers, Page 5

Question 41

You have a Power BI workspace that contains one dataset and four reports that connect to the dataset. The dataset uses Import storage mode and contains the following data sources:

• A CSV file in an Azure Storage account

• An Azure Database for PostgreSQL database

You plan to use deployment pipelines to promote the content from development to test to production. There will be different data source locations for each stage. What should you include in the deployment pipeline to ensure that the appropriate data source locations are used during each stage?

A. parameter rules
B. selective deployment
C. auto-binding across pipelines
D. data source rules
Suggested answer: D
Explanation:

Note: Create deployment rules

When working in a deployment pipeline, different stages may have different configurations. For example, each stage can have different databases or different query parameters. The development stage might query sample data from the database, while the test and production stages query the entire database.

When you deploy content between pipeline stages, configuring deployment rules lets you change content while keeping some settings intact. For example, if you want a dataset in the production stage to point to a production database, you can define a rule for this. The rule is defined in the production stage, under the appropriate dataset. Once the rule is defined, content deployed from test to production inherits the value defined in the deployment rule, and the rule always applies as long as it is unchanged and valid.
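For illustration, a minimal sketch of triggering a stage-to-stage deployment with the Power BI REST API (the Pipelines - Deploy All operation); the pipeline ID and access token below are placeholders, and the data source rules themselves are defined once per stage in the service UI:

Python

import requests

# Placeholder values -- substitute your own pipeline ID and an Azure AD
# access token for the Power BI API.
PIPELINE_ID = "00000000-0000-0000-0000-000000000000"
ACCESS_TOKEN = "<azure-ad-access-token>"

# Deploy all content from the Test stage (stage order 1) to Production.
# Any data source rules defined on the Production stage are applied to the
# deployed dataset automatically.
response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "sourceStageOrder": 1,  # 0 = Development, 1 = Test
        "options": {
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    },
)
response.raise_for_status()
print(response.status_code)  # 202 Accepted for a long-running deployment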

Question 42

You need to provide users with a reproducible method to connect to a data source and transform the data by using an AI function. The solution must meet the following requirements:

• Minimize development effort.

• Avoid including data in the file.

Which type of file should you create?

A. PBIDS
B. PBIX
C. PBIT
Suggested answer: C
Explanation:

A Power BI template (.pbit) file contains the report definition, data model metadata, and queries, including any AI transformations, but no data, so opening it gives users a reproducible starting point with minimal effort. A PBIX file would include the data, and a PBIDS file only specifies the data source connection without the transformations.
Question 43

You are planning a Power BI solution for a customer.

The customer will have 200 Power BI users. The customer identifies the following requirements:

• Ensure that all the users can create paginated reports.

• Ensure that the users can create reports containing AI visuals.

• Provide autoscaling of the CPU resources during heavy usage spikes.

You need to recommend a Power BI solution for the customer. The solution must minimize costs.

What should you recommend?

A. Power BI Premium per user
B. Power BI Premium per capacity
C. Power BI Pro per user
D. Power BI Report Server
Suggested answer: A
Explanation:

Announcing Power BI Premium Per User general availability and autoscale preview for Gen2.

Power BI Premium per user features and capabilities

* Pixel-perfect paginated reports are available for operational reporting based on SSRS technology. Users can create highly formatted reports in formats such as PDF and PPT that are embeddable in applications and are designed to be printed or shared.

* Automated machine learning (AutoML) in Power BI enables business users to build ML models to predict outcomes without having to write any code.

* Etc.

Note:

Power BI empowers every business user and business analyst to get amazing insights with AI infused experiences. With Power BI Premium, we enable business analysts to not only analyze and visualize their data, but to also build an end-to-end data platform through drag and drop experiences.

Everything from ingesting and transforming data at scale, to building automated machine learning models, and analyzing massive volumes of data is now possible for our millions of business analysts.

Reference: https://powerbi.microsoft.com/nl-be/blog/announcing-power-bi-premium-per-user-general-availability-and-autoscale-preview-for-gen2/

Question 44

You have a 2-GB Power BI dataset.

You need to ensure that you can redeploy the dataset by using Tabular Editor. The solution must minimize how long it will take to apply changes to the dataset from powerbi.com.

Which two actions should you perform in powerbi.com? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Enable service principal authentication for read-only admin APIs.
B. Turn on Large dataset storage format.
C. Connect the target workspace to an Azure Data Lake Storage Gen2 account.
D. Enable XMLA read-write.
Suggested answer: B, D
Explanation:

Optimize datasets for write operations by enabling large models

When using the XMLA endpoint for dataset management with write operations, it's recommended you enable the dataset for large models. This reduces the overhead of write operations, which can make them considerably faster. For datasets over 1 GB in size (after compression), the difference can be significant.
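As a sketch, Large dataset storage format can also be enabled programmatically through the Power BI REST API (the Datasets - Update Dataset In Group operation); the workspace (group) ID, dataset ID, and token below are placeholders:

Python

import requests

# Placeholder IDs and token -- replace with your workspace (group) ID,
# dataset ID, and an Azure AD access token for the Power BI API.
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"
ACCESS_TOKEN = "<azure-ad-access-token>"

# Setting targetStorageMode to PremiumFiles is equivalent to turning on
# "Large dataset storage format" in the dataset settings.
response = requests.patch(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets/{DATASET_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"targetStorageMode": "PremiumFiles"},
)
response.raise_for_status()  # 200 OK once the storage mode is updated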

Tabular Editor supports Azure Analysis Services and Power BI Premium Datasets through XMLA read/write.

Note: Tabular Editor is an open-source tool for creating, maintaining, and managing tabular models using an intuitive, lightweight editor. A hierarchical view shows all objects in your tabular model, organized by display folders, with support for multi-select property editing and DAX syntax highlighting. XMLA read-only is required for query operations. Read-write is required for metadata operations.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-premium-connect-tools

https://tabulareditor.github.io/

Question 45

You have five Power BI reports that contain R script data sources and R visuals.

You need to publish the reports to the Power BI service and configure a daily refresh of datasets.

What should you include in the solution?

A. a Power BI Embedded capacity
B. an on-premises data gateway (standard mode)
C. a workspace that connects to an Azure Data Lake Storage Gen2 account
D. an on-premises data gateway (personal mode)
Suggested answer: D
Explanation:

Scheduled refresh of datasets that use R scripts is supported only through the on-premises data gateway in personal mode; the gateway in standard mode does not support R or Python scripts.
Question 46

You have new security and governance protocols for Power BI reports and datasets. The new protocols must meet the following requirements:

• New reports can be embedded only in locations that require authentication.

• Live connections are permitted only for workspaces that use Premium capacity datasets.

Which three actions should you recommend performing in the Power BI Admin portal? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. From Tenant settings, disable Allow XMLA endpoints and Analyze in Excel with on-premises datasets.
B. From the Premium per user settings, set XMLA Endpoint to Off.
C. From Embed Codes, delete all the codes.
D. From Capacity settings, set XMLA Endpoint to Read Write.
E. From Tenant settings, set Publish to web to Disable.
Suggested answer: A, D, E
Explanation:

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-premium-connect-tools

https://powerbi.microsoft.com/en-us/blog/power-bi-february-service-update

Question 47

You have an Azure Synapse Analytics serverless SQL pool.

You need to catalog the serverless SQL pool by using Azure Purview.

Which three actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Create a managed identity in Azure Active Directory (Azure AD).
B. Assign the Storage Blob Data Reader role to the Azure Purview managed service identity (MSI) for the storage account associated to the Synapse Analytics workspace.
C. Assign the Owner role to the Azure Purview managed service identity (MSI) for the Azure Purview resource group.
D. Register a data source.
E. Assign the Reader role to the Azure Purview managed service identity (MSI) for the Synapse Analytics workspace.
Suggested answer: A, B, E
Explanation:

Authentication for enumerating serverless SQL database resources

There are three places you'll need to set authentication to allow Microsoft Purview to enumerate your serverless SQL database resources:

The Azure Synapse workspace

The associated storage

The Azure Synapse serverless databases

The steps below will set permissions for all three.

Azure Synapse workspace

In the Azure portal, go to the Azure Synapse workspace resource.

On the left pane, select Access Control (IAM).

Select the Add button.

Set the Reader role and enter your Microsoft Purview account name, which represents its managed service identity (MSI).

Select Save to finish assigning the role.


Storage account

In the Azure portal, go to the Resource group or Subscription that the storage account associated with the Azure Synapse workspace is in.

On the left pane, select Access Control (IAM).

Select the Add button.

Set the Storage blob data reader role and enter your Microsoft Purview account name (which represents its MSI) in the Select box.

Select Save to finish assigning the role.
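If you prefer to script the storage role assignment, here is a minimal sketch using the Azure SDK for Python; the subscription ID, scope, and Purview managed identity object ID are placeholders, while the GUID is the built-in Storage Blob Data Reader role definition:

Python

import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

# Placeholder values -- replace with your subscription, the storage account
# scope, and the object ID of the Purview account's managed identity.
SUBSCRIPTION_ID = "<subscription-id>"
SCOPE = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)
PURVIEW_MSI_OBJECT_ID = "<purview-managed-identity-object-id>"

# Built-in role definition ID for Storage Blob Data Reader.
ROLE_DEFINITION_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/providers/Microsoft.Authorization"
    "/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
client.role_assignments.create(
    scope=SCOPE,
    role_assignment_name=str(uuid.uuid4()),  # assignments are keyed by GUID
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=ROLE_DEFINITION_ID,
        principal_id=PURVIEW_MSI_OBJECT_ID,
        principal_type="ServicePrincipal",  # managed identities use this type
    ),
)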

Azure Synapse serverless database

Go to your Azure Synapse workspace and open the Synapse Studio.

Select the Data tab on the left menu.

Select the ellipsis (...) next to one of your databases, and then start a new SQL script.

Add the Microsoft Purview account MSI (represented by the account name) on the serverless SQL databases. You do so by running the following command in your SQL script:

SQL

CREATE LOGIN [PurviewAccountName] FROM EXTERNAL PROVIDER;

Apply permissions to scan the contents of the workspace

You can set up authentication for an Azure Synapse source in either of two ways: by using the Microsoft Purview managed identity or by using a service principal.

Question 48

You are running a diagnostic against a query as shown in the following exhibit.

[Exhibit: query diagnostics results for the query]

What can you identify from the diagnostics query?

A. All the query steps are folding.
B. Elevated permissions are being used to query records.
C. The query is timing out.
D. Some query steps are folding.
Suggested answer: A
Explanation:

Understanding folding with Query Diagnostics

One of the most common reasons to use Query Diagnostics is to better understand what operations were 'pushed down' by Power Query to be performed by the back-end data source, which is also known as 'folding'. If we want to see what folded, we can look at the 'most specific' query, or queries, that get sent to the back-end data source. We can look at this for both OData and SQL.

Reference: https://docs.microsoft.com/en-us/power-query/querydiagnosticsfolding

Question 49

You are creating an external table by using an Apache Spark pool in Azure Synapse Analytics. The table will contain more than 20 million rows partitioned by date. The table will be shared with the SQL engines.

You need to minimize how long it takes for a serverless SQL pool to execute queries against the table.

In which file format should you recommend storing the table data?

A. JSON
B. Apache Parquet
C. CSV
D. Delta
Suggested answer: B
Explanation:

Prepare files for querying

If possible, you can prepare files for better performance:

* Convert large CSV and JSON files to Parquet. Parquet is a columnar format. Because it's compressed, its file sizes are smaller than CSV or JSON files that contain the same data. Serverless SQL pool skips the columns and rows that aren't needed in a query if you're reading Parquet files, so it needs less time and fewer storage requests to read the data.
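For illustration, a minimal pandas sketch (assuming pandas and pyarrow are installed; the file names and column names are hypothetical) that converts a CSV file into a Parquet dataset partitioned by date:

Python

import pandas as pd  # assumes the pyarrow engine is installed as well

# Hypothetical input file for illustration.
df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Write a compressed, columnar Parquet dataset partitioned by date.
# Serverless SQL pool can then skip unneeded columns and, with date
# partitions, eliminate whole partitions from a query.
df["order_date"] = df["order_date"].dt.date
df.to_parquet("sales_parquet/", engine="pyarrow", partition_cols=["order_date"])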

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/best-practices-serverless-sql-pool

https://stackoverflow.com/questions/65320949/parquet-vs-delta-format-in-azure-data-lake-gen-2-store

Question 50

You have a Power BI dataset named Dataset1 that uses DirectQuery against an Azure SQL database named DB1. DB1 is a transactional database in the third normal form.

You need to recommend a solution to minimize how long it takes to execute queries. The solution must maintain the current functionality. What should you include in the recommendation?

A. Create calculated columns in Dataset1.
B. Remove the relationships from Dataset1.
C. Normalize the tables in DB1.
D. Denormalize the tables in DB1.
Suggested answer: D
Explanation:

Denormalize to improve query performance.

Note: Normalization prevents data duplication, preserves disk space, and improves the performance of disk I/O operations. The downside of normalization is that queries based on these normalized tables require more table joins.

Schema denormalization (that is, consolidation of some dimension tables) for such databases can significantly reduce the cost of analytical queries and improve performance.

Reference: https://www.mssqltips.com/sqlservertip/7114/denormalization-dimensions-synapse-mapping-data-flow/
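Purely as an illustration of the reshaping involved (in practice you would denormalize inside DB1 itself, for example through views or an ETL step), here is a toy pandas sketch with a hypothetical schema that collapses snowflaked dimension tables into one wide table:

Python

import pandas as pd

# Toy third-normal-form tables with a hypothetical schema.
orders = pd.DataFrame(
    {"order_id": [1, 2], "customer_id": [10, 11], "amount": [250.0, 99.0]}
)
customers = pd.DataFrame(
    {"customer_id": [10, 11], "name": ["Contoso", "Fabrikam"], "region_id": [1, 2]}
)
regions = pd.DataFrame({"region_id": [1, 2], "region": ["East", "West"]})

# Denormalize: fold the dimension tables into one wide table so that a
# DirectQuery report issues fewer joins at query time.
flat = orders.merge(customers, on="customer_id").merge(regions, on="region_id")
print(flat)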
