ExamGecko

Microsoft DP-420 Practice Test - Questions Answers, Page 3

Question 21


The following is a sample of a document in orders.

[Image: sample order document from the orders container]

The orders container uses customerId as the partition key.

You need to provide a report of the total items ordered per month by item type. The solution must meet the following requirements:

Ensure that the report can run as quickly as possible.

Minimize the consumption of request units (RUs).

What should you do?

A. Configure the report to query orders by using a SQL query.
B. Configure the report to query a new aggregate container. Populate the aggregates by using the change feed.
C. Configure the report to query orders by using a SQL query through a dedicated gateway.
D. Configure the report to query a new aggregate container. Populate the aggregates by using SQL queries that run daily.
Suggested answer: B
Explanation:

Maintaining a separate aggregate container that is populated from the change feed means the totals per month and item type are precomputed as orders arrive. The report then reads small, ready-made aggregate documents instead of scanning and grouping every order document at query time, which makes the report as fast as possible and keeps RU consumption minimal. Daily batch queries (option D) would still pay for full scans and would serve stale data between runs.
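The pattern behind answer B can be sketched in plain Python. The change feed batch and the aggregate container are simulated here; in production the batch would arrive through a change feed processor or an Azure Functions Cosmos DB trigger, and the aggregates would live in a second container. The document shape (orderDate, items, itemType, quantity) is assumed for illustration, not taken from the exam image.

```python
from collections import defaultdict

def apply_change_feed_batch(aggregates, batch):
    """Fold a batch of changed order documents into per-month,
    per-item-type totals (simulating the aggregate container)."""
    for order in batch:
        month = order["orderDate"][:7]          # e.g. "2024-02"
        for item in order["items"]:
            key = (month, item["itemType"])
            aggregates[key] += item["quantity"]
    return aggregates

# Simulated change feed batch of upserted order documents.
batch = [
    {"customerId": "c1", "orderDate": "2024-02-01",
     "items": [{"itemType": "book", "quantity": 2}]},
    {"customerId": "c2", "orderDate": "2024-02-03",
     "items": [{"itemType": "book", "quantity": 1},
               {"itemType": "pen", "quantity": 5}]},
]

aggregates = apply_change_feed_batch(defaultdict(int), batch)
# The report now reads precomputed totals (cheap point reads or a small
# query against the aggregate container) instead of scanning every order.
```

Because the aggregation cost is paid incrementally as changes arrive, the report query itself touches only a handful of small documents, which is what keeps both latency and RU charges low.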

Question 22


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure Synapse pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

Instead, create an Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure Event Hubs as the output.

The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are created or modified. Change feed support works by listening to the container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified.

The following diagram represents the data flow and components involved in the solution:

[Diagram: change feed data flow from the Azure Cosmos DB container through an Azure function to Azure Event Hubs]

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/changefeed-ecommerce-solution
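The incremental, ordered semantics of the change feed described above can be illustrated with a small in-memory model. This is only a sketch: the real feed is read per physical partition with continuation tokens via the SDK or a trigger, and in the default (latest-version) mode only the most recent version of each changed document is surfaced, which the `upsert` method below mimics.

```python
# Minimal in-memory model of change feed semantics: every create or
# replace moves the document to the end of a per-container change log,
# and readers resume from a continuation point.
class ChangeFeedModel:
    def __init__(self):
        self._log = []      # changed documents in modification order
        self._index = {}    # document id -> position in the log

    def upsert(self, doc):
        """Create or replace a document; it surfaces (again) on the feed."""
        if doc["id"] in self._index:
            # Latest-version mode: the older change is superseded.
            self._log[self._index[doc["id"]]] = None
        self._index[doc["id"]] = len(self._log)
        self._log.append(doc)

    def read(self, continuation=0):
        """Return changes after `continuation` plus a new continuation."""
        changes = [d for d in self._log[continuation:] if d is not None]
        return changes, len(self._log)

feed = ChangeFeedModel()
feed.upsert({"id": "1", "state": "created"})
changes, token = feed.read()                     # first drain
feed.upsert({"id": "1", "state": "updated"})     # later modification
changes2, token = feed.read(continuation=token)  # only the new change
```

The continuation token is what makes the feed incremental: a reader (such as the Azure function in this scenario) never re-reads changes it has already drained.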


Question 23


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.

Does this meet the goal?

A. Yes
B. No
Suggested answer: B
Explanation:

Instead, create an Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure Event Hubs as the output.

The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are created or modified. Change feed support works by listening to the container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified.

The following diagram represents the data flow and components involved in the solution:

[Diagram: change feed data flow from the Azure Cosmos DB container through an Azure function to Azure Event Hubs]

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/sql/changefeed-ecommerce-solution


Question 24


Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure function that uses Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure event hub as the output.

Does this meet the goal?

A. Yes
B. No
Suggested answer: A
Explanation:



Question 25


You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. Upserts of items in container1 occur every three seconds.

You have an Azure Functions app named function1 that is supposed to run whenever items are inserted or replaced in container1.

You discover that function1 runs, but not on every upsert.

You need to ensure that function1 processes each upsert within one second of the upsert.

Which property should you change in the Function.json file of function1?

A. checkpointInterval
B. leaseCollectionsThroughput
C. maxItemsPerInvocation
D. feedPollDelay
Suggested answer: D
Explanation:

An upsert operation inserts a new record or updates an existing one in a single call.

feedPollDelay: the time (in milliseconds) to delay between polling a partition for new changes on the feed, after all current changes are drained. The default is 5,000 milliseconds (5 seconds). With upserts arriving every 3 seconds, multiple changes accumulate between polls, so function1 runs but not for every upsert; lowering feedPollDelay lets the trigger pick up each change within one second.

Incorrect Answers:

A: checkpointInterval: when set, it defines, in milliseconds, the interval between lease checkpoints. The default is to checkpoint after each Function call.

C: maxItemsPerInvocation: When set, this property sets the maximum number of items received per Function call. If operations in the monitored collection are performed through stored procedures, transaction scope is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch.

Reference: https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger
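As a sketch of where feedPollDelay lives, a Cosmos DB trigger binding in function.json might look like the following. The database, container, and connection-setting names are placeholders; 500 ms is one plausible value that keeps trigger latency under the one-second requirement.

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "direction": "in",
      "name": "documents",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "db1",
      "collectionName": "container1",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true,
      "feedPollDelay": 500
    }
  ]
}
```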


Question 26


You have the following query.

SELECT * FROM c

WHERE c.sensor = "TEMP1"

AND c.value < 22

AND c.timestamp >= 1619146031231

You need to recommend a composite index strategy that will minimize the request units (RUs) consumed by the query.

What should you recommend?

A. a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp ASC)
B. a composite index for (sensor ASC, value ASC, timestamp ASC) and a composite index for (sensor DESC, value DESC, timestamp DESC)
C. a composite index for (value ASC, sensor ASC) and a composite index for (timestamp ASC, sensor ASC)
D. a composite index for (sensor ASC, value ASC, timestamp ASC)
Suggested answer: A
Explanation:

If a query has a filter with two or more properties, adding a composite index will improve performance.

Consider the following query:

SELECT * FROM c WHERE c.name = "Tim" and c.age > 18

In the absence of a composite index on (name ASC, age ASC), a range index is used for this query. We can improve the efficiency of this query by creating a composite index for name and age.

Queries with multiple equality filters and a maximum of one range filter (such as >, <, <=, >=, !=) will utilize the composite index.

Reference: https://azure.microsoft.com/en-us/blog/three-ways-to-leverage-composite-indexes-in-azure-cosmos-db/
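Answer A translates into the container's indexing policy roughly as follows (a sketch only): each composite index pairs the equality-filtered sensor property with one of the two range-filtered properties, since a single composite index can serve at most one range filter.

```json
{
  "indexingMode": "consistent",
  "includedPaths": [ { "path": "/*" } ],
  "compositeIndexes": [
    [
      { "path": "/sensor", "order": "ascending" },
      { "path": "/value", "order": "ascending" }
    ],
    [
      { "path": "/sensor", "order": "ascending" },
      { "path": "/timestamp", "order": "ascending" }
    ]
  ]
}
```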


Question 27


You plan to create an Azure Cosmos DB Core (SQL) API account that will use customer-managed keys stored in Azure Key Vault.

You need to configure an access policy in Key Vault to allow Azure Cosmos DB access to the keys.

Which three permissions should you enable in the access policy? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.


Question 28


You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The data from a container named telemetry must be added to a Kafka topic named iot.

The solution must store the data in a compact binary format.

Which three configuration items should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.


Question 29


You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to write a dataset. The data flow will use 2,000 Apache Spark partitions.

You need to ensure that the ingestion from each Spark partition is balanced to optimize throughput.

Which sink setting should you configure?


Question 30


You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control (RBAC). The solution must use the principle of least privilege.

Which roles should you assign to User1?

Total 139 questions