Microsoft DP-300 Practice Test - Questions Answers, Page 14

You have SQL Server 2019 on an Azure virtual machine that runs Windows Server 2019. The virtual machine has 4 vCPUs and 28 GB of memory. You scale up the virtual machine to 16 vCPUs and 64 GB of memory.

You need to provide the lowest latency for tempdb.

What is the total number of data files that tempdb should contain?

A. 2
B. 4
C. 8
D. 64
Suggested answer: C

Explanation:

The number of files depends on the number of (logical) processors on the machine. As a general rule, if the number of logical processors is less than or equal to eight, use the same number of data files as logical processors. If the number of logical processors is greater than eight, use eight data files and then if contention continues, increase the number of data files by multiples of 4 until the contention is reduced to acceptable levels or make changes to the workload/code.
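
To illustrate the rule, additional tempdb data files can be added with ALTER DATABASE. The following is a minimal, hypothetical sketch rather than part of the question: the logical name, file path, and sizes are placeholders and would need to match the VM's local SSD layout.

-- Hypothetical example: add one more tempdb data file; repeat with tempdev3,
-- tempdev4, and so on until the recommended total number of data files is reached.
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,                        -- placeholder logical name
    FILENAME = 'D:\tempdb\tempdev2.ndf',    -- placeholder path (local SSD)
    SIZE = 8192MB,
    FILEGROWTH = 64MB
);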

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database

You have 50 Azure SQL databases.

You need to notify the database owner when the database settings, such as the database size and pricing tier, are modified in Azure. What should you do?

A. Create a diagnostic setting for the activity log that has the Security log enabled.
B. For the database, create a diagnostic setting that has the InstanceAndAppAdvanced metric enabled.
C. Create an alert rule that uses a Metric signal type.
D. Create an alert rule that uses an Activity Log signal type.
Suggested answer: D

Explanation:

Activity log events: An alert can trigger on every event, or only when a certain number of events occur.

Incorrect Answers:

C: Metric values: The alert triggers when the value of a specified metric crosses a threshold you assign in either direction. That is, it triggers both when the condition is first met and then afterwards when that condition is no longer being met.

Reference:

https://docs.microsoft.com/en-us/azure/azure-sql/database/alerts-insights-configure-portal

You have several Azure SQL databases on the same Azure SQL Database server in a resource group named ResourceGroup1. You must be alerted when CPU usage exceeds 80 percent for any database. The solution must apply to any additional databases that are created on the Azure SQL server. Which resource type should you use to create the alert?

A. Resource Groups
B. SQL Servers
C. SQL Databases
D. SQL Virtual Machines
Suggested answer: C

Explanation:

There are resource types related to application code, compute infrastructure, networking, storage + databases. You can deploy up to 800 instances of a resource type in each resource group.

Some resources can exist outside of a resource group. These resources are deployed to the subscription, management group, or tenant. Only specific resource types are supported at these scopes.

Reference:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-providers-and-types

You have SQL Server 2019 on an Azure virtual machine that runs Windows Server 2019. The virtual machine has 4 vCPUs and 28 GB of memory. You scale up the virtual machine to 8 vCPUs and 64 GB of memory.

You need to provide the lowest latency for tempdb.

What is the total number of data files that tempdb should contain?

A. 2
B. 4
C. 8
D. 64
Suggested answer: C

Explanation:

The number of files depends on the number of (logical) processors on the machine. As a general rule, if the number of logical processors is less than or equal to eight, use the same number of data files as logical processors. If the number of logical processors is greater than eight, use eight data files and then if contention continues, increase the number of data files by multiples of 4 until the contention is reduced to acceptable levels or make changes to the workload/code.

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database
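
Before adding files, it may help to confirm how many data files tempdb already has. A minimal check, assuming nothing beyond the default tempdb configuration:

-- Count the existing tempdb data files (type_desc = 'ROWS' excludes the log file).
SELECT COUNT(*) AS data_file_count
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';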

DRAG DROP

You are building an Azure virtual machine.

You allocate two 1-TiB, P30 premium storage disks to the virtual machine. Each disk provides 5,000 IOPS.

You plan to migrate an on-premises instance of Microsoft SQL Server to the virtual machine. The instance has a database that contains a 1.2-TiB data file. The database requires 10,000 IOPS.

You need to configure storage for the virtual machine to support the database.

Which three objects should you create in sequence? To answer, move the appropriate objects from the list of objects to the answer area and arrange them in the correct order.



Explanation:

Follow these steps to create a striped virtual disk:

Create a storage pool.

Create a virtual disk that uses a stripe layout.

Create a volume.

Box 1: a storage pool

Box 2: a virtual disk that uses stripe layout

Disk Striping: Use multiple disks and stripe them together to get a combined higher IOPS and Throughput limit. The combined limit per VM should be higher than the combined limits of attached premium disks.

Box 3: a volume

Reference:

https://hanu.com/hanu-how-to-striping-of-disks-for-azure-sql-server/

HOTSPOT

You have an Azure SQL database named db1.

You need to retrieve the resource usage of db1 from the last week.

How should you complete the statement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.



Explanation:

Box 1: sys.resource_stats

sys.resource_stats returns CPU usage and storage data for an Azure SQL Database. It has database_name and start_time columns.

Box 2: DateAdd

The following example returns all databases that are averaging at least 80% of compute utilization over the last one week.

DECLARE @s datetime;
DECLARE @e datetime;
SET @s = DateAdd(d, -7, GetUTCDate());
SET @e = GETUTCDATE();

SELECT database_name, AVG(avg_cpu_percent) AS Average_Compute_Utilization
FROM sys.resource_stats
WHERE start_time BETWEEN @s AND @e
GROUP BY database_name
HAVING AVG(avg_cpu_percent) >= 80;

Incorrect Answers:

sys.dm_exec_requests:

sys.dm_exec_requests returns information about each request that is executing in SQL Server. It does not have a column named database_name.

sys.dm_db_resource_stats:

sys.dm_db_resource_stats does not have any start_time column.

Note: sys.dm_db_resource_stats returns CPU, I/O, and memory consumption for an Azure SQL database. One row exists for every 15 seconds, even if there is no activity in the database. Historical data is maintained for approximately one hour.

sys.dm_user_db_resource_governance returns actual configuration and capacity settings used by resource governance mechanisms in the current database or elastic pool. It does not have any start_time column.

Reference:

https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database
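
For comparison, if only the most recent usage were needed (roughly the last hour rather than the last week), sys.dm_db_resource_stats could be queried instead. A minimal sketch, run in the context of db1 itself:

-- Recent resource usage for the current database, one row per 15-second interval.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;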

A company plans to use Apache Spark analytics to analyze intrusion detection data.

You need to recommend a solution to analyze network and system activity data for malicious activities and policy violations. The solution must minimize administrative efforts. What should you recommend?

A. Azure Data Lake Storage
B. Azure Databricks
C. Azure HDInsight
D. Azure Data Factory
Suggested answer: C

Explanation:

Azure HDInsight offers pre-made, monitoring dashboards in the form of solutions that can be used to monitor the workloads running on your clusters. There are solutions for Apache Spark, Hadoop, Apache Kafka, live long and process (LLAP), Apache HBase, and Apache Storm available in the Azure Marketplace.

Note: With Azure HDInsight you can set up Azure Monitor alerts that will trigger when the value of a metric or the results of a query meet certain conditions. You can condition on a query returning a record with a value that is greater than or less than a certain threshold, or even on the number of results returned by a query. For example, you could create an alert to send an email if a Spark job fails or if a Kafka disk usage becomes over 90 percent full.

Reference:

https://azure.microsoft.com/en-us/blog/monitoring-on-azure-hdinsight-part-4-workload-metrics-and-logs/

You have an Azure data solution that contains an enterprise data warehouse in Azure Synapse Analytics named DW1. Several users execute ad hoc queries against DW1 concurrently.

You regularly perform automated data loads to DW1.

You need to ensure that the automated data loads have enough memory available to complete quickly and successfully when the ad hoc queries run. What should you do?

A. Assign a smaller resource class to the automated data load queries.
B. Create sampled statistics for every column in each table of DW1.
C. Assign a larger resource class to the automated data load queries.
D. Hash distribute the large fact tables in DW1 before performing the automated data loads.
Suggested answer: C

Explanation:

The performance capacity of a query is determined by the user's resource class.

Smaller resource classes reduce the maximum memory per query, but increase concurrency. Larger resource classes increase the maximum memory per query, but reduce concurrency.

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management
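
In a dedicated SQL pool, resource classes are implemented as pre-defined database roles, so assigning a larger resource class to the load user is a single role-membership change. A minimal sketch; the user name 'loaduser' and the choice of staticrc60 are placeholders, not part of the question:

-- Hypothetical example: move the automated data-load user into a larger static resource class.
EXEC sp_addrolemember 'staticrc60', 'loaduser';
-- Optionally remove the user from the smaller class it belonged to before, e.g.:
-- EXEC sp_droprolemember 'staticrc20', 'loaduser';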


You are monitoring an Azure Stream Analytics job.

You discover that the Backlogged Input Events metric is increasing slowly and is consistently non-zero. You need to ensure that the job can handle all the events.

What should you do?

A. Remove any named consumer groups from the connection and use $default.
B. Change the compatibility level of the Stream Analytics job.
C. Create an additional output stream for the existing input stream.
D. Increase the number of streaming units (SUs).
Suggested answer: D

Explanation:

Backlogged Input Events: Number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job, by increasing the SUs.

Reference:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-monitoring

You have an Azure Stream Analytics job.

You need to ensure that the job has enough streaming units provisioned.

You configure monitoring of the SU % Utilization metric.

Which two additional metrics should you monitor? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Late Input Events
B. Out of order Events
C. Backlogged Input Events
D. Watermark Delay
E. Function Events
Suggested answer: C, D

Explanation:

To react to increased workloads and increase streaming units, consider setting an alert of 80% on the SU % Utilization metric. You can also use the Watermark Delay and Backlogged Input Events metrics to see whether there is an impact.

Note: Backlogged Input Events: Number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job by increasing the SUs.

Reference:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-monitoring
