Microsoft DP-300 Practice Test - Questions Answers, Page 17
HOTSPOT

You have an on-premises Microsoft SQL Server 2016 server named Server1 that contains a database named DB1.

You need to perform an online migration of DB1 to an Azure SQL Database managed instance by using Azure Database Migration Service.

How should you configure the backup of DB1? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Question 161
Correct answer: Box 1: Full and log backups only. Box 2: WITH CHECKSUM.

Explanation:

Box 1: Full and log backups only

Make sure to take every backup to separate backup media (separate backup files). Azure Database Migration Service doesn't support backups that are appended to a single backup file. Take the full backup and each log backup to separate backup files.

Box 2: WITH CHECKSUM

Azure Database Migration Service uses the backup and restore method to migrate your on-premises databases to SQL Managed Instance. Azure Database Migration Service only supports backups created using checksum.
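As an illustration, a minimal T-SQL sketch of a DMS-friendly backup sequence for DB1 (the disk paths are hypothetical):

-- Full backup to its own file, with checksum (required by DMS)
BACKUP DATABASE DB1
TO DISK = N'D:\Backup\DB1_Full.bak' -- hypothetical path
WITH FORMAT, INIT, CHECKSUM;

-- Each subsequent log backup goes to its own file, also with checksum
BACKUP LOG DB1
TO DISK = N'D:\Backup\DB1_Log_01.trn' -- hypothetical path
WITH FORMAT, INIT, CHECKSUM;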

Incorrect Answers:

NOINIT Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media password is defined for the media set, the password must be supplied. NOINIT is the default.

UNLOAD

Specifies that the tape is automatically rewound and unloaded when the backup is finished. UNLOAD is the default when a session begins.

Reference:

https://docs.microsoft.com/en-us/azure/dms/known-issues-azure-sql-db-managed-instance-online

DRAG DROP

You have a resource group named App1Dev that contains an Azure SQL Database server named DevServer1. DevServer1 contains an Azure SQL database named DB1. The schema and permissions for DB1 are saved in a Microsoft SQL Server Data Tools (SSDT) database project.

You need to populate a new resource group named App1Test with the DB1 database and an Azure SQL Server named TestServer1. The resources in App1Test must have the same configurations as the resources in App1Dev.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.


Question 162
Correct answer: (answer image not preserved)

DRAG DROP

You have SQL Server 2019 on an Azure virtual machine that contains an SSISDB database.

A recent failure causes the master database to be lost.

You discover that all Microsoft SQL Server Integration Services (SSIS) packages fail to run on the virtual machine.

Which four actions should you perform in sequence to resolve the issue? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.


Question 163
Correct answer: Attach the SSISDB database, turn on the TRUSTWORTHY property and the CLR property, open the master key for the SSISDB database, and encrypt a copy of the master key by using the service master key.

Explanation:

Step 1: Attach the SSISDB database

Step 2: Turn on the TRUSTWORTHY property and the CLR property

If you are restoring the SSISDB database to a SQL Server instance where the SSISDB catalog was never created, enable common language runtime (CLR) integration.

Step 3: Open the master key for the SSISDB database

Restore the master key by this method if you have the original password that was used to create SSISDB.

OPEN MASTER KEY DECRYPTION BY PASSWORD = 'LS1Setup!' -- password used when creating SSISDB

ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY

Step 4: Encrypt a copy of the master key by using the service master key
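A rough T-SQL sketch of steps 2 through 4, reusing the password shown above (a sketch of the idea, not the full documented procedure):

-- Step 2: enable CLR integration and turn on TRUSTWORTHY for SSISDB
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
ALTER DATABASE SSISDB SET TRUSTWORTHY ON;

-- Step 3: open the master key with the original SSISDB password
USE SSISDB;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'LS1Setup!'; -- original SSISDB password

-- Step 4: re-protect the master key with the service master key
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;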

Reference:

https://docs.microsoft.com/en-us/sql/integration-services/backup-restore-and-move-the-ssis-catalog

You are designing a streaming data solution that will ingest variable volumes of data.

You need to ensure that you can change the partition count after creation.

Which service should you use to ingest the data?

A. Azure Event Hubs Standard
B. Azure Stream Analytics
C. Azure Data Factory
D. Azure Event Hubs Dedicated
Suggested answer: D

Explanation:

The partition count for an event hub in a dedicated Event Hubs cluster can be increased after the event hub has been created.

Incorrect Answers:

A: For Azure Event Hubs Standard, the partition count isn't changeable, so you should consider long-term scale when setting the partition count.

Reference:

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#partitions

You have an Azure Synapse Analytics Apache Spark pool named Pool1.

You plan to load JSON files from an Azure Data Lake Storage Gen2 container into the tables in Pool1. The structure and data types vary by file. You need to load the files into the tables. The solution must maintain the source data types.

What should you do?

A. Load the data by using PySpark.
B. Load the data by using the OPENROWSET Transact-SQL command in an Azure Synapse Analytics serverless SQL pool.
C. Use a Get Metadata activity in Azure Data Factory.
D. Use a Conditional Split transformation in an Azure Synapse data flow.
Suggested answer: B

Explanation:

Serverless SQL pool can automatically synchronize metadata from Apache Spark. A serverless SQL pool database will be created for each database existing in serverless Apache Spark pools. Serverless SQL pool enables you to query data in your data lake. It offers a T-SQL query surface area that accommodates semi-structured and unstructured data queries.

To support a smooth experience for in-place querying of data located in Azure Storage files, serverless SQL pool uses the OPENROWSET function with additional capabilities.

The easiest way to see the content of your JSON file is to provide the file URL to the OPENROWSET function and specify the csv FORMAT.
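As a hedged sketch of that pattern, the query below reads line-delimited JSON from a hypothetical storage URL into a single NVARCHAR(MAX) column and extracts a hypothetical id property with JSON_VALUE (for multi-line JSON documents, the docs also add ROWTERMINATOR = '0x0b'):

SELECT TOP 10
    JSON_VALUE(doc, '$.id') AS id, -- hypothetical property
    doc
FROM OPENROWSET(
        BULK 'https://mystorageaccount.dfs.core.windows.net/data/files/*.jsonl', -- hypothetical URL
        FORMAT = 'csv',
        FIELDTERMINATOR = '0x0b',
        FIELDQUOTE = '0x0b'
    ) WITH (doc NVARCHAR(MAX)) AS rows;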

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-json-files

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-data-storage

You are designing a date dimension table in an Azure Synapse Analytics dedicated SQL pool. The date dimension table will be used by all the fact tables. Which distribution type should you recommend to minimize data movement?

A. HASH
B. REPLICATE
C. ROUND_ROBIN
Suggested answer: B

Explanation:

A replicated table has a full copy of the table available on every Compute node. Queries run fast on replicated tables since joins on replicated tables don't require data movement. Replication requires extra storage, though, and isn't practical for large tables.
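For instance, a minimal sketch of a replicated date dimension in a dedicated SQL pool (the column list is abbreviated and hypothetical):

CREATE TABLE dbo.DimDate
(
    DateKey      INT      NOT NULL,
    CalendarDate DATE     NOT NULL,
    CalendarYear SMALLINT NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,      -- full copy on every Compute node
    CLUSTERED COLUMNSTORE INDEX
);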

Incorrect Answers:

C: A round-robin distributed table distributes table rows evenly across all distributions. The assignment of rows to distributions is random. Unlike hash-distributed tables, rows with equal values are not guaranteed to be assigned to the same distribution.

As a result, the system sometimes needs to invoke a data movement operation to better organize your data before it can resolve a query.

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute

You have an Azure Synapse Analytics workspace named WS1 that contains an Apache Spark pool named Pool1. You plan to create a database named DB1 in Pool1.

You need to ensure that when tables are created in DB1, the tables are available automatically as external tables to the built-in serverless SQL pool. Which format should you use for the tables in DB1?

A. JSON
B. CSV
C. Parquet
D. ORC
Suggested answer: C

Explanation:

Serverless SQL pool can automatically synchronize metadata from Apache Spark. A serverless SQL pool database will be created for each database existing in serverless Apache Spark pools. For each Spark external table based on Parquet and located in Azure Storage, an external table is created in a serverless SQL pool database. As such, you can shut down your Spark pools and still query Spark external tables from serverless SQL pool.
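As a sketch, a Parquet-backed table created from Pool1 in Spark SQL (the table name and storage path are hypothetical) then surfaces automatically in the serverless SQL pool:

-- Run in Pool1 (Spark SQL)
CREATE DATABASE IF NOT EXISTS DB1;
CREATE TABLE DB1.Sales
USING PARQUET
LOCATION 'abfss://data@mystorageaccount.dfs.core.windows.net/sales/'; -- hypothetical path

-- Later, from the built-in serverless SQL pool (T-SQL)
SELECT TOP 10 * FROM DB1.dbo.Sales;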

Reference:

https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-storage-files-spark-tables

You are designing an anomaly detection solution for streaming data from an Azure IoT hub. The solution must meet the following requirements:

Send the output to Azure Synapse Analytics.

Identify spikes and dips in time series data.

Minimize development and configuration effort.

What should you include in the solution?

A. Azure SQL Database
B. Azure Databricks
C. Azure Stream Analytics
Suggested answer: C

Explanation:

Anomalies can be identified by routing data via IoT Hub to the built-in machine learning models in Azure Stream Analytics, which can detect spikes and dips in time series data.
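For example, a rough Stream Analytics query using the built-in AnomalyDetection_SpikeAndDip function (the input, output, and column names are hypothetical):

WITH AnomalyScores AS
(
    SELECT
        EventEnqueuedUtcTime AS time,
        CAST(temperature AS float) AS temperature, -- hypothetical sensor column
        AnomalyDetection_SpikeAndDip(CAST(temperature AS float), 95, 120, 'spikesanddips')
            OVER (LIMIT DURATION(second, 120)) AS scores
    FROM IoTHubInput -- hypothetical IoT Hub input
)
SELECT
    time,
    temperature,
    CAST(GetRecordPropertyValue(scores, 'Score') AS float) AS score,
    CAST(GetRecordPropertyValue(scores, 'IsAnomaly') AS bigint) AS isAnomaly
INTO SynapseOutput -- hypothetical Azure Synapse Analytics output
FROM AnomalyScores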

Reference:

https://docs.microsoft.com/en-us/learn/modules/data-anomaly-detection-using-azure-iot-hub/

https://docs.microsoft.com/en-us/azure/stream-analytics/azure-synapse-analytics-output

You are creating a new notebook in Azure Databricks that will support R as the primary language but will also support Scala and SQL. Which switch should you use to switch between languages?

A. \\[<language>]
B. %<language>
C. \\[<language>]
D. @<language>
Suggested answer: B

Explanation:

You can override the default language by specifying the language magic command %<language> at the beginning of a cell. The supported magic commands are: %python, %r, %scala, and %sql.
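For example, in a notebook whose default language is R, a single cell can be switched to SQL (the table name is hypothetical):

%sql
-- This cell runs as SQL even though the notebook default language is R
SELECT COUNT(*) FROM events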

Reference:

https://docs.microsoft.com/en-us/azure/databricks/notebooks/notebooks-use

You plan to build a structured streaming solution in Azure Databricks. The solution will count new events in five-minute intervals and report only events that arrive during the interval. The output will be sent to a Delta Lake table.

Which output mode should you use?

A. complete
B. append
C. update
Suggested answer: A

Explanation:

Complete mode: You can use Structured Streaming to replace the entire table with every batch.

Incorrect Answers:

B: By default, streams run in append mode, which adds new records to the table.

Reference:

https://docs.databricks.com/delta/delta-streaming.html
