ExamGecko

Microsoft DP-900 Practice Test - Questions Answers, Page 2


DRAG DROP

Match the types of workloads to the appropriate scenarios.

To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 11
Correct answer: Question 11

Explanation:

Box 1: Batch

The batch processing model works on a set of data that has been collected over time, while the stream processing model feeds data into an analytics tool as it arrives, often in micro-batches and in near real time.

Batch processing handles a large block of data at once, while stream processing handles individual records or micro-batches of a few records.

Batch processing operates over all, or most, of the data; stream processing operates over a rolling window or the most recent record.
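The contrast can be sketched in a few lines of Python. This is a toy illustration only; the function names and data are invented for the example:

```python
# Minimal sketch contrasting the two models (illustrative only;
# names and data are invented for this example).

def batch_total(events):
    """Batch: operate on the complete, already-collected data set."""
    return sum(events)

def stream_totals(events, window=3):
    """Stream: emit a result per rolling window of the most recent records."""
    results = []
    buffer = []
    for value in events:
        buffer.append(value)
        if len(buffer) > window:
            buffer.pop(0)          # keep only the rolling window
        results.append(sum(buffer))  # a result is available as each record arrives
    return results

readings = [2, 4, 6, 8]
print(batch_total(readings))    # one result over all data: 20
print(stream_totals(readings))  # incremental results: [2, 6, 12, 18]
```

The batch function cannot produce anything until the whole set is collected; the stream function yields a fresh result with every incoming record.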

Box 2: Batch

Box 3: Streaming

Reference:

https://k21academy.com/microsoft-azure/dp-200/batch-processing-vs-stream-processing

DRAG DROP

Match the Azure services to the appropriate requirements.

To answer, drag the appropriate service from the column on the left to its requirement on the right. Each service may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 12
Correct answer: Question 12

Explanation:

Box 1: Azure Data Factory

Box 2: Azure Data Lake Storage

Azure Data Lake Storage natively supports Parquet files. Azure Data Lake Analytics (ADLA) added a public preview of a native extractor and outputter for the popular Parquet file format.

Box 3: Azure Synapse Analytics

Use Azure Synapse Analytics Workspaces.

Reference:

https://docs.microsoft.com/en-us/azure/data-factory/supported-file-formats-and-compression-codecs

DRAG DROP

Match the Azure services to the appropriate locations in the architecture.

To answer, drag the appropriate service from the column on the left to its location on the right. Each service may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 13
Correct answer: Question 13

Explanation:

Box 1: Azure Data Factory

The relevant Azure services for the three ETL phases are Azure Data Factory and SQL Server Integration Services (SSIS).

Box 2: Azure Synapse Analytics

You can copy and transform data in Azure Synapse Analytics by using Azure Data Factory.

Note: The Azure Synapse Analytics connector is supported for the following activities:

Copy activity (with a supported source/sink matrix)

Mapping data flow

Lookup activity

GetMetadata activity
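As a rough sketch of how such a Copy activity hangs together, the dictionary below mirrors the general layout of Data Factory pipeline JSON. The dataset names and the sink type shown here are illustrative assumptions, not copied from the product documentation:

```python
# Rough shape of a Copy activity writing to a Synapse sink, expressed
# as a Python dict. Property names follow the general layout of Data
# Factory pipeline JSON; dataset names and the sink type are invented
# stand-ins for illustration.

copy_activity = {
    "name": "CopySalesToSynapse",
    "type": "Copy",
    "inputs": [{"referenceName": "BlobSalesDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "SynapseSalesDataset", "type": "DatasetReference"}],
    "typeProperties": {
        "source": {"type": "DelimitedTextSource"},  # assumed source type
        "sink": {"type": "SqlDWSink"},              # assumed Synapse sink type
    },
}

print(copy_activity["type"])  # Copy
```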

Reference:

https://docs.microsoft.com/en-us/azure/architecture/data-guide/relational-data/etl

https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse

DRAG DROP

Match the types of data to the appropriate Azure data services.

To answer, drag the appropriate data type from the column on the left to its service on the right. Each data type may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 14
Correct answer: Question 14

Explanation:

Box 1: Image files

Azure Blob storage is suitable for image files.

Box 2: Key/value pairs

The Azure Cosmos DB Table API is a key-value store hosted in the cloud.

Box 3: Relationship between employees

One-to-many relationships between business domain objects occur frequently: for example, one department has many employees. There are several ways to implement one-to-many relationships in the Azure Table service.
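One common pattern is to give every child entity the same PartitionKey so the whole group can be read with a single partition-scoped query. The sketch below models that with plain dictionaries; the entity names and keys are invented for illustration:

```python
# Hypothetical entities sketching one way to model a one-to-many
# relationship in a table store: every employee in a department shares
# the department's PartitionKey, so one partition query returns the
# whole department. Names and keys are invented for illustration.

entities = [
    {"PartitionKey": "Sales", "RowKey": "emp-001", "Name": "Ava"},
    {"PartitionKey": "Sales", "RowKey": "emp-002", "Name": "Ben"},
    {"PartitionKey": "HR",    "RowKey": "emp-003", "Name": "Cleo"},
]

def employees_in(department):
    """Equivalent of a partition-scoped query: filter on PartitionKey."""
    return [e["Name"] for e in entities if e["PartitionKey"] == department]

print(employees_in("Sales"))  # ['Ava', 'Ben']
```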

Reference:

https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-design-modeling

DRAG DROP

Match the Azure Data Lake Storage terms to the appropriate levels in the hierarchy.

To answer, drag the appropriate term from the column on the left to its level on the right. Each term may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 15
Correct answer: Question 15

Explanation:

Box 1: Azure Storage account

Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage.

Box 2: File share

Reference:

https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-create-file-share

DRAG DROP

Match the Azure Data Factory components to the appropriate descriptions.

To answer, drag the appropriate component from the column on the left to its description on the right. Each component may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 16
Correct answer: Question 16

Explanation:

Box 1: Dataset

In Azure Data Factory, a dataset is a named view of data that points to or references the data you want to use in your activities as inputs and outputs.

Box 2: Linked service

Linked services are much like connection strings, which define the connection information needed for Data Factory to connect to external resources.

Box 3: Pipeline

A pipeline is a logical grouping of activities that together perform a task.
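How the three components reference one another can be sketched with plain dictionaries. The names are invented, and this is only a sketch of the relationships, not the real resource schema:

```python
# Sketch of how the three Data Factory components reference each other,
# using invented names: a linked service holds connection info, a dataset
# points at data through a linked service, and a pipeline groups the
# activities that consume datasets.

linked_service = {"name": "MyStorageLinkedService", "type": "AzureBlobStorage"}

dataset = {
    "name": "MyInputDataset",
    "linkedServiceName": linked_service["name"],  # dataset -> linked service
}

pipeline = {
    "name": "MyCopyPipeline",
    "activities": [  # pipeline = logical grouping of activities
        {"name": "CopyStep", "type": "Copy", "inputs": [dataset["name"]]}
    ],
}

print(pipeline["activities"][0]["inputs"])  # ['MyInputDataset']
```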

Reference:

https://k21academy.com/microsoft-azure/dp-100/datastores-and-datasets-in-azure/

https://docs.microsoft.com/en-us/azure/data-factory/concepts-linked-services

https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipelines-activities

DRAG DROP

Match the types of workloads to the appropriate scenarios.

To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 17
Correct answer: Question 17

Explanation:

Box 1: Batch

Batch processing refers to the processing of blocks of data that have already been stored over a period of time.

Box 2: Streaming

Stream processing is a big data technology that processes data in real time as it arrives, detecting conditions within a short period of receiving the data. It allows data to be fed into analytics tools as soon as it is generated, producing instant analytics results.

Box 3: Batch

Reference:

https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/batch-processing

DRAG DROP

You have a table named Sales that contains the following data.

You need to query the table to return the average sales amount per day. The output must produce the following results.

How should you complete the query? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.


Question 18
Correct answer: Question 18

Explanation:

Box 1: SELECT

Box 2: GROUP BY

Example:

When used with a GROUP BY clause, each aggregate function produces a single value for each group instead of a single value for the whole table. The following example produces summary values for each sales territory in the AdventureWorks2012 database: the average bonus received by the salespeople in each territory and the sum of year-to-date sales for each territory.

SELECT TerritoryID, AVG(Bonus) AS 'Average bonus', SUM(SalesYTD) AS 'YTD sales'

FROM Sales.SalesPerson

GROUP BY TerritoryID;
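The same SELECT ... AVG ... GROUP BY pattern can be run end to end with Python's built-in sqlite3 module. The question's Sales table is not reproduced in this dump, so the column names and rows below are invented stand-ins:

```python
import sqlite3

# The question's Sales table is not reproduced here, so the column
# names (salesdate, salesamount) and the rows are invented stand-ins
# used only to demonstrate the SELECT ... AVG ... GROUP BY pattern.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sales (salesdate TEXT, salesamount REAL)")
con.executemany(
    "INSERT INTO Sales VALUES (?, ?)",
    [("2024-01-01", 100.0), ("2024-01-01", 300.0), ("2024-01-02", 50.0)],
)

# Average sales amount per day: one output row per distinct salesdate.
rows = con.execute(
    "SELECT salesdate, AVG(salesamount) FROM Sales "
    "GROUP BY salesdate ORDER BY salesdate"
).fetchall()
print(rows)  # [('2024-01-01', 200.0), ('2024-01-02', 50.0)]
```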

Reference:

https://docs.microsoft.com/en-us/sql/t-sql/functions/avg-transact-sql

DRAG DROP

Match the datastore services to the appropriate descriptions.

To answer, drag the appropriate service from the column on the left to its description on the right. Each service may be used once, more than once, or not at all.

NOTE: Each correct match is worth one point.


Question 19
Correct answer: Question 19

Explanation:

Box 1: Azure Cosmos DB

In Azure Cosmos DB's SQL (Core) API, items are stored as JSON. The type system and expressions are restricted to deal only with JSON types.

Box 2: Azure Files

Azure Files offers native cloud file sharing services based on the SMB protocol.

Reference:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-working-with-json

https://cloud.netapp.com/blog/azure-smb-server-message-block-in-the-cloud-for-azure-files

DRAG DROP

Your company plans to load data from a customer relationship management (CRM) system to a data warehouse by using an extract, load, and transform (ELT) process.

Where does data processing occur for each stage of the ELT process? To answer, drag the appropriate locations to the correct stages. Each location may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.


Question 20
Correct answer: Question 20

Explanation:

Box 1: The CRM system

Data is extracted from the CRM system.

Box 2: The data warehouse

Data is loaded to the data warehouse.

Box 3: The data warehouse

In an ELT process, the transformation occurs in the target data store: the data warehouse uses its own processing capabilities to transform the data, typically by filtering, sorting, aggregating, joining, cleaning, deduplicating, and validating it.
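The ELT order of operations the question describes can be sketched with an in-memory SQLite database standing in for the warehouse. The CRM rows and table names are invented for the example:

```python
import sqlite3

# Sketch of the ELT order of operations, with an in-memory SQLite
# database as a stand-in "data warehouse". CRM rows are invented.

crm_rows = [("Ava", 100), ("Ava", 300), ("Ben", 50)]       # 1. Extract from the CRM

dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE raw_sales (customer TEXT, amount INTEGER)")
dw.executemany("INSERT INTO raw_sales VALUES (?, ?)", crm_rows)  # 2. Load raw data

# 3. Transform inside the warehouse, using its own query engine.
dw.execute(
    "CREATE TABLE sales_by_customer AS "
    "SELECT customer, SUM(amount) AS total FROM raw_sales GROUP BY customer"
)
print(dw.execute("SELECT * FROM sales_by_customer ORDER BY customer").fetchall())
# [('Ava', 400), ('Ben', 50)]
```

The key point the sketch shows is that the transform step runs after the load, as a query against data already inside the target store.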

Reference:

https://docs.microsoft.com/en-us/azure/architecture/data-guide/relational-data/etl

Total 285 questions