
Snowflake DEA-C01 Practice Test - Questions Answers, Page 7


Which two of the following are ways to access elements in a JSON object?

A. Use semicolon notation to traverse a path in a JSON object: <column>:<level1_element>;<level2_element>;<level3_element>.

B. Use curly-bracket notation to traverse a path in an object: <column>{'<level1_element>'}{'<level2_element>'}.

C. Use dot notation to traverse a path in a JSON object: <column>:<level1_element>.<level2_element>.<level3_element>.

D. Use bracket notation to traverse a path in an object: <column>['<level1_element>']['<level2_element>'].
Suggested answer: C, D
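
A minimal Snowflake SQL sketch of both notations; the table raw_fruits and VARIANT column v are hypothetical names used only for illustration:

CREATE OR REPLACE TABLE raw_fruits (v VARIANT);

INSERT INTO raw_fruits
  SELECT PARSE_JSON('{"fruits": {"apple": {"sweet": true}}}');

-- Dot notation: <column>:<element>.<element>...
SELECT v:fruits.apple.sweet FROM raw_fruits;

-- Bracket notation: <column>['<element>']['<element>']...
SELECT v['fruits']['apple']['sweet'] FROM raw_fruits;

Both queries traverse the same path and return the same value (true).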

Jeff, a Data Engineer, accesses elements of a JSON object in his three data loading scripts. He unknowingly varied the letter case while accessing the elements, e.g.:

Script 1 --> fruits:apple.sweet
Script 2 --> FRUITS:apple.sweet
Script 3 --> FRUITS:Apple.Sweet

Which statement is correct?

A. Script 1 and Script 2 traverse paths will be treated the same, but Script 3 will not.

B. Scripts 1, 2, and 3 traverse paths will all be treated the same way.

C. Script 1 and Script 3 traverse paths will be treated the same way.

D. Script 2 and Script 3 traverse paths will be the same.
Suggested answer: A

Explanation:

There are two ways to access elements in a JSON object:

· Dot notation
· Bracket notation

Regardless of which notation you use, the column name is case-insensitive, but element names are case-sensitive.
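
A short sketch of this rule, assuming a hypothetical table t with a VARIANT column named fruits whose JSON keys are lowercase:

SELECT fruits:apple.sweet FROM t;   -- Script 1
SELECT FRUITS:apple.sweet FROM t;   -- Script 2: same path; column-name case is ignored
SELECT FRUITS:Apple.Sweet FROM t;   -- Script 3: returns NULL; element names are case-sensitive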

In which data modelling technique does a Data Engineer generally use the terms Hubs and Satellites?

A. Data Hub

B. Data Vault

C. Star Schema

D. Snowflake Schema
Suggested answer: B

Explanation:

In Data Vault modelling, Hubs are entities of interest to the business.

They contain just a distinct list of business keys and metadata about when each key was first loaded and from where.

In Data Vault modelling, Satellites connect to Hubs or Links. They are point-in-time, so we can ask and answer the question, "what did we know when?"

Satellites contain data about their parent Hub or Link and metadata about when the data was loaded, from where, and a business effectivity date.
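
A minimal sketch of a Hub and a Satellite, using hypothetical table and column names:

CREATE TABLE hub_customer (
    customer_hk   VARCHAR NOT NULL,   -- hash key derived from the business key
    customer_id   VARCHAR NOT NULL,   -- the distinct business key
    load_ts       TIMESTAMP_NTZ,      -- when the key was first loaded
    record_source VARCHAR             -- where it was loaded from
);

CREATE TABLE sat_customer_details (
    customer_hk    VARCHAR NOT NULL,  -- points to the parent Hub
    load_ts        TIMESTAMP_NTZ,     -- point-in-time: what did we know when?
    record_source  VARCHAR,
    effective_date DATE,              -- business effectivity date
    customer_name  VARCHAR,           -- descriptive attributes about the Hub
    customer_email VARCHAR
);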

Which of the following statements about the star schema is false?

A. Star schemas are denormalized.

B. The star schema separates business process data into facts, which hold the measurable, quantitative data about a business, and dimensions, which are descriptive attributes related to the fact data.

C. The star schema is more flexible in terms of analytical needs compared to Data Vault modelling.

D. The star schema is an important special case of the snowflake schema and is more effective for handling simpler queries.
Suggested answer: C
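
A minimal star-schema sketch (all names hypothetical) illustrating the fact/dimension split described in option B:

CREATE TABLE dim_product (
    product_key  INT,
    product_name VARCHAR,
    category     VARCHAR      -- denormalized: no separate category table
);

CREATE TABLE fact_sales (
    product_key INT,          -- FK to dim_product
    date_key    INT,          -- FK to a date dimension
    quantity    INT,          -- measurable, quantitative facts
    amount      NUMBER(12,2)
);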

Which of the following systems has these characteristics?

· It keeps all the raw data.
· Its users are generally data scientists and data developers.
· It has a flat architecture.
· It is highly agile.

A. Data Warehouse

B. Data Mart

C. Data Lake

D. Data Hub
Suggested answer: C

As a Data Engineer, you have been asked to access data held in the AWS Glacier Deep Archive storage class for historical data analysis. Which statement is correct to recommend?

A. You cannot access data held in archival cloud storage classes that require restoration before the data can be retrieved.

B. Loading data from AWS cloud storage services is supported regardless of the cloud platform that hosts your Snowflake account.

C. Data can be accessed from an external stage using AWS PrivateLink in this case.

D. We can simply access AWS Glacier Deep Archive storage external stage data using the PUT command.

E. Upload (i.e. stage) files to your cloud storage account using the tools provided by the cloud storage service.
Suggested answer: A

Explanation:

External stage

Reference data files stored in a location outside of Snowflake. Currently, the following cloud storage services are supported:

· Amazon S3 buckets
· Google Cloud Storage buckets
· Microsoft Azure containers

The storage location can be either private/protected or public.

You cannot access data held in archival cloud storage classes that require restoration before the data can be retrieved. These archival storage classes include, for example, the Amazon S3 Glacier Flexible Retrieval or Glacier Deep Archive storage class, and Microsoft Azure Archive Storage.
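
A hedged sketch of referencing files in external cloud storage via an external stage (the bucket, path, and integration names are hypothetical); note this only works for objects in standard, non-archival storage classes:

CREATE STAGE my_s3_stage
  URL = 's3://mybucket/history/'
  STORAGE_INTEGRATION = my_s3_int   -- assumes a pre-created storage integration
  FILE_FORMAT = (TYPE = CSV);

-- Files still in Glacier Deep Archive must first be restored (e.g. to
-- S3 Standard) before Snowflake can list or load them.
LIST @my_s3_stage;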

In one of your schemas, you have been asked to create internal stages. Which of the considerations below are incorrect? [Select all that apply]

A. User stages can be altered or dropped, just like table stages.

B. The table stage type is designed to store files that are staged and managed by one or more users but loaded into only a single table.

C. A named internal stage type can store files that are staged and managed by one or more users and loaded into one or more tables.

D. A table stage is available for each table created in Snowflake.
Suggested answer: A

Explanation:

A stage specifies where data files are stored (i.e. "staged") so that the data in the files can be loaded into a table.

Types of Internal Stages

· User Stages

· Table Stages

· Named Stages

By default, each user and table in Snowflake is automatically allocated an internal stage for staging data files to be loaded. In addition, you can create named internal stages.

File staging information is required during both steps in the data loading process:

You must specify an internal stage in the PUT command when uploading files to Snowflake.

You must specify the same stage in the COPY INTO <table> command when loading data into a table from the staged files.
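
A sketch of this two-step flow with hypothetical stage, table, and file names:

-- Step 1: upload a local file, specifying the internal stage in PUT.
PUT file:///tmp/data.csv @my_int_stage;

-- Step 2: load into the table, specifying the same stage in COPY INTO.
COPY INTO mytable FROM @my_int_stage;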

Consider the best type of stage for specific data files. Each option provides benefits and potential drawbacks.

User Stages

Each user has a Snowflake stage allocated to them by default for storing files. This stage is a convenient option if your files will only be accessed by a single user but need to be copied into multiple tables.

User stages have the following characteristics and limitations:

User stages are referenced using @~; e.g. use LIST @~ to list the files in a user stage.

Unlike named stages, user stages cannot be altered or dropped.

User stages do not support setting file format options. Instead, you must specify file format and copy options as part of the COPY INTO <table> command.

This option is not appropriate if:

Multiple users require access to the files.

The current user does not have INSERT privileges on the tables the data will be loaded into.
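
A user-stage sketch (table and file names are hypothetical):

PUT file:///tmp/data.csv @~/staged;   -- @~ is the current user's stage
LIST @~;                              -- list the staged files

-- File format options cannot be set on the user stage itself, so they
-- are supplied on the COPY INTO command instead.
COPY INTO mytable FROM @~/staged
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);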

Table Stages

Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.

Table stages have the following characteristics and limitations:

Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable.

Unlike named stages, table stages cannot be altered or dropped.

Table stages do not support transforming data while loading it (i.e. using a query as the source for the COPY command).

Note that a table stage is not a separate database object; rather, it is an implicit stage tied to the table itself. A table stage has no grantable privileges of its own. To stage files to a table stage, list the files, query them on the stage, or drop them, you must be the table owner (have the role with the OWNERSHIP privilege on the table).

This option is not appropriate if you need to copy the data in the files into multiple tables.
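
A table-stage sketch for a hypothetical table mytable:

PUT file:///tmp/data.csv @%mytable;   -- @%<table> is the implicit table stage

-- The FROM clause can name the table stage explicitly; if it is omitted,
-- COPY INTO <table> defaults to the table's own stage.
COPY INTO mytable FROM @%mytable;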

Named Stages

Named stages are database objects that provide the greatest degree of flexibility for data loading:

Users with the appropriate privileges on the stage can load data into any table.

Because the stage is a database object, the security/access rules that apply to all objects apply. The privileges to use a stage can be granted or revoked from roles. In addition, ownership of the stage can be transferred to another role.

If you plan to stage data files that will be loaded only by you, or will be loaded only into a single table, then you may prefer to simply use either your user stage or the stage for the table into which you will be loading data.

Named stages are optional but recommended when you plan regular data loads that could involve multiple users and/or tables.
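
A named-stage sketch (the stage and role names are hypothetical):

CREATE STAGE my_named_stage
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

-- Because a named stage is a database object, access can be granted to
-- roles like any other object.
GRANT READ, WRITE ON STAGE my_named_stage TO ROLE loader_role;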

As a Data Engineer, you are required to query the most recent data from a large dataset that resides in external cloud storage. How would you design your data pipeline with the fastest time to delivery in mind?

A. Data pipelines would be created to first load data into internal stages and then into a permanent table with an SCD Type 2 transformation.

B. Directly query external tables on top of the existing data stored in external cloud storage for analysis, without first loading it into Snowflake.

C. Unload data into Snowflake internal storage using the PUT command.

D. Snowpipe can be leveraged with streams to load data in a micro-batch fashion, with CDC streams capturing only the most recent data.

E. External tables with materialized views can be created in Snowflake.
Suggested answer: E

Explanation:

In a typical table, the data is stored in the database; however, in an external table, the data is stored in files in an external stage. External tables store file-level metadata about the data files, such as the filename, a version identifier and related properties. This enables querying data stored in files in an external stage as if it were inside a database. External tables can access data stored in any format supported by COPY INTO <table> statements.

External tables are read-only, therefore no DML operations can be performed on them; however, external tables can be used for query and join operations. Views can be created against external tables.

Querying data stored external to the database is likely to be slower than querying native database tables; however, materialized views based on external tables can improve query performance.

Creating external tables enables users to query existing data stored in external cloud storage for analysis without first loading it into Snowflake. The source of truth for the data remains in the external cloud storage. Data sets materialized in Snowflake via materialized views are read-only.

This solution is especially beneficial to accounts that have a large amount of data stored in external cloud storage and only want to query a portion of the data; for example, the most recent data. Users can create materialized views on subsets of this data for improved query performance.
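
A hedged sketch (stage, table, and column names are hypothetical) of an external table with a materialized view over a recent subset:

CREATE EXTERNAL TABLE ext_events
  WITH LOCATION = @my_s3_stage/events/
  FILE_FORMAT = (TYPE = PARQUET)
  AUTO_REFRESH = TRUE;

-- Materialize only the recent slice for faster queries; a fixed cutoff
-- is used because materialized views disallow non-deterministic
-- functions such as CURRENT_TIMESTAMP.
CREATE MATERIALIZED VIEW mv_recent_events AS
  SELECT value:event_id::VARCHAR       AS event_id,
         value:event_ts::TIMESTAMP_NTZ AS event_ts
  FROM ext_events
  WHERE value:event_ts::TIMESTAMP_NTZ >= '2023-01-01'::TIMESTAMP_NTZ;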

Mark the incorrect statement about a Data Engineer using the COPY INTO <table> command to load data from files into Snowflake tables.

A. For data loading of files with semi-structured file formats (JSON, Avro, etc.), the only supported character set is UTF-16.

B. For loading data from all supported semi-structured file formats (JSON, Avro, etc.), as well as for unloading data, UTF-8 is the only supported character set.

C. For a local environment, files are first copied ("staged") to an internal (Snowflake) stage, then loaded into a table.

D. The UTF-32 and UTF-16 encoding character sets are both supported for loading data from delimited files (CSV, TSV, etc.).
Suggested answer: A

Explanation:

For data loading of delimited files (CSV, TSV, etc.), the default character set is UTF-8. To use any other character set, you must explicitly specify the encoding to use for loading.

For semi-structured file formats (JSON, Avro, etc.), the only supported character set is UTF-8.

The rest of the statements are correct.
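
A sketch of loading a UTF-16 encoded delimited file by explicitly specifying the encoding (the stage, file, and table names are hypothetical):

COPY INTO mytable
  FROM @my_stage/utf16_data.csv
  FILE_FORMAT = (TYPE = CSV ENCODING = 'UTF16');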

Let us say you have a list of 50 source files that need to be loaded into a Snowflake internal stage. All these source system files are already Brotli-compressed. Which statement is correct with respect to the compression of staged files?

A. Even though the source files are already compressed, Snowflake applies its default gzip2 compression to optimize storage cost.

B. Snowflake automatically detects the Brotli compression and will skip further compression of all 50 files.

C. Auto-detection is not yet supported for Brotli-compressed files; when staging or loading Brotli-compressed files, you must explicitly specify the compression method that was used.

D. When staging 50 compressed files in a Snowflake stage, the files are automatically compressed using gzip.
Suggested answer: C

Explanation:

Auto-detection is not yet supported for Brotli-compressed files; when staging or loading Brotli-compressed files, you must explicitly specify the compression method that was used.

To learn more about the compression of staged files, refer to:

https://docs.snowflake.com/en/user-guide/intro-summary-loading.html#compression-of-staged-files
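
A sketch of staging the Brotli-compressed files with the compression method stated explicitly (the stage and file names are hypothetical):

PUT file:///tmp/source_*.csv.br @my_int_stage
  SOURCE_COMPRESSION = BROTLI   -- must be explicit; Brotli is not auto-detected
  AUTO_COMPRESS = FALSE;        -- files are already compressed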
