
Snowflake DEA-C01 Practice Test - Questions Answers, Page 11


Which one is not a core benefit of micro-partitioning?

A.
Snowflake micro-partitions are derived automatically; they do not need to be explicitly defined upfront or maintained by users.
B.
Enables extremely efficient DML and fine-grained pruning for faster queries.
C.
Micro-partitions can overlap in their range of values, which helps data skewing.
D.
Columns are stored independently within micro-partitions, often referred to as columnar storage.
E.
Columns are also compressed individually within micro-partitions.
Suggested answer: C

Explanation:

The benefits of Snowflake's approach to partitioning table data include:

· In contrast to traditional static partitioning, Snowflake micro-partitions are derived automatically; they don't need to be explicitly defined up-front or maintained by users.

· As the name suggests, micro-partitions are small in size (50 to 500 MB, before compression), which enables extremely efficient DML and fine-grained pruning for faster queries.

· Micro-partitions can overlap in their range of values, which, combined with their uniformly small size, helps prevent skew.

· Columns are stored independently within micro-partitions, often referred to as columnar storage.

This enables efficient scanning of individual columns; only the columns referenced by a query are scanned.

· Columns are also compressed individually within micro-partitions. Snowflake automatically determines the most efficient compression algorithm for the columns in each micro-partition.
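For illustration, a minimal sketch of how pruning plays out (the table and column names below are hypothetical, not from the question): a range predicate lets Snowflake consult each micro-partition's per-column min/max metadata and skip partitions that cannot match.

SELECT SUM(amount)
FROM sales
WHERE sale_date BETWEEN '2023-01-01' AND '2023-01-31';
-- Only micro-partitions whose sale_date range overlaps January 2023 are
-- scanned; all others are pruned using micro-partition metadata.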

Select the incorrect statement regarding clustering depth.

A.
The clustering depth for a populated table measures the average depth (1 or greater) of the overlapping micro-partitions for specified columns in a table. The smaller the average depth, the better clustered the table is with regards to the specified columns.
B.
It helps monitor the clustering "health" of a large table, particularly over time as DML is performed on the table.
C.
Clustering depth can be used for determining whether a large table would benefit from explicitly defining a clustering key.
D.
A table with no micro-partitions (i.e. an unpopulated/empty table) has a clustering depth of 1.
Suggested answer: D

Explanation:

A table with no micro-partitions (i.e. an unpopulated/empty table) has a clustering depth of 0.

To view/monitor the clustering metadata for a table, Snowflake provides which of the following system functions?

A.
SYSTEM$CLUSTERING_DEPTH_KEY
B.
SYSTEM$CLUSTERING_KEY_INFORMATION (including clustering depth)
C.
SYSTEM$CLUSTERING_DEPTH
D.
SYSTEM$CLUSTERING_INFORMATION (including clustering depth)
Suggested answer: C, D

Explanation:

SYSTEM$CLUSTERING_DEPTH:

Computes the average depth of the table according to the specified columns (or the clustering key defined for the table). The average depth of a populated table (i.e. a table containing data) is always 1 or more. The smaller the average depth, the better clustered the table is with regards to the specified columns.

Calculate the clustering depth for a table using two columns in the table:

SELECT SYSTEM$CLUSTERING_DEPTH('TPCH_PRODUCT', '(C2, C9)');

SYSTEM$CLUSTERING_INFORMATION:

Returns clustering information, including average clustering depth, for a table based on one or more columns in the table.

SELECT SYSTEM$CLUSTERING_INFORMATION('SAMPLE_TABLE', '(col1, col3)');

Which of the following is a false statement about materialized views?

A.
Snowflake does not allow standard DML (e.g. INSERT, UPDATE, DELETE) on materialized views.
B.
Snowflake does not allow users to truncate materialized views.
C.
Materialized views are first-class account objects.
D.
A materialized view can also be used as the data source for a subquery.
E.
Materialized views can be secure views.
F.
Clustering a subset of the materialized views on a table tends to be more cost-effective than clustering the table itself.
Suggested answer: C

Explanation:

Materialized views are first-class database objects, not account objects; the rest of the statements are true.
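For illustration, a minimal materialized view sketch (the base table SALES and its columns are hypothetical). The view is created inside a schema of a database, which is why it is a database-level object rather than an account-level one:

CREATE MATERIALIZED VIEW mv_daily_sales AS
  SELECT sale_date, SUM(amount) AS total_amount
  FROM sales
  GROUP BY sale_date;
-- Standard DML (INSERT/UPDATE/DELETE) and TRUNCATE are rejected when
-- issued directly against mv_daily_sales.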

Charles, a lead data engineer with the ACCOUNTADMIN role, wants to configure Time Travel for one of the schema's objects. He set up the MIN_DATA_RETENTION_TIME_IN_DAYS parameter with value 79 at the account level, but he found that DATA_RETENTION_TIME_IN_DAYS is already set with value 81 at the account level. What would be the effective minimum data retention period for an object?

A.
90
B.
81
C.
79
D.
There is no such MIN_DATA_RETENTION_TIME_IN_DAYS parameter
Suggested answer: B

Explanation:

A user with the ACCOUNTADMIN role can also set the MIN_DATA_RETENTION_TIME_IN_DAYS at the account level. This parameter setting enforces a minimum data retention period for databases, schemas, and tables. Setting MIN_DATA_RETENTION_TIME_IN_DAYS does not alter or replace the DATA_RETENTION_TIME_IN_DAYS parameter value. It may, however, change the effective data retention period for objects. When MIN_DATA_RETENTION_TIME_IN_DAYS is set at the account level, the data retention period for an object is determined by MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS).
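A minimal sketch of the scenario from the question, assuming a user with the ACCOUNTADMIN role:

ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 81;
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 79;
-- Effective retention for an object = MAX(81, 79) = 81 days.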

Changing the retention period for your account or individual objects changes the value for all lower-level objects that do not have a retention period explicitly set?

A.
TRUE
B.
FALSE
Suggested answer: A
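As a hedged illustration (the database name my_db is hypothetical), setting the retention period at the database level cascades to every schema and table in it that has no explicitly set retention period of its own:

ALTER DATABASE my_db SET DATA_RETENTION_TIME_IN_DAYS = 30;
-- Schemas and tables in my_db without their own explicit
-- DATA_RETENTION_TIME_IN_DAYS setting now inherit 30 days.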

In an effort to recover dropped child tables within a schema named SCV_SCHEMA, a data engineer found that the DATA_RETENTION_TIME_IN_DAYS parameter is set with value 45 days at the schema level and the data retention period for the child tables is explicitly set at 85 days. What will happen when she tries to run the UNDROP TABLE command on the child tables to recover them on the 50th day, assuming SCV_SCHEMA was already dropped on the 45th day?

A.
To honor the data retention period for child tables, she will be able to recover the child tables on the 50th day, as DATA_RETENTION_TIME_IN_DAYS is explicitly set with a higher retention value.
B.
When a schema is already dropped, the data retention period for child tables, if explicitly set to be different from the retention of the schema, is not honored. So the UNDROP command will fail to run on the 50th day for child table recovery.
C.
Child tables can be recovered using Fail-safe SQL commands.
D.
The data engineer needs to first recover the schema, and then the child tables will automatically be recovered irrespective of retention inheritance.
Suggested answer: B

Explanation:

Dropped Containers and Object Retention Inheritance

Currently, when a database is dropped, the data retention period for child schemas or tables, if explicitly set to be different from the retention of the database, is not honored. The child schemas or tables are retained for the same period of time as the database.

Similarly, when a schema is dropped, the data retention period for child tables, if explicitly set to be different from the retention of the schema, is not honored. The child tables are retained for the same period of time as the schema.

To honor the data retention period for these child objects (schemas or tables), drop them explicitly before you drop the database or schema.
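A minimal sketch for the question's scenario (the child table name is hypothetical): dropping the child tables explicitly before dropping the schema preserves their own 85-day retention.

DROP TABLE scv_schema.child_table;  -- retained under its own 85-day setting
DROP SCHEMA scv_schema;             -- retained under its 45-day setting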

To support Time Travel, which of the following SQL extensions/parameters/commands have been implemented?

A.
AT | BEFORE clause which can be specified in the CREATE … CLONE commands.
B.
ONSET (time difference in seconds from the present time)
C.
OFFSET (time difference in seconds from the present time)
D.
STATEMENT_ID (identifier for statement, e.g. query ID)
E.
STATEMENT (identifier for statement, e.g. query ID)
F.
UNDROP command for tables, schemas, and databases.
Suggested answer: A, C, E, F
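Hedged examples of the supported extensions (the table name and query ID below are hypothetical placeholders):

SELECT * FROM my_table AT(OFFSET => -60*5);                -- state 5 minutes ago
SELECT * FROM my_table BEFORE(STATEMENT => '<query_id>');  -- state before a given statement ran
CREATE TABLE my_table_restored CLONE my_table AT(OFFSET => -3600);
UNDROP TABLE my_table;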

The following CREATE DATABASE command creates a clone of the database snowmy_db:

Create database pods_db clone snowmy_db before (statement => '7e5d0cb9-005e-94e6-b058-k8f5b37c5725');

What are the possible reasons for the cloning operation failing for this database?

A.
The Time Travel statement query time is beyond the retention time of a few current children (e.g., a table) of the database entity.
B.
The Time Travel statement query time is at or before the point in time when the object was created.
C.
The CREATE DATABASE query fails due to a compilation error, as it does not support the STATEMENT keyword.
D.
SQL compilation error: "Incorrect Syntax 'before' while creating database"
Suggested answer: A, B

Which of the following statements is/are incorrect regarding Fail-safe data recovery?

A.
Data stored in temporary tables is not recoverable after the table is dropped, as they do not have Fail-safe.
B.
Historical data in transient tables can be recovered by Snowflake due to operation failure after the Time Travel retention period ends, using Fail-safe.
C.
Long-lived tables, such as fact tables, should always be defined as permanent to ensure they are fully protected by Fail-safe.
D.
Short-lived tables (i.e. <1 day), such as ETL work tables, can be defined as transient to eliminate Fail-safe costs.
E.
If downtime and the time required to reload lost data are factors, permanent tables, even with their added Fail-safe costs, may offer a better overall solution than transient tables.
Suggested answer: B

Explanation:

Transient tables have no Fail-safe period, so Snowflake cannot recover their historical data once the Time Travel retention period ends.