ExamGecko

Snowflake COF-C02 Practice Test - Questions Answers, Page 15


Which methods can be used to delete staged files from a Snowflake stage? (Choose two.)

A. Use the DROP <file> command after the load completes.
B. Specify the TEMPORARY option when creating the file format.
C. Specify the PURGE copy option in the COPY INTO <table> command.
D. Use the REMOVE command after the load completes.
E. Use the DELETE LOAD HISTORY command after the load completes.
Suggested answer: C, D

Explanation:

To delete staged files from a Snowflake stage, you can specify the PURGE copy option in the COPY INTO <table> command, which automatically deletes the files after they have been successfully loaded. Alternatively, you can run the REMOVE command after the load completes to delete the files from the stage manually.

References: COPY INTO <table>, REMOVE
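A minimal sketch of both correct options; the table, stage, and file names are illustrative:

```sql
-- Option C: PURGE deletes the staged files automatically after a successful load.
COPY INTO my_table
  FROM @my_stage/data/
  FILE_FORMAT = (TYPE = CSV)
  PURGE = TRUE;

-- Option D: REMOVE deletes staged files manually after the load completes.
REMOVE @my_stage/data/ PATTERN = '.*[.]csv[.]gz';
```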

Assume there is a table consisting of five micro-partitions with values ranging from A to Z.

Which diagram indicates a well-clustered table?

[The answer diagrams for options A-D are not reproduced here.]

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: C

Explanation:

A well-clustered table in Snowflake stores related data close together, so each micro-partition covers a narrow range of values with little overlap between partitions. This lets the query optimizer prune micro-partitions and scan less data. The diagram in option C indicates a well-clustered table because its micro-partitions hold narrow, largely non-overlapping value ranges.

References: Snowflake Micro-partitions & Table Clustering
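As an illustrative sketch (the table and column names are assumed), clustering can be applied and then inspected like this:

```sql
-- Define a clustering key so Snowflake co-locates rows with similar values
-- in the same micro-partitions.
ALTER TABLE my_table CLUSTER BY (letter_col);

-- Inspect how well the table is clustered on that column; the output
-- includes overlap depth statistics for the micro-partitions.
SELECT SYSTEM$CLUSTERING_INFORMATION('my_table', '(letter_col)');
```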

What is an advantage of using an explain plan instead of the query profiler to evaluate the performance of a query?

A. The explain plan output is available graphically.
B. An explain plan can be used to conduct performance analysis without executing a query.
C. An explain plan will handle queries with temporary tables and the query profiler will not.
D. An explain plan's output will display automatic data skew optimization information.
Suggested answer: B

Explanation:

An explain plan is beneficial because it shows how a query would be processed without actually executing it. This helps in understanding a query's performance implications and potential bottlenecks without consuming the compute resources that running the query would require.
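A quick sketch of the difference (table and column names are illustrative): prefixing a query with EXPLAIN returns the compiled plan only, whereas the query profiler requires the query to have run.

```sql
-- EXPLAIN compiles the query and returns its plan without executing it,
-- so no warehouse credits are consumed for the query itself.
EXPLAIN
SELECT c.name, SUM(o.amount)
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.name;
```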

Which data types are supported by Snowflake when using semi-structured data? (Choose two.)

A. VARIANT
B. VARRAY
C. STRUCT
D. ARRAY
E. QUEUE
Suggested answer: A, D

Explanation:

Snowflake supports the VARIANT and ARRAY data types for semi-structured data. VARIANT can store a value of any other type, including OBJECT and ARRAY, making it suitable for semi-structured formats such as JSON. ARRAY stores an ordered list of elements.
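A small sketch of both types in use (table and field names are illustrative):

```sql
CREATE OR REPLACE TABLE events (payload VARIANT);

-- PARSE_JSON produces a VARIANT holding the whole JSON document.
INSERT INTO events
  SELECT PARSE_JSON('{"user": "a1", "tags": ["red", "blue"]}');

-- Individual fields are extracted with path notation and cast as needed;
-- the embedded JSON array can be cast to the ARRAY type.
SELECT payload:user::STRING AS user_name,
       payload:tags::ARRAY  AS tags
FROM events;
```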

Why does Snowflake recommend file sizes of 100-250 MB compressed when loading data?

A. Optimizes the virtual warehouse size and multi-cluster setting to economy mode
B. Allows a user to import the files in a sequential order
C. Increases the latency staging and accuracy when loading the data
D. Allows optimization of parallel operations
Suggested answer: D

Explanation:

Snowflake recommends compressed file sizes of roughly 100-250 MB when loading data in order to optimize parallel processing. Files in this size range can be loaded in parallel across a warehouse's threads, which maximizes the efficiency of the virtual warehouse and speeds up the data load.
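For example, a load that points at many similarly sized files lets COPY process each file independently (the stage path, pattern, and table name below are illustrative):

```sql
-- Each matching ~100-250 MB file can be processed in parallel by the
-- warehouse; one huge file would serialize the load instead.
COPY INTO my_table
  FROM @my_stage/daily/
  PATTERN = '.*part_[0-9]+[.]csv[.]gz'
  FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);
```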

Which of the following features are available with the Snowflake Enterprise edition? (Choose two.)

A. Database replication and failover
B. Automated index management
C. Customer managed keys (Tri-secret secure)
D. Extended time travel
E. Native support for geospatial data
Suggested answer: A, D

Explanation:

The Snowflake Enterprise edition includes database replication and failover for business continuity and disaster recovery, as well as extended Time Travel for longer data retention periods.
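Extended Time Travel is configured per object via a retention parameter; the table name below is illustrative:

```sql
-- Standard edition caps data retention at 1 day; Enterprise edition
-- allows extending Time Travel retention up to 90 days.
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 90;
```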

What is the default file size when unloading data from Snowflake using the COPY command?

A. 5 MB
B. 8 GB
C. 16 MB
D. 32 MB
Suggested answer: C

Explanation:

When unloading data with the COPY INTO <location> command, Snowflake writes output files of approximately 16 MB by default: the MAX_FILE_SIZE copy option defaults to 16777216 bytes. Larger or smaller output files can be produced by setting MAX_FILE_SIZE explicitly.
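A sketch of overriding the default (stage and table names are illustrative):

```sql
-- MAX_FILE_SIZE defaults to 16777216 bytes (16 MB); set it explicitly
-- to change the approximate size of each unloaded file.
COPY INTO @my_stage/unload/
  FROM my_table
  FILE_FORMAT = (TYPE = CSV)
  MAX_FILE_SIZE = 52428800;  -- ~50 MB per output file
```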

What features that are part of the Continuous Data Protection (CDP) feature set in Snowflake do not require additional configuration? (Choose two.)

A. Row level access policies
B. Data masking policies
C. Data encryption
D. Time Travel
E. External tokenization
Suggested answer: C, D

Explanation:

Data encryption and Time Travel are the parts of Snowflake's Continuous Data Protection (CDP) feature set that do not require additional configuration. Data encryption is applied automatically to all stored data, and Time Travel allows querying and restoring historical data without any extra setup.
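Time Travel works out of the box, as in this sketch (the table name is illustrative):

```sql
-- Query the table as it existed one hour ago; no prior setup is required.
SELECT * FROM orders AT(OFFSET => -3600);

-- A dropped table can be recovered from Time Travel within the
-- retention period.
DROP TABLE orders;
UNDROP TABLE orders;
```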

Which Snowflake layer is always leveraged when accessing a query from the result cache?

A. Metadata
B. Data Storage
C. Compute
D. Cloud Services
Suggested answer: D

Explanation:

The Cloud Services layer in Snowflake is responsible for managing the result cache. When a query is executed, its results are stored in this cache, and subsequent identical queries can be served from the cached results without re-executing the entire query.
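As a sketch (the query is illustrative), result-cache reuse can be observed and controlled at the session level:

```sql
-- Re-running an identical query within the cache retention window can be
-- served by the Cloud Services layer, even if the warehouse is suspended.
SELECT region, COUNT(*) FROM sales GROUP BY region;

-- Disable result-cache reuse for this session to force re-execution.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```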

A Snowflake Administrator needs to ensure that sensitive corporate data in Snowflake tables is not visible to end users, but is partially visible to functional managers.

How can this requirement be met?

A. Use data encryption.
B. Use dynamic data masking.
C. Use secure materialized views.
D. Revoke all roles for functional managers and end users.
Suggested answer: B

Explanation:

Dynamic data masking is a feature in Snowflake that allows administrators to define masking policies to protect sensitive data. It enables partial visibility of the data to certain roles, such as functional managers, while hiding it from others, like end users.
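A sketch of such a policy; the role, table, and column names are assumptions for illustration:

```sql
-- Functional managers see a partially masked email; all other roles
-- see a fixed mask string.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'FUNCTIONAL_MANAGER'
      THEN CONCAT('*****', SUBSTR(val, POSITION('@' IN val)))
    ELSE '*** MASKED ***'
  END;

-- Attach the policy to the sensitive column.
ALTER TABLE employees MODIFY COLUMN email
  SET MASKING POLICY email_mask;
```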

Total 716 questions