
Snowflake COF-C02 Practice Test - Questions Answers, Page 6


Question 51


Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).

A. Load files that are approximately 25 MB or smaller.
B. Remove all dates and timestamps.
C. Load files that are approximately 100-250 MB (or larger).
D. Avoid using embedded characters such as commas for numeric data types.
E. Remove semi-structured data types.
Suggested answer: C, D

Explanation:

When loading data into Snowflake, it is recommended to:

C. Load files that are approximately 100-250 MB (or larger): This size is optimal for parallel processing and helps maximize throughput. Smaller files introduce per-file overhead that can outweigh the actual data processing time.

D. Avoid using embedded characters such as commas for numeric data types: Embedded commas can be misinterpreted as field delimiters during loading, shifting or corrupting column values. Cleaning such characters from numeric fields, or enclosing the fields in quotes, ensures accurate and efficient loading.

These best practices are designed to optimize the data loading process, ensuring that data is loaded quickly and accurately into Snowflake.
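
As a hedged illustration of these two practices, the sketch below assumes a hypothetical table customers and stage customer_stage (neither name comes from the question):

-- Define a CSV file format that tolerates quoted fields, so embedded
-- commas inside strings cannot shift values into numeric columns.
CREATE OR REPLACE FILE FORMAT csv_load_format
  TYPE = CSV
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1;

-- Load from staged files pre-split into roughly 100-250 MB chunks
-- so the warehouse can ingest them in parallel.
COPY INTO customers
  FROM @customer_stage
  FILE_FORMAT = (FORMAT_NAME = csv_load_format);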

References:

Snowflake Documentation on Data Loading Considerations

[COF-C02] SnowPro Core Certification Exam Study Guide


Question 52


A user has 10 files in a stage containing new customer data. The ingest operation completes with no errors, using the following command:

COPY INTO my_table FROM @my_stage;

The next day the user adds 10 files to the stage so that now the stage contains a mixture of new customer data and updates to the previous data. The user did not remove the 10 original files.

If the user runs the same COPY INTO command, what will happen?

A. All data from all of the files on the stage will be appended to the table.
B. Only data about new customers from the new files will be appended to the table.
C. The operation will fail with the error "uncertain files in stage".
D. All data from only the newly-added files will be appended to the table.
Suggested answer: D

Explanation:

When the COPY INTO command is executed, Snowflake consults the load metadata it keeps for the target table, which records which staged files have already been ingested (this metadata is retained for 64 days). Files loaded by the first run are skipped automatically, so running the same COPY INTO command again processes only the 10 newly added files. All data from those new files, including rows that represent updates to existing customers, is appended to the table; COPY INTO appends rows and does not merge or update existing ones. Reloading the original files would require the FORCE = TRUE option.
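
A minimal sketch of the two behaviors, reusing the names from the question (the FORCE variant is shown only for contrast and is not what the question describes):

-- Default: files already recorded in the table's load metadata are
-- skipped, so only the 10 newly added files are ingested.
COPY INTO my_table FROM @my_stage;

-- Deliberately reload every staged file, at the risk of duplicates.
COPY INTO my_table FROM @my_stage FORCE = TRUE;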

References:

Snowflake Documentation on Data Loading

SnowPro Core Certification Study Guide


Question 53


A user has unloaded data from Snowflake to a stage.

Which SQL command should be used to validate which data was loaded into the stage?

A. list @file_stage
B. show @file_stage
C. view @file_stage
D. verify @file_stage
Suggested answer: A

Explanation:

The LIST command in Snowflake displays the files in a specified stage. When a user has unloaded data to a stage, running list @file_stage shows all the files present in that stage, allowing the user to verify which data was unloaded.
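
For illustration, the stage name below comes from the question, while the path filter is an assumed addition:

-- List every file currently in the stage.
list @file_stage;

-- Optionally narrow the listing to a path prefix.
list @file_stage/unload/;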

References:

Snowflake Documentation on Stages

SnowPro Core Certification Study Guide


Question 54


What happens when a cloned table is replicated to a secondary database? (Select TWO)

A. A read-only copy of the cloned tables is stored.
B. The replication will not be successful.
C. The physical data is replicated.
D. Additional costs for storage are charged to a secondary account.
E. Metadata pointers to cloned tables are replicated.
Suggested answer: C, D

Explanation:

When a cloned table is replicated to a secondary database in Snowflake, the following occurs:

C. The physical data is replicated: Although a clone in the primary database shares storage with its source table through zero-copy metadata pointers, replication materializes the clone physically. The secondary database receives its own full copy of the data, which can be used for read-only access or failover scenarios.

D. Additional costs for storage are charged to a secondary account: Because the cloned table is replicated physically rather than as metadata pointers, it consumes real storage in the secondary account, and that storage is billed to the secondary account.

Metadata pointers to cloned tables are not replicated; this is precisely why the secondary copy occupies its own storage. Note also that the secondary database is read-only and cannot be used for write operations unless it is promoted to serve as the primary.
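
A hedged sketch of the replication flow, with hypothetical organization, account, and database names:

-- On the primary account: allow the database containing the cloned
-- table to be replicated to the secondary account.
ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.secondary_acct;

-- On the secondary account: create the replica and refresh it; the
-- clone's data arrives here as physical storage billed to this account.
CREATE DATABASE sales_db AS REPLICA OF myorg.primary_acct.sales_db;
ALTER DATABASE sales_db REFRESH;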

References:

SnowPro Core Exam Prep - Answers to Snowflake's LEVEL UP: Backup and Recovery

Snowflake SnowPro Core Certification Exam Questions Set 10


Question 55


Which data types does Snowflake support when querying semi-structured data? (Select TWO)

A. VARIANT
B. ARRAY
C. VARCHAR
D. XML
E. BLOB
Suggested answer: A, B

Explanation:

Snowflake supports querying semi-structured data using specific data types that are capable of handling the flexibility and structure of such data. The data types supported for this purpose are:

A. VARIANT: This is a universal data type that can store values of any other type, including structured and semi-structured values. It is particularly useful for handling JSON, Avro, ORC, Parquet, and XML data.

B. ARRAY: An array is an ordered list of elements, each of which can be of any data type (including VARIANT), and is used for semi-structured data that is naturally represented as a list.

These data types (together with OBJECT, which is not among the choices) form Snowflake's built-in support for semi-structured data, allowing storage, querying, and analysis of data that does not fit the traditional row-column format.
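
A small sketch of querying both types, assuming a hypothetical table events with a VARIANT column payload:

-- Cast a nested JSON field out of the VARIANT column.
SELECT payload:user.name::STRING AS user_name
FROM events;

-- Expand an ARRAY held inside the VARIANT into one row per element.
SELECT f.value::STRING AS tag
FROM events,
     LATERAL FLATTEN(input => payload:tags) f;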

References:

Snowflake Documentation on Semi-Structured Data

[COF-C02] SnowPro Core Certification Exam Study Guide


Question 56


Which of the following Snowflake objects can be shared using a secure share? (Select TWO).

A. Materialized views
B. Sequences
C. Procedures
D. Tables
E. Secure User Defined Functions (UDFs)
Suggested answer: D, E

Explanation:

Secure Data Sharing in Snowflake allows a provider account to share specific objects with other Snowflake accounts without physically copying the data, so no additional storage is consumed. Tables and Secure User Defined Functions (UDFs) are among the objects that can be shared this way. Sequences and procedures cannot be shared, and ordinary materialized views cannot be shared either; only the secure variants of views and materialized views are shareable.
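
A minimal sketch of building such a share; the share, database, function, and consumer account names are all hypothetical:

-- Create the share and grant it access to a table and a secure UDF.
CREATE SHARE customer_share;
GRANT USAGE ON DATABASE sales_db TO SHARE customer_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE customer_share;
GRANT SELECT ON TABLE sales_db.public.customers TO SHARE customer_share;
GRANT USAGE ON FUNCTION sales_db.public.mask_email(VARCHAR) TO SHARE customer_share;

-- Make the share visible to a consumer account.
ALTER SHARE customer_share ADD ACCOUNTS = myorg.consumer_acct;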

References:

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Secure Data Sharing


Question 57


Will data cached in a warehouse be lost when the warehouse is resized?

A. Possibly, if the warehouse is resized to a smaller size and the cache no longer fits.
B. Yes, because the compute resource is replaced in its entirety with a new compute resource.
C. No, because the size of the cache is independent from the warehouse size.
D. Yes, because the new compute resource will no longer have access to the cache encryption key.
Suggested answer: A

Explanation:

Resizing a Snowflake virtual warehouse changes the number of compute resources backing it. Scaling up adds servers without disturbing the cache held by the existing ones, but scaling down removes servers, and the cache associated with the removed servers is dropped. If the remaining cache can no longer hold the previously cached data, subsequent queries may have to re-read it from remote storage. So cached data can possibly be lost, specifically when the warehouse is resized to a smaller size.
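
The resize operations in question are plain ALTER statements; the warehouse name below is hypothetical:

-- Scaling up adds compute resources; cache on existing servers is kept.
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';

-- Scaling down removes compute resources, dropping the cache that
-- lived on the removed servers.
ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'SMALL';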

References:

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Virtual Warehouse Performance


Question 58


Which Snowflake partner specializes in data catalog solutions?

A. Alation
B. DataRobot
C. dbt
D. Tableau
Suggested answer: A

Explanation:

Alation is known for specializing in data catalog solutions and is a partner of Snowflake. Data catalog solutions are essential for organizations to effectively manage their metadata and make it easily accessible and understandable for users, which aligns with the capabilities provided by Alation.

References:

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake's official documentation and partner listings


Question 59


What is the MOST performant file format for loading data in Snowflake?

A. CSV (Unzipped)
B. Parquet
C. CSV (Gzipped)
D. ORC
Suggested answer: C

Explanation:

Snowflake's data loading guidance indicates that gzip-compressed CSV is the fastest format to ingest. Delimited files parse cheaply and split naturally across parallel load threads, and gzip compression reduces the bytes transferred from the stage without adding significant decode cost. Columnar formats such as Parquet and ORC are optimized for analytical queries rather than ingestion and generally load more slowly, making gzipped CSV the most performant choice for loading data into Snowflake.
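
A hedged sketch of declaring the format for a load; the format, table, and stage names are hypothetical (Snowflake can also auto-detect gzip compression):

-- Gzip-compressed CSV, declared explicitly.
CREATE OR REPLACE FILE FORMAT gzip_csv_format
  TYPE = CSV
  COMPRESSION = GZIP;

COPY INTO my_table
  FROM @my_stage
  FILE_FORMAT = (FORMAT_NAME = gzip_csv_format);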

References:

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Loading


Question 60


Which COPY INTO option outputs the data into one file?

A. SINGLE=TRUE
B. MAX_FILE_NUMBER=1
C. FILE_NUMBER=1
D. MULTIPLE=FALSE
Suggested answer: A

Explanation:

The COPY INTO <location> command used for unloading supports the SINGLE copy option. Setting SINGLE=TRUE instructs Snowflake to write the unloaded data into exactly one file instead of splitting the output across multiple files. MAX_FILE_NUMBER, FILE_NUMBER, and MULTIPLE are not valid COPY INTO options; the related MAX_FILE_SIZE option only controls the size threshold at which output is split when SINGLE=FALSE.
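
A minimal sketch of a single-file unload; the stage path and table name are hypothetical:

-- Unload the table into exactly one file on the stage.
COPY INTO @my_stage/export/customers.csv.gz
  FROM my_table
  FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
  SINGLE = TRUE
  MAX_FILE_SIZE = 4900000000;  -- raise the per-file size cap (in bytes)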

References:

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Unloading
