
Snowflake COF-C02 Practice Test - Questions Answers, Page 14


What are the correct parameters for time travel and fail-safe in the Snowflake Enterprise Edition?

A. Default Time Travel Retention is set to 0 days. Maximum Time Travel Retention is 30 days. Fail Safe retention time is 1 day.
B. Default Time Travel Retention is set to 1 day. Maximum Time Travel Retention is 365 days. Fail Safe retention time is 7 days.
C. Default Time Travel Retention is set to 0 days. Maximum Time Travel Retention is 90 days. Fail Safe retention time is 7 days.
D. Default Time Travel Retention is set to 1 day. Maximum Time Travel Retention is 90 days. Fail Safe retention time is 7 days.
E. Default Time Travel Retention is set to 7 days. Maximum Time Travel Retention is 1 day. Fail Safe retention time is 90 days.
F. Default Time Travel Retention is set to 90 days. Maximum Time Travel Retention is 7 days. Fail Safe retention time is 356 days.
Suggested answer: D

Explanation:

In Snowflake Enterprise Edition, the default Time Travel retention is 1 day, the maximum Time Travel retention for permanent objects can be set up to 90 days, and the Fail-safe retention period is fixed at 7 days.
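As an illustrative sketch (the table name MY_TABLE is hypothetical), the Time Travel retention can be inspected and changed with SQL, while Fail-safe is fixed and has no corresponding parameter:

-- Inspect the current Time Travel retention at the account level
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN ACCOUNT;

-- Raise retention on a permanent table up to the Enterprise maximum of 90 days
ALTER TABLE MY_TABLE SET DATA_RETENTION_TIME_IN_DAYS = 90;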

Which of the following objects are contained within a schema? (Choose two.)

A. Role
B. Stream
C. Warehouse
D. External table
E. User
F. Share
Suggested answer: B, D

Explanation:

In Snowflake, a schema is a logical grouping of database objects, which can include streams and external tables. A stream is an object that allows users to query data that has changed in specified tables or views, and an external table is a table that references data stored outside of Snowflake. Roles, users, warehouses, and shares are account-level objects and are not contained within a schema. References: SHOW OBJECTS; Database, Schema, & Share DDL
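As a sketch of why streams and external tables are schema-level objects (the database, schema, stage, and table names here are hypothetical), both are created with a fully qualified database.schema name:

-- A stream is created inside a schema and tracks changes on a table
CREATE STREAM MY_DB.MY_SCHEMA.ORDERS_STREAM ON TABLE MY_DB.MY_SCHEMA.ORDERS;

-- An external table is also created inside a schema; it references files in an external stage
CREATE EXTERNAL TABLE MY_DB.MY_SCHEMA.ORDERS_EXT
  LOCATION = @MY_DB.MY_SCHEMA.MY_STAGE/orders/
  FILE_FORMAT = (TYPE = PARQUET);

By contrast, CREATE ROLE, CREATE USER, CREATE WAREHOUSE, and CREATE SHARE all operate at the account level and take no database or schema qualifier.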

Which of the following statements describe features of Snowflake data caching? (Choose two.)

A. When a virtual warehouse is suspended, the data cache is saved on the remote storage layer.
B. When the data cache is full, the least-recently used data will be cleared to make room.
C. A user can only access their own queries from the query result cache.
D. A user must set USE_METADATA_CACHE to TRUE to use the metadata cache in queries.
E. The RESULT_SCAN table function can access and filter the contents of the query result cache.
Suggested answer: B, E

Explanation:

Snowflake's data caching features include the ability to clear the least-recently used data when the data cache is full, to make room for new data. Additionally, the RESULT_SCAN table function can access and filter the contents of the query result cache, allowing users to retrieve and work with the results of previous queries. The other statements are incorrect: the data cache is not saved on the remote storage layer when a virtual warehouse is suspended, result-cache entries can be reused for identical queries run by other users with sufficient privileges, and there is no USE_METADATA_CACHE setting in Snowflake. References: Caching in the Snowflake Cloud Data Platform; Optimizing the warehouse cache
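A minimal sketch of the RESULT_SCAN pattern, using the output of a SHOW command (which cannot otherwise be filtered with a WHERE clause):

-- Populate the result cache
SHOW WAREHOUSES;

-- Read the cached result of the previous statement and filter it with SQL
SELECT "name", "size"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "state" = 'SUSPENDED';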

A table needs to be loaded. The input data is in JSON format and is a concatenation of multiple JSON documents. The file size is 3 GB, and a Small warehouse is being used. The following COPY INTO command was executed:

COPY INTO SAMPLE FROM @~/SAMPLE.JSON (TYPE=JSON)

The load failed with this error:

Max LOB size (16777216) exceeded, actual size of parsed column is 17894470.

How can this issue be resolved?

A. Compress the file and load the compressed file.
B. Split the file into multiple files in the recommended size range (100 MB - 250 MB).
C. Use a larger-sized warehouse.
D. Set STRIP_OUTER_ARRAY=TRUE in the COPY INTO command.
Suggested answer: B

Explanation:

The error "Max LOB size (16777216) exceeded" indicates that a single parsed column value exceeds the 16 MB maximum Snowflake allows. To resolve the issue, the file should be split into multiple smaller files within the recommended size range of 100 MB to 250 MB, so that each JSON document in the files stays under the maximum LOB size. Compressing the file, using a larger warehouse, or setting STRIP_OUTER_ARRAY=TRUE does not reduce the size of an individual parsed column. References: COPY INTO Error during Structured Data Load: "Max LOB size (16777216) exceeded..."
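A sketch of the recommended approach (the stage path and file-name pattern are hypothetical): split the 3 GB file into smaller parts outside Snowflake, stage them, and load them all with one COPY INTO using a pattern:

-- Load all split parts; smaller files also parallelize better across the warehouse
COPY INTO SAMPLE
FROM @~
PATTERN = '.*sample_part_.*[.]json'
FILE_FORMAT = (TYPE = JSON);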

Which of the following describes a Snowflake stored procedure?

A. They can be created as secure and hide the underlying metadata from the user.
B. They can only access tables from a single database.
C. They can contain only a single SQL statement.
D. They can be created to run with a caller's rights or an owner's rights.
Suggested answer: D

Explanation:

Snowflake stored procedures can be created to execute with the privileges of the role that owns the procedure (owner's rights) or with the privileges of the role that calls the procedure (caller's rights). This allows for flexibility in managing security and access control within Snowflake.
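A minimal Snowflake Scripting sketch (the procedure and table names are hypothetical); the two security models differ only in the EXECUTE AS clause:

CREATE OR REPLACE PROCEDURE ADD_AUDIT_ROW(MSG VARCHAR)
RETURNS VARCHAR
LANGUAGE SQL
EXECUTE AS CALLER  -- runs with the caller's privileges; EXECUTE AS OWNER is the default
AS
$$
BEGIN
  -- This INSERT succeeds or fails based on the caller's privileges on AUDIT_LOG
  INSERT INTO AUDIT_LOG (EVENT_TIME, MESSAGE) VALUES (CURRENT_TIMESTAMP(), :MSG);
  RETURN 'logged';
END;
$$;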

Which columns are part of the result set of the Snowflake LATERAL FLATTEN command? (Choose two.)

A. CONTENT
B. PATH
C. BYTE_SIZE
D. INDEX
E. DATATYPE
Suggested answer: B, D

Explanation:

The LATERAL FLATTEN command in Snowflake produces a result set with the columns SEQ, KEY, PATH, INDEX, VALUE, and THIS, so PATH and INDEX are both included. PATH indicates the path to the element within the data structure being flattened, and INDEX gives the position of the element when it comes from an array.
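A quick, self-contained illustration of PATH and INDEX in the FLATTEN output:

-- Each FLATTEN output row carries SEQ, KEY, PATH, INDEX, VALUE, and THIS
SELECT f.path, f.index, f.value
FROM (SELECT PARSE_JSON('["a", "b", "c"]') AS v) t,
     LATERAL FLATTEN(INPUT => t.v) f;

-- For this array, PATH is '[0]', '[1]', '[2]' and INDEX is 0, 1, 2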

Which Snowflake function will interpret an input string as a JSON document, and produce a VARIANT value?

A. parse_json()
B. json_extract_path_text()
C. object_construct()
D. flatten
Suggested answer: A

Explanation:

The parse_json() function in Snowflake interprets an input string as a JSON document and produces a VARIANT value containing that document. It is specifically designed for parsing strings that contain valid JSON.
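A quick illustration:

-- PARSE_JSON returns a VARIANT; invalid JSON raises an error
-- (TRY_PARSE_JSON returns NULL instead of erroring on invalid input)
SELECT PARSE_JSON('{"name": "Snowflake", "founded": 2012}') AS doc,
       PARSE_JSON('{"name": "Snowflake", "founded": 2012}'):founded::NUMBER AS founded;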

How are serverless features billed?

A. Per second multiplied by an automatic sizing for the job
B. Per minute multiplied by an automatic sizing for the job, with a minimum of one minute
C. Per second multiplied by the size, as determined by the SERVERLESS_FEATURES_SIZE account parameter
D. Serverless features are not billed unless the total cost for the month exceeds 10% of the warehouse credits on the account
Suggested answer: B

Explanation:

Serverless features in Snowflake are billed based on the time they are used, measured in minutes. The cost is calculated by multiplying the duration of the job by an automatic sizing determined by Snowflake, with a minimum billing increment of one minute. This means that even if a serverless feature is used for less than a minute, it will still be billed for the full minute.
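As a sketch, serverless consumption can be reviewed afterwards in the Account Usage views; the example below uses SERVERLESS_TASK_HISTORY, which covers serverless tasks (other serverless features have analogous views):

-- Credits consumed by serverless tasks over the last 30 days
SELECT TASK_NAME, START_TIME, END_TIME, CREDITS_USED
FROM SNOWFLAKE.ACCOUNT_USAGE.SERVERLESS_TASK_HISTORY
WHERE START_TIME >= DATEADD('day', -30, CURRENT_TIMESTAMP())
ORDER BY CREDITS_USED DESC;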

Which Snowflake architectural layer is responsible for a query execution plan?

A. Compute
B. Data storage
C. Cloud services
D. Cloud provider
Suggested answer: C

Explanation:

In Snowflake's architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles all the coordination, optimization, and management tasks, including query parsing, optimization, and compilation into an execution plan that can be processed by the Compute layer.
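The plan produced by the cloud services layer can be inspected with the EXPLAIN command, which compiles the query without executing it (the sample database shown below is available in most accounts):

-- EXPLAIN returns the compiled execution plan without running the query
EXPLAIN
SELECT c_name
FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER
WHERE c_custkey = 42;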

Which SQL commands, when committed, will consume a stream and advance the stream offset? (Choose two.)

A. UPDATE TABLE FROM STREAM
B. SELECT FROM STREAM
C. INSERT INTO TABLE SELECT FROM STREAM
D. ALTER TABLE AS SELECT FROM STREAM
E. BEGIN COMMIT
Suggested answer: A, C

Explanation:

A stream is consumed, and its offset advanced, when the stream is used as a source in a DML statement inside a committed transaction. Specifically, 'UPDATE TABLE FROM STREAM' and 'INSERT INTO TABLE SELECT FROM STREAM' consume the stream and move the offset forward; a plain SELECT FROM STREAM returns the change records but leaves the offset unchanged.

References: [COF-C02] SnowPro Core Certification Exam Study Guide
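A minimal sketch of the consuming pattern (the table, stream, and column names are hypothetical); the offset only advances when the enclosing transaction commits:

BEGIN;

-- DML that reads from the stream consumes it on COMMIT
INSERT INTO ORDERS_HISTORY (ORDER_ID, AMOUNT, CHANGE_TYPE)
SELECT ORDER_ID, AMOUNT, METADATA$ACTION
FROM ORDERS_STREAM;

COMMIT;

-- A plain SELECT reads the change records but leaves the offset unchanged
SELECT COUNT(*) FROM ORDERS_STREAM;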
