Snowflake COF-C02 Practice Test - Questions Answers, Page 14

Question 131

What are the correct parameters for Time Travel and Fail-safe in the Snowflake Enterprise Edition?

A. Default Time Travel Retention is set to 0 days. Maximum Time Travel Retention is 30 days. Fail-safe retention time is 1 day.
B. Default Time Travel Retention is set to 1 day. Maximum Time Travel Retention is 365 days. Fail-safe retention time is 7 days.
C. Default Time Travel Retention is set to 0 days. Maximum Time Travel Retention is 90 days. Fail-safe retention time is 7 days.
D. Default Time Travel Retention is set to 1 day. Maximum Time Travel Retention is 90 days. Fail-safe retention time is 7 days.
E. Default Time Travel Retention is set to 7 days. Maximum Time Travel Retention is 1 day. Fail-safe retention time is 90 days.
F. Default Time Travel Retention is set to 90 days. Maximum Time Travel Retention is 7 days. Fail-safe retention time is 356 days.
Suggested answer: D

Explanation:

In the Snowflake Enterprise Edition, the default Time Travel retention period is 1 day, the maximum Time Travel retention period can be set to up to 90 days, and the Fail-safe retention period is a fixed 7 days.
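
To make this concrete, here is a minimal sketch of working with these settings; the database, schema, and table names are hypothetical:

-- On Enterprise Edition the retention period can be set anywhere from 0 to 90 days
-- per object; 1 day is the default. The 7-day Fail-safe period is not configurable.
ALTER TABLE sales_db.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 90;

-- Query the table as it existed two hours ago, anywhere within the retention window.
SELECT * FROM sales_db.public.orders AT(OFFSET => -60*60*2);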

Question 132

Which of the following objects are contained within a schema? (Choose two.)

A. Role
B. Stream
C. Warehouse
D. External table
E. User
F. Share
Suggested answer: B, D

Explanation:

In Snowflake, a schema is a logical grouping of database objects, which can include streams and external tables. A stream is an object that allows users to query the changes made to specified tables or views, and an external table is a table that references data stored outside of Snowflake. Roles, warehouses, users, and shares are account-level objects and are not contained within a schema. References: SHOW OBJECTS; Database, Schema, & Share DDL
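
As a sketch, both object types are created at the schema level; the names and the external stage below are hypothetical:

-- A stream is a schema-level object that tracks changes to a table.
CREATE STREAM sales_db.public.orders_stream ON TABLE sales_db.public.orders;

-- An external table is also schema-level; it references files in an external stage.
CREATE EXTERNAL TABLE sales_db.public.orders_ext
  LOCATION = @sales_db.public.my_ext_stage/orders/
  FILE_FORMAT = (TYPE = PARQUET);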

Question 133

Which of the following statements describe features of Snowflake data caching? (Choose two.)

A. When a virtual warehouse is suspended, the data cache is saved on the remote storage layer.
B. When the data cache is full, the least-recently used data will be cleared to make room.
C. A user can only access their own queries from the query result cache.
D. A user must set USE_METADATA_CACHE to TRUE to use the metadata cache in queries.
E. The RESULT_SCAN table function can access and filter the contents of the query result cache.
Suggested answer: B, E

Explanation:

Snowflake's data caching features include the ability to clear the least-recently used data when the data cache is full, to make room for new data. Additionally, the RESULT_SCAN table function can access and filter the contents of the query result cache, allowing users to retrieve and work with the results of previous queries. The other statements are incorrect: the data cache is dropped, not saved to remote storage, when a virtual warehouse is suspended; other users can reuse results from the query result cache (provided they have the required privileges); and there is no setting called USE_METADATA_CACHE in Snowflake. References: Caching in the Snowflake Cloud Data Platform; Optimizing the warehouse cache
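
A minimal sketch of the RESULT_SCAN behavior described above (any prior query works; SHOW WAREHOUSES is used here because its output cannot otherwise be filtered with WHERE):

SHOW WAREHOUSES;

-- RESULT_SCAN exposes the cached result of the previous statement as a table,
-- so it can be filtered and joined like any other row source.
SELECT "name", "size"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "state" = 'SUSPENDED';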

Question 134

A table needs to be loaded. The input data is in JSON format and is a concatenation of multiple JSON documents. The file size is 3 GB. A Small warehouse is being used. The following COPY INTO command was executed:

COPY INTO SAMPLE FROM @~/SAMPLE.JSON (TYPE=JSON)

The load failed with this error:

Max LOB size (16777216) exceeded, actual size of parsed column is 17894470.

How can this issue be resolved?

A. Compress the file and load the compressed file.
B. Split the file into multiple files in the recommended size range (100 MB - 250 MB).
C. Use a larger-sized warehouse.
D. Set STRIP_OUTER_ARRAY=TRUE in the COPY INTO command.
Suggested answer: B

Explanation:

The error "Max LOB size (16777216) exceeded" indicates that a parsed column value exceeds the maximum size allowed for a single value in Snowflake, which is 16 MB. To resolve the issue, the file should be split into multiple smaller files within the recommended size range of 100 MB to 250 MB, so that each JSON document parsed from the files stays below the maximum LOB size. Compressing the file, using a larger-sized warehouse, or setting STRIP_OUTER_ARRAY=TRUE does not address the oversized column value. References: COPY INTO Error during Structured Data Load: "Max LOB size (16777216) exceeded..."
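
A sketch of the recommended fix, assuming the 3 GB file has been split locally into files named sample_00.json, sample_01.json, and so on (these names are hypothetical):

-- Stage the split files (wildcards are supported by PUT when run from SnowSQL).
PUT file:///tmp/sample_*.json @~/split/;

-- Load all staged files with a single COPY statement.
COPY INTO SAMPLE
FROM @~/split/
FILE_FORMAT = (TYPE = JSON)
PATTERN = '.*sample_.*[.]json';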

Question 135

Which of the following describes a Snowflake stored procedure?

A. They can be created as secure and hide the underlying metadata from the user.
B. They can only access tables from a single database.
C. They can contain only a single SQL statement.
D. They can be created to run with a caller's rights or an owner's rights.
Suggested answer: D

Explanation:

Snowflake stored procedures can be created to execute with the privileges of the role that owns the procedure (owner's rights) or with the privileges of the role that calls the procedure (caller's rights). This allows for flexibility in managing security and access control within Snowflake.
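
A minimal sketch of the syntax; the procedure body and table name are hypothetical:

-- EXECUTE AS CALLER runs with the calling role's privileges;
-- EXECUTE AS OWNER (the default) runs with the owning role's privileges.
CREATE OR REPLACE PROCEDURE purge_old_audit_rows()
  RETURNS STRING
  LANGUAGE SQL
  EXECUTE AS CALLER
AS
$$
BEGIN
  DELETE FROM audit_log WHERE event_time < DATEADD(day, -30, CURRENT_TIMESTAMP());
  RETURN 'done';
END;
$$;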

Question 136

Which columns are part of the result set of the Snowflake LATERAL FLATTEN command? (Choose two.)

A. CONTENT
B. PATH
C. BYTE_SIZE
D. INDEX
E. DATATYPE
Suggested answer: B, D

Explanation:

The LATERAL FLATTEN command in Snowflake produces a result set that includes several columns, among which PATH and INDEX are included. PATH indicates the path to the element within the data structure that is being flattened, and INDEX is the index of the element if it is an array; otherwise it is NULL.
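
A small self-contained example (the JSON literal is arbitrary); the full FLATTEN output also includes SEQ, KEY, VALUE, and THIS:

SELECT f.path, f.index, f.value
FROM (SELECT PARSE_JSON('{"items": ["a", "b"]}') AS doc) t,
     LATERAL FLATTEN(input => t.doc:items) f;

-- Returns PATH values '[0]' and '[1]' and INDEX values 0 and 1
-- (INDEX is NULL when the flattened element is not an array item).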

Question 137

Which Snowflake function will interpret an input string as a JSON document, and produce a VARIANT value?

A. parse_json()
B. json_extract_path_text()
C. object_construct()
D. flatten
Suggested answer: A

Explanation:

The parse_json() function in Snowflake interprets an input string as a JSON document and produces a VARIANT value containing that document. The function is specifically designed for parsing strings that contain valid JSON.
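
A one-line illustration; the JSON literal is arbitrary:

-- The result is a VARIANT that can be traversed with the : and :: operators.
SELECT PARSE_JSON('{"city": "Berlin", "tags": ["a", "b"]}') AS v,
       PARSE_JSON('{"city": "Berlin"}'):city::STRING AS city;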

Question 138

How are serverless features billed?

A. Per second multiplied by an automatic sizing for the job
B. Per minute multiplied by an automatic sizing for the job, with a minimum of one minute
C. Per second multiplied by the size, as determined by the SERVERLESS_FEATURES_SIZE account parameter
D. Serverless features are not billed, unless the total cost for the month exceeds 10% of the warehouse credits on the account
Suggested answer: A

Explanation:

Serverless features in Snowflake (such as Snowpipe, automatic clustering, and materialized view maintenance) use Snowflake-managed compute resources. Their usage is billed per second, multiplied by an automatic sizing that Snowflake determines for the job; there is no user-selected warehouse size and no one-minute minimum as with user-managed virtual warehouses.
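
Actual serverless consumption can be reviewed after the fact. As a sketch, automatic clustering (one serverless feature) reports its credit usage in ACCOUNT_USAGE, assuming the active role can read the SNOWFLAKE database:

-- Serverless credits consumed by Automatic Clustering, per table, over the last week.
SELECT table_name,
       SUM(credits_used) AS credits
FROM SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY
WHERE start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
GROUP BY table_name
ORDER BY credits DESC;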

Question 139

Which Snowflake architectural layer is responsible for a query execution plan?

A. Compute
B. Data storage
C. Cloud services
D. Cloud provider
Suggested answer: C

Explanation:

In Snowflake's architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles coordination and management tasks across the platform, including query parsing, optimization, and compilation into an execution plan that is then processed by the Compute layer.
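
This can be observed directly: EXPLAIN asks the cloud services layer to compile and return the plan without executing the query (the table names below are hypothetical):

EXPLAIN
SELECT c.name, SUM(o.amount)
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name;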

Question 140

Which SQL commands, when committed, will consume a stream and advance the stream offset? (Choose two.)

A. UPDATE TABLE FROM STREAM
B. SELECT FROM STREAM
C. INSERT INTO TABLE SELECT FROM STREAM
D. ALTER TABLE AS SELECT FROM STREAM
E. BEGIN COMMIT
Suggested answer: A, C

Explanation:

A stream is consumed, and its offset advanced, when the stream is used as a source in a DML statement within a committed transaction. 'UPDATE ... FROM stream' and 'INSERT INTO ... SELECT FROM stream' are such DML operations, so committing them moves the offset forward; a plain SELECT reads the stream's pending changes without consuming them.

References: [COF-C02] SnowPro Core Certification Exam Study Guide
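
A minimal sketch of the difference (table and column names are hypothetical):

CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- A plain SELECT shows the pending changes but does NOT advance the offset.
SELECT * FROM orders_stream;

-- A committed DML statement that reads from the stream consumes it.
BEGIN;
INSERT INTO orders_history (id, amount)
  SELECT id, amount FROM orders_stream;
COMMIT;  -- the offset advances here, and the stream is empty afterwards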
