
Snowflake SnowPro Core Practice Test - Questions Answers, Page 16

Which statement is true about running tasks in Snowflake?

A. A task can be called using a CALL statement to run a set of predefined SQL commands.
B. A task allows a user to execute a single SQL statement/command using a predefined schedule.
C. A task allows a user to execute a set of SQL commands on a predefined schedule.
D. A task can be executed using a SELECT statement to run a predefined SQL command.
Suggested answer: B

Explanation:

In Snowflake, a task allows a user to execute a single SQL statement/command using a predefined schedule (B). Tasks are used to automate the execution of SQL statements at scheduled intervals.
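For illustration, a minimal task definition (warehouse, task, and table names are hypothetical) that runs one statement on a schedule could look like this:

    -- Hypothetical task that runs a single SQL statement every 60 minutes
    CREATE TASK refresh_daily_totals
      WAREHOUSE = my_wh
      SCHEDULE = '60 MINUTE'
    AS
      INSERT INTO daily_totals SELECT CURRENT_DATE, COUNT(*) FROM orders;

    -- Tasks are created in a suspended state and must be resumed to run
    ALTER TASK refresh_daily_totals RESUME;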

In an auto-scaling multi-cluster virtual warehouse with the setting SCALING_POLICY = ECONOMY enabled, when is another cluster started?

A. When the system has enough load for 2 minutes
B. When the system has enough load for 6 minutes
C. When the system has enough load for 8 minutes
D. When the system has enough load for 10 minutes
Suggested answer: B

Explanation:

In an auto-scaling multi-cluster virtual warehouse with SCALING_POLICY set to ECONOMY, another cluster is started only when the system estimates there is enough query load to keep the new cluster busy for at least 6 minutes (B). This policy conserves credits by keeping running clusters fully loaded and starting additional clusters only when the sustained load justifies the added cost.
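As a sketch (warehouse name and sizes are hypothetical), the scaling policy is set when creating or altering a multi-cluster warehouse:

    -- Hypothetical multi-cluster warehouse using the ECONOMY scaling policy
    CREATE WAREHOUSE analytics_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4
      SCALING_POLICY = 'ECONOMY';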

Which of the following describes a Snowflake stored procedure?

A. They can be created as secure and hide the underlying metadata from the user.
B. They can only access tables from a single database.
C. They can contain only a single SQL statement.
D. They can be created to run with a caller's rights or an owner's rights.
Suggested answer: D

Explanation:

Snowflake stored procedures can be created to execute with the privileges of the role that owns the procedure (owner's rights) or with the privileges of the role that calls the procedure (caller's rights). This allows for flexibility in managing security and access control within Snowflake.
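A minimal sketch of the two execution modes (procedure, table, and column names are hypothetical):

    -- Hypothetical procedure that runs with the caller's rights;
    -- EXECUTE AS OWNER (the default) would run with the owner's rights instead
    CREATE PROCEDURE purge_old_rows()
      RETURNS STRING
      LANGUAGE SQL
      EXECUTE AS CALLER
    AS
    $$
    BEGIN
      DELETE FROM demo_table WHERE created_at < DATEADD(day, -30, CURRENT_TIMESTAMP());
      RETURN 'done';
    END;
    $$;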

Which columns are part of the result set of the Snowflake LATERAL FLATTEN command? (Choose two.)

A. CONTENT
B. PATH
C. BYTE_SIZE
D. INDEX
E. DATATYPE
Suggested answer: B, D

Explanation:

The LATERAL FLATTEN command in Snowflake produces a result set whose columns are SEQ, KEY, PATH, INDEX, VALUE, and THIS. Of the listed options, PATH and INDEX are part of this result set: PATH indicates the path to the element within the data structure being flattened, and INDEX is the index of the element if it is an array (otherwise NULL).
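For example (table and column names are hypothetical), these columns can be selected directly from the flattened output:

    -- Hypothetical query flattening a JSON array stored in a VARIANT column
    SELECT f.path, f.index, f.value
    FROM raw_events,
         LATERAL FLATTEN(input => raw_events.payload:items) f;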

Which Snowflake function will interpret an input string as a JSON document, and produce a VARIANT value?

A. parse_json()
B. json_extract_path_text()
C. object_construct()
D. flatten
Suggested answer: A

Explanation:

The parse_json() function in Snowflake interprets an input string as a JSON document and produces a VARIANT value containing the parsed document. This function is specifically designed for parsing strings that contain valid JSON.
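For example:

    -- PARSE_JSON turns a JSON string into a VARIANT value
    SELECT PARSE_JSON('{"name": "Snowflake", "ids": [1, 2, 3]}') AS v;

    -- The resulting VARIANT can be traversed with path notation
    SELECT PARSE_JSON('{"ids": [1, 2, 3]}'):ids[0] AS first_id;   -- returns 1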

How are serverless features billed?

A. Per second multiplied by an automatic sizing for the job
B. Per minute multiplied by an automatic sizing for the job, with a minimum of one minute
C. Per second multiplied by the size, as determined by the SERVERLESS_FEATURES_SIZE account parameter
D. Serverless features are not billed, unless the total cost for the month exceeds 10% of the warehouse credits on the account
Suggested answer: B

Explanation:

Serverless features in Snowflake are billed based on the time they are used, measured in minutes. The cost is calculated by multiplying the duration of the job by an automatic sizing determined by Snowflake, with a minimum billing increment of one minute. This means that even if a serverless feature is used for less than a minute, it will still be billed for the full minute.

Which Snowflake architectural layer is responsible for a query execution plan?

A. Compute
B. Data storage
C. Cloud services
D. Cloud provider
Suggested answer: C

Explanation:

In Snowflake's architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles all the coordination, optimization, and management tasks, including query parsing, optimization, and compilation into an execution plan that can be processed by the Compute layer.
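The plan produced by this layer can be inspected without running the query by using EXPLAIN (table and column names are hypothetical):

    -- EXPLAIN returns the logical execution plan generated by the cloud services layer
    EXPLAIN
    SELECT c.customer_id, SUM(o.amount)
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    GROUP BY c.customer_id;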

Which SQL commands, when committed, will consume a stream and advance the stream offset? (Choose two.)

A. UPDATE TABLE FROM STREAM
B. SELECT FROM STREAM
C. INSERT INTO TABLE SELECT FROM STREAM
D. ALTER TABLE AS SELECT FROM STREAM
E. BEGIN COMMIT
Suggested answer: A, C

Explanation:

A stream is consumed, and its offset advanced, when the stream is used as the source of a DML statement (such as INSERT, UPDATE, DELETE, or MERGE) and that statement is committed. 'UPDATE TABLE FROM STREAM' and 'INSERT INTO TABLE SELECT FROM STREAM' therefore consume the stream and move the offset forward, whereas a plain SELECT FROM STREAM does not.
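A minimal sketch (table, stream, and column names are hypothetical); the offset advances only when the consuming DML statement is committed:

    -- Hypothetical stream over a source table
    CREATE STREAM orders_stream ON TABLE orders;

    BEGIN;
    INSERT INTO orders_history (order_id, amount)
      SELECT order_id, amount FROM orders_stream;   -- consumes the stream
    COMMIT;                                          -- offset advances on commit

    -- A plain SELECT reads the stream but does not consume it
    SELECT * FROM orders_stream;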

Which methods can be used to delete staged files from a Snowflake stage? (Choose two.)

A. Use the DROP <file> command after the load completes.
B. Specify the TEMPORARY option when creating the file format.
C. Specify the PURGE copy option in the COPY INTO <table> command.
D. Use the REMOVE command after the load completes.
E. Use the DELETE LOAD HISTORY command after the load completes.
Suggested answer: C, D

Explanation:

To delete staged files from a Snowflake stage, you can specify the PURGE copy option in the COPY INTO <table> command, which automatically deletes the files after they have been successfully loaded. Alternatively, you can use the REMOVE command after the load completes to manually delete the files from the stage.

Reference = COPY INTO <table>, REMOVE
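Both approaches, sketched with hypothetical table and stage names:

    -- Option 1: delete staged files automatically after a successful load
    COPY INTO my_table
      FROM @my_stage/data/
      FILE_FORMAT = (TYPE = 'CSV')
      PURGE = TRUE;

    -- Option 2: delete staged files manually after the load completes
    REMOVE @my_stage/data/ PATTERN = '.*[.]csv';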

Assume there is a table consisting of five micro-partitions with values ranging from A to Z.

Which diagram indicates a well-clustered table?

A. Option A
B. Option B
C. Option C
D. Option D
(The diagrams for Options A through D are not shown.)
Suggested answer: C

Explanation:

A well-clustered table in Snowflake stores similar values together, so each micro-partition covers a narrow range of the clustering key and the ranges of different micro-partitions overlap as little as possible. This allows the query optimizer to prune micro-partitions and reduces the amount of data scanned. The diagram in option C shows the least overlap between the value ranges of the five micro-partitions, indicating a well-clustered table.

Reference = Snowflake Micro-partitions & Table Clustering
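As a sketch (table and column names are hypothetical), clustering can be influenced and inspected as follows:

    -- Define a clustering key so related values are stored together in micro-partitions
    ALTER TABLE sales CLUSTER BY (sale_date);

    -- Inspect clustering quality; lower depth and less overlap indicate better clustering
    SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date)');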
