ExamGecko
Snowflake SnowPro Core Practice Test - Questions Answers, Page 58


At what level is the MIN_DATA_RETENTION_TIME_IN_DAYS parameter set?

A. Account

B. Database

C. Schema

D. Table

Suggested answer: A

Explanation:

The MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the account level in Snowflake. It defines a minimum number of days of Time Travel retention for the account: the effective retention period for any object is the greater of this value and the object's DATA_RETENTION_TIME_IN_DAYS setting. Time Travel allows users to access and query data as it existed at previous points in time.

Here's how to understand and adjust this parameter:

Purpose of MIN_DATA_RETENTION_TIME_IN_DAYS: This parameter is crucial for managing data lifecycle and compliance requirements within Snowflake. It determines the minimum time frame for which you can perform operations like restoring deleted objects or accessing historical versions of data.

Setting the Parameter: Only account administrators can set or modify this parameter. It is done at the account level, impacting all databases and schemas within the account. The setting can be adjusted based on the organization's data retention policy.

Adjusting the Parameter:

To view the current setting, use:

SHOW PARAMETERS LIKE 'MIN_DATA_RETENTION_TIME_IN_DAYS';

To change the setting, an account administrator can execute:

ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = <number_of_days>;

What is the MINIMUM size of a table for which Snowflake recommends considering adding a clustering key?

A. 1 Kilobyte (KB)

B. 1 Megabyte (MB)

C. 1 Gigabyte (GB)

D. 1 Terabyte (TB)

Suggested answer: D

Explanation:

Snowflake recommends considering adding a clustering key to a table when its size reaches 1 Terabyte (TB) or larger. Clustering keys help optimize the storage and query performance by organizing the data in a table based on the specified columns. This is particularly beneficial for large tables where data retrieval can become inefficient without proper clustering.

Why Clustering Keys Are Important: Clustering keys ensure that data stored in Snowflake is physically ordered in a way that aligns with the most frequent access patterns, thereby reducing the amount of scanned data during queries and improving performance.

Recommendation Basis: The recommendation for tables of size 1 TB or larger is based on the observation that smaller tables generally do not benefit as much from clustering, given Snowflake's architecture. However, as tables grow in size, the benefits of clustering become more pronounced.

Implementing Clustering Keys:

To set a clustering key for a table, you can use the CLUSTER BY clause during table creation or alter an existing table to add it:

CREATE TABLE my_table (... ) CLUSTER BY (column1, column2);

Or for an existing table:

ALTER TABLE my_table CLUSTER BY (column1, column2);

Which types of subqueries does Snowflake support? (Select TWO).

A. Uncorrelated scalar subqueries in WHERE clauses

B. Uncorrelated scalar subqueries in any place that a value expression can be used

C. EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be uncorrelated only

D. EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated only

E. EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated or uncorrelated

Suggested answer: B, E

Explanation:

Snowflake supports a variety of subquery types, including both correlated and uncorrelated subqueries. The correct answers are B and E, which highlight Snowflake's flexibility in handling subqueries within SQL queries.

Uncorrelated Scalar Subqueries: These are subqueries that can execute independently of the outer query. They return a single value and can be used anywhere a value expression is allowed, offering great flexibility in SQL queries.

EXISTS, ANY/ALL, and IN Subqueries: These subqueries are used in WHERE clauses to filter the results of the main query based on the presence or absence of matching rows in a subquery. Snowflake supports both correlated and uncorrelated versions of these subqueries, providing powerful tools for complex data analysis scenarios.

Examples and Usage:

Uncorrelated Scalar Subquery:

SELECT * FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);

Correlated EXISTS Subquery:

SELECT * FROM orders o WHERE EXISTS (SELECT 1 FROM customer c WHERE c.id = o.customer_id AND c.region = 'North America');

Which Snowflake data governance feature can support auditing when a user query reads column data?

A. Access History

B. Data classification

C. Column-level security

D. Object dependencies

Suggested answer: A

Explanation:

Access History in Snowflake is a feature designed to support auditing by tracking access to data within Snowflake, including when a user's query reads column data. It provides detailed information on queries executed, including the user who ran the query, the query text, and the objects (e.g., tables, views) accessed by the query. This feature is instrumental for auditing purposes, helping organizations to monitor and audit data access for security and compliance.
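Column-level reads can be audited by flattening the BASE_OBJECTS_ACCESSED column of the SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY view. A minimal sketch (the seven-day filter is illustrative):

SELECT ah.query_id,
       ah.query_start_time,
       ah.user_name,
       obj.value:"objectName"::STRING AS object_name,
       col.value:"columnName"::STRING AS column_name
FROM snowflake.account_usage.access_history ah,
     LATERAL FLATTEN(input => ah.base_objects_accessed) obj,
     LATERAL FLATTEN(input => obj.value:"columns") col
WHERE ah.query_start_time > DATEADD('day', -7, CURRENT_TIMESTAMP());

Each result row identifies one column read by one query, together with the user who ran it.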

A clustering key was defined on a table, but it is no longer needed. How can the key be removed?

A. ALTER TABLE <TABLE NAME> PURGE CLUSTERING KEY

B. ALTER TABLE <TABLE NAME> DELETE CLUSTERING KEY

C. ALTER TABLE <TABLE NAME> DROP CLUSTERING KEY

D. ALTER TABLE <TABLE NAME> REMOVE CLUSTERING KEY

Suggested answer: C

Explanation:

To remove a clustering key that was previously defined on a table in Snowflake, the correct SQL command is ALTER TABLE <TABLE NAME> DROP CLUSTERING KEY. This command removes the existing clustering key from the table, after which Snowflake will no longer re-cluster data based on that key during maintenance operations or after data loading operations.
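For example, the key can be removed and the change confirmed by checking the cluster_by column in the SHOW TABLES output (my_table is an illustrative name):

ALTER TABLE my_table DROP CLUSTERING KEY;

SHOW TABLES LIKE 'my_table';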

What are characteristics of Snowflake network policies? (Select TWO).

A. They can be set for any Snowflake Edition.

B. They can be applied to roles.

C. They restrict or enable access to specific IP addresses.

D. They are activated using ALTER DATABASE SQL commands.

E. They can only be managed using the ORGADMIN role.

Suggested answer: A, C

Explanation:

Snowflake network policies are a security feature that allows administrators to control access to Snowflake by specifying allowed and blocked IP address ranges. These policies apply to all editions of Snowflake, making them widely applicable across different Snowflake environments. They are specifically designed to restrict or enable access based on the originating IP addresses of client requests, adding an extra layer of security.

Network policies are not applied to roles but are set at the account or user level. They are not activated using ALTER DATABASE SQL commands but are managed through ALTER ACCOUNT or ALTER NETWORK POLICY commands. The management of network policies does not exclusively require the ORGADMIN role; instead, they can be managed by users with the necessary privileges on the account.
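A minimal sketch of creating a policy and activating it at the account level (the policy name and IP ranges are illustrative):

CREATE NETWORK POLICY office_only
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

ALTER ACCOUNT SET NETWORK_POLICY = office_only;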

Which categories are included in the execution time summary in a Query Profile? (Select TWO).

A. Pruning

B. Spilling

C. Initialization

D. Local Disk I/O

E. Percentage of data read from cache

Suggested answer: A, C

Explanation:

In the execution time summary of a Query Profile in Snowflake, the categories included provide insights into various aspects of query execution. 'Pruning' refers to the process by which Snowflake reduces the amount of data scanned by eliminating partitions of data that are not relevant to the query, thus improving performance. 'Initialization' represents the time taken for query planning and setup before actual execution begins. These metrics are crucial for understanding and optimizing query performance.

Which command can be used to list all network policies available in an account?

A. DESCRIBE SESSION POLICY

B. DESCRIBE NETWORK POLICY

C. SHOW SESSION POLICIES

D. SHOW NETWORK POLICIES

Suggested answer: D

Explanation:

To list all network policies available in an account, the correct command is SHOW NETWORK POLICIES. Network policies in Snowflake are used to define and enforce rules for how users can connect to Snowflake, including IP whitelisting and other connection requirements. The SHOW NETWORK POLICIES command provides a list of all network policies defined within the account, along with their details.

The DESCRIBE SESSION POLICY and DESCRIBE NETWORK POLICY commands each describe a single named policy and require a policy name, so neither lists all policies in an account. The SHOW SESSION POLICIES command is also incorrect because session policies govern session behavior (such as idle timeouts), not network access.

Using SHOW NETWORK POLICIES without any additional parameters will display all network policies in the account, which is useful for administrators to review and manage the security configurations pertaining to network access.
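Once a policy of interest appears in the listing, its allowed and blocked IP lists can be inspected by describing that single named policy (office_only is an illustrative name):

DESCRIBE NETWORK POLICY office_only;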

What should be considered when deciding to use a secure view? (Select TWO).

A. No details of the query execution plan will be available in the query profiler.

B. Once created there is no way to determine if a view is secure or not.

C. Secure views do not take advantage of the same internal optimizations as standard views.

D. It is not possible to create secure materialized views.

E. The view definition of a secure view is still visible to users by way of the information schema.

Suggested answer: A, C

Explanation:

When deciding to use a secure view, several considerations come into play, especially concerning security and performance:

A . No details of the query execution plan will be available in the query profiler: Secure views are designed to prevent the exposure of the underlying data and the view definition to unauthorized users. Because of this, the detailed execution plans for queries against secure views are not available in the query profiler. This is intended to protect sensitive data from being inferred through the execution plan.

C . Secure views do not take advantage of the same internal optimizations as standard views: To prevent the underlying data from being exposed indirectly (for example, through optimizer shortcuts or error messages), Snowflake bypasses certain query optimizations for secure views. This can introduce additional processing overhead compared to standard views.

B . Once created, there is no way to determine if a view is secure or not is incorrect because metadata about whether a view is secure can be retrieved from the INFORMATION_SCHEMA views or by using the SHOW VIEWS command.

D . It is not possible to create secure materialized views is incorrect because Snowflake does support secure materialized views, created with CREATE SECURE MATERIALIZED VIEW.

E . The view definition of a secure view is still visible to users by way of the information schema is incorrect because secure views specifically hide the view definition from users who do not have the privilege to view it, ensuring that sensitive information in the definition is not exposed.
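A view is marked secure at creation time, and its status can later be confirmed via the is_secure column in the SHOW VIEWS output (the view and table names are illustrative):

CREATE SECURE VIEW regional_orders AS
  SELECT order_id, region FROM orders WHERE region = 'North America';

SHOW VIEWS LIKE 'regional_orders';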

Which virtual warehouse consideration can help lower compute resource credit consumption?

A. Setting up a multi-cluster virtual warehouse

B. Resizing the virtual warehouse to a larger size

C. Automating the virtual warehouse suspension and resumption settings

D. Increasing the maximum cluster count parameter for a multi-cluster virtual warehouse

Suggested answer: C

Explanation:

One key strategy to lower compute resource credit consumption in Snowflake is by automating the suspension and resumption of virtual warehouses. Virtual warehouses consume credits when they are running, and managing their operational times effectively can lead to significant cost savings.

A . Setting up a multi-cluster virtual warehouse increases parallelism and throughput but does not directly lower credit consumption. It is more about performance scaling than cost efficiency.

B . Resizing the virtual warehouse to a larger size increases the compute resources available for processing queries, which increases the credit consumption rate. This option does not help in lowering costs.

C . Automating the virtual warehouse suspension and resumption settings: This is a direct method to manage credit consumption efficiently. By automatically suspending a warehouse when it is not in use and resuming it when needed, you can avoid consuming credits during periods of inactivity. Snowflake allows warehouses to be configured to automatically suspend after a specified period of inactivity and to automatically resume when a query is submitted that requires the warehouse.

D . Increasing the maximum cluster count parameter for a multi-cluster virtual warehouse would potentially increase credit consumption by allowing more clusters to run simultaneously. It is used to scale up resources for performance, not to reduce costs.

Automating the operational times of virtual warehouses ensures that you only consume compute credits when the warehouse is actively being used for queries, thereby optimizing your Snowflake credit usage.
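For example, auto-suspend and auto-resume can be configured when creating or altering a warehouse; AUTO_SUSPEND is specified in seconds (the warehouse name and timeout are illustrative):

ALTER WAREHOUSE my_wh SET
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;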

Total 627 questions