
Snowflake SnowPro Core Practice Test - Questions Answers, Page 60


Which Snowflake feature or tool helps troubleshoot issues in SQL query expressions that commonly cause performance bottlenecks?

A. Persisted query results
B. QUERY_HISTORY View
C. Query acceleration service
D. Query Profile
Suggested answer: D

Explanation:

The Snowflake feature that helps troubleshoot issues in SQL query expressions and identify common performance bottlenecks is the Query Profile. The Query Profile provides a detailed breakdown of a query's execution plan, including the time and resources consumed by each operation. It visualizes the steps involved in query execution, highlighting areas that may be causing inefficiencies, such as full table scans, large or "exploding" joins, and operations that spill to disk.

By examining the Query Profile, developers and database administrators can identify and troubleshoot performance issues, optimize query structure, and make informed decisions about potential schema or clustering changes to improve performance.

Snowflake Documentation on Query Profile: Using the Query Profile
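The Query Profile is opened per query from its query ID. As a hedged sketch (the warehouse name MY_WH is a placeholder), candidate query IDs can first be found with the QUERY_HISTORY table function, then inspected in the Query Profile tab in Snowsight:

```sql
-- Find recent long-running successful queries worth profiling.
-- MY_WH is a hypothetical warehouse name.
SELECT query_id,
       query_text,
       total_elapsed_time        -- milliseconds
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE warehouse_name = 'MY_WH'
  AND execution_status = 'SUCCESS'
ORDER BY total_elapsed_time DESC
LIMIT 10;
```

Each returned query_id can then be opened in Snowsight, where the Query Profile visualizes the operator tree for that execution.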

Which function returns the URL of a stage using the stage name as the input?

A. BUILD_STAGE_FILE_URL
B. BUILD_SCOPED_FILE_URL
C. GET_PRESIGNED_URL
D. GET_STAGE_LOCATION
Suggested answer: D

Explanation:

The function that returns the URL of a stage when given the stage name as input is GET_STAGE_LOCATION. Called as GET_STAGE_LOCATION(@stage_name), it returns the URL of the cloud storage location associated with the named stage. The other options operate on individual files rather than the stage itself: BUILD_STAGE_FILE_URL and BUILD_SCOPED_FILE_URL return a permanent and a scoped URL, respectively, for a specific staged file, while GET_PRESIGNED_URL generates a temporary, pre-signed URL for a staged file so that it can be downloaded without Snowflake credentials.

Snowflake Documentation on File Functions
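The contrast between these file functions can be sketched as follows; my_stage and the file path are hypothetical names used only for illustration:

```sql
-- my_stage is a placeholder internal stage.
CREATE STAGE IF NOT EXISTS my_stage;

-- Returns the URL (storage location) of the stage itself,
-- taking only the stage name as input.
SELECT GET_STAGE_LOCATION(@my_stage);

-- By contrast, GET_PRESIGNED_URL targets one file in the stage and
-- returns a temporary pre-signed URL, valid here for 3600 seconds.
SELECT GET_PRESIGNED_URL(@my_stage, 'data/file.csv', 3600);
```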

When does a materialized view get suspended in Snowflake?

A. When a column is added to the base table
B. When a column is dropped from the base table
C. When a DML operation is run on the base table
D. When the base table is reclustered
Suggested answer: B

Explanation:

A materialized view in Snowflake gets suspended when structural changes that could impact the view's integrity are made to the base table, such as when a column is dropped from the base table. Dropping a column from the base table on which a materialized view is defined can invalidate the view's data, as the view might rely on the column that is being removed. To maintain data consistency and prevent the materialized view from serving stale or incorrect data, Snowflake automatically suspends the materialized view.

Upon suspension, the materialized view does not reflect changes to the base table until it is refreshed or re-created. This ensures that only accurate and current data is presented to users querying the materialized view.

Snowflake Documentation on Materialized Views: Materialized Views
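A minimal sketch of this behavior, using hypothetical object names (materialized views require Enterprise Edition or higher):

```sql
-- Hypothetical base table and materialized view.
CREATE TABLE t (id INT, val STRING);
CREATE MATERIALIZED VIEW mv AS SELECT id, val FROM t;

-- Dropping a column referenced by the view suspends/invalidates it.
ALTER TABLE t DROP COLUMN val;

-- The suspended (invalid) state is visible in the view metadata.
SHOW MATERIALIZED VIEWS LIKE 'mv';
```

Adding a column, running DML, or reclustering the base table does not suspend the view; Snowflake's background maintenance keeps it current in those cases.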

What does a table with a clustering depth of 1 mean in Snowflake?

A. The table has only 1 micro-partition.
B. The table has 1 overlapping micro-partition.
C. The table has no overlapping micro-partitions.
D. The table has no micro-partitions.
Suggested answer: C

Explanation:

In Snowflake, a table's clustering depth indicates the degree of micro-partition overlap for the specified clustering keys. A clustering depth of 1 means the table has no overlapping micro-partitions; it is the smallest possible depth for a populated table and indicates that the data is well-clustered on those keys. Well-clustered data leads to more efficient query performance, because micro-partition pruning can skip more data during query execution.

Snowflake Documentation on Clustering: Understanding Clustering Depth
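Clustering depth can be checked directly with system functions; the table and column names below are hypothetical:

```sql
-- Average clustering depth for the orders table on order_date.
SELECT SYSTEM$CLUSTERING_DEPTH('orders', '(order_date)');

-- Returns the depth plus overlap histograms as a JSON document.
SELECT SYSTEM$CLUSTERING_INFORMATION('orders', '(order_date)');
```

A result of 1 from the first call corresponds to the "no overlapping micro-partitions" state described above.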

Which Snowflake object contains all the information required to share a database?

A. Private listing
B. Secure view
C. Sequence
D. Share
Suggested answer: D

Explanation:

In Snowflake, a Share is the object that contains all the information required to share a database with other Snowflake accounts. Shares are used to securely share data stored in Snowflake tables and views, enabling data providers to grant data consumers access to their datasets without duplicating data. When a database is shared, it can include one or more schemas, and each schema can contain tables, views, or both.

Snowflake Documentation on Shares: Shares
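The provider-side flow can be sketched as follows; the share, database, and consumer account identifiers are placeholders:

```sql
-- The share object carries the grants that define what is shared.
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;

-- Make the share visible to a consumer account (placeholder locator).
ALTER SHARE sales_share ADD ACCOUNTS = consumer_account;
```

The consumer then creates a read-only database from the share; no data is copied between accounts.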

Based on Snowflake recommendations, when creating a hierarchy of custom roles, the top-most custom role should be assigned to which role?

A. ACCOUNTADMIN
B. SECURITYADMIN
C. SYSADMIN
D. USERADMIN
Suggested answer: C

Explanation:

Snowflake recommends creating a hierarchy of custom roles, with the top-most custom role assigned to the SYSADMIN role. Because SYSADMIN has privileges to create warehouses, databases, and other objects, rolling all custom roles up to SYSADMIN ensures that system administrators retain the ability to manage every object created by users holding custom roles. The ACCOUNTADMIN role, by contrast, is the most powerful role in the account and should be reserved for account-level administration and granted to a small number of users, not used as the parent of day-to-day object-access roles.

Snowflake Documentation on Access Control: Managing Access Control
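A minimal sketch of this recommended hierarchy, using hypothetical custom role names:

```sql
-- Hypothetical custom roles for an analytics team.
CREATE ROLE analyst;
CREATE ROLE analytics_admin;

-- Build the hierarchy: analyst rolls up to analytics_admin ...
GRANT ROLE analyst TO ROLE analytics_admin;

-- ... and the top-most custom role rolls up to SYSADMIN,
-- per Snowflake's recommended pattern.
GRANT ROLE analytics_admin TO ROLE sysadmin;
```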

Which Snowflake table type is only visible to the user who creates it, can have the same name as permanent tables in the same schema, and is dropped at the end of the session?

A. Temporary
B. Local
C. User
D. Transient
Suggested answer: A

Explanation:

In Snowflake, a Temporary table is a type of table that is only visible to the user who creates it, can have the same name as permanent tables in the same schema, and is automatically dropped at the end of the session in which it was created. Temporary tables are designed for transient data processing needs, where data is needed for the duration of a specific task or session but not beyond. Since they are automatically cleaned up at the end of the session, they help manage storage usage efficiently and ensure that sensitive data is not inadvertently persisted.

Snowflake Documentation on Temporary Tables: Temporary Tables
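A brief sketch with a hypothetical table name:

```sql
-- Session-scoped table; if a permanent table of the same name exists
-- in the schema, the temporary table takes precedence for this session.
CREATE TEMPORARY TABLE customers (id INT, name STRING);

INSERT INTO customers VALUES (1, 'Ada');

-- Dropped automatically at session end; no explicit DROP is required.
SELECT * FROM customers;
```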

The property mins_to_bypass_network_policy is set at which level?

A. User
B. Role
C. Account
D. Organization
Suggested answer: A

Explanation:

The MINS_TO_BYPASS_NETWORK_POLICY property is set at the user level. It specifies the number of minutes during which a specific user may bypass an active network policy, which is useful when that user temporarily needs to connect from an IP address the policy does not allow. The value appears in the output of SHOW USERS, and it can only be set by contacting Snowflake Support; it cannot be set directly by account administrators.

Snowflake Documentation on Network Policies: Network Policies
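The per-user nature of the property can be observed in user metadata; jsmith is a placeholder user name, and this sketch assumes the standard RESULT_SCAN pattern for reading SHOW output:

```sql
-- The property appears as a column per user in SHOW USERS output.
SHOW USERS LIKE 'jsmith';

-- Read the relevant column from the last SHOW command's result set
-- (SHOW output column names are lowercase and must be double-quoted).
SELECT "name", "mins_to_bypass_network_policy"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
```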

Which privilege is required on a virtual warehouse to abort any existing executing queries?

A. USAGE
B. OPERATE
C. MODIFY
D. MONITOR
Suggested answer: B

Explanation:

The privilege required on a virtual warehouse to abort any existing executing queries is OPERATE. The OPERATE privilege on a virtual warehouse allows a user to perform operational tasks on the warehouse, including starting, stopping, and restarting the warehouse, as well as aborting running queries. This level of control is essential for managing resource utilization and ensuring that the virtual warehouse operates efficiently.

Snowflake Documentation on Access Control: Access Control Privileges
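As a sketch with placeholder warehouse and role names:

```sql
-- Grant the OPERATE privilege on a hypothetical warehouse.
GRANT OPERATE ON WAREHOUSE my_wh TO ROLE support_role;

-- A role with OPERATE can abort all queries running on the warehouse:
ALTER WAREHOUSE my_wh ABORT ALL QUERIES;

-- ...and can also suspend or resume it:
ALTER WAREHOUSE my_wh SUSPEND;
```

USAGE only permits running queries on the warehouse, MODIFY permits altering its properties (such as size), and MONITOR permits viewing its usage statistics.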

What is the PRIMARY factor that determines the cost of using a virtual warehouse in Snowflake?

A. The type of SQL statements executed
B. The number of tables or databases queried
C. The amount of data stored in the warehouse
D. The length of time the compute resources in each cluster run
Suggested answer: D

Explanation:

The primary factor that determines the cost of using a virtual warehouse in Snowflake is the length of time the compute resources in each cluster run. Snowflake's pricing model for compute is based on Snowflake credits, which are consumed according to how long a warehouse runs and its size (larger warehouses burn credits at a higher per-second rate). The longer the compute resources run, the more credits are consumed, which directly determines the cost; the type of statements, the number of objects queried, and stored data volume do not factor into warehouse (compute) billing.

Snowflake Documentation on Virtual Warehouses: Understanding and Managing Warehouse Credit Usage
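Actual credit consumption per warehouse can be reviewed from account usage views; this sketch assumes the querying role has access to the SNOWFLAKE.ACCOUNT_USAGE schema:

```sql
-- Credit consumption per warehouse over the last 7 days.
SELECT warehouse_name,
       SUM(credits_used) AS credits_used_7d
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_used_7d DESC;
```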

Total 627 questions