
Snowflake SnowPro Core Practice Test - Questions Answers, Page 59

A Snowflake user wants to optimize performance for a query that queries only a small number of rows in a table. The rows require significant processing. The data in the table does not change frequently.

What should the user do?

A. Add a clustering key to the table.
B. Add the search optimization service to the table.
C. Create a materialized view based on the query.
D. Enable the query acceleration service for the virtual warehouse.
Suggested answer: C

Explanation:

In a scenario where a Snowflake user queries only a small number of rows that require significant processing and the data in the table does not change frequently, the most effective way to optimize performance is by creating a materialized view based on the query. Materialized views store the result of the query and can significantly reduce the computation time for queries that are executed frequently over unchanged data.

Why Materialized Views: Materialized views precompute and store the result of the query. This is especially beneficial for queries that require heavy processing. Since the data does not change frequently, the materialized view will not need to be refreshed often, making it an ideal solution for this use case.

Implementation Steps:

To create a materialized view, use the following SQL command:

CREATE MATERIALIZED VIEW my_materialized_view AS SELECT ... FROM my_table WHERE ...;

When the query is run, Snowflake uses the precomputed results from the materialized view, thus skipping the need for recalculating the data and improving query performance.

To use the overwrite option on insert, which privilege must be granted to the role?

A. TRUNCATE
B. DELETE
C. UPDATE
D. SELECT
Suggested answer: B

Explanation:

To use the overwrite option on insert in Snowflake, the DELETE privilege must be granted to the role. This is because overwriting data during an insert operation implicitly involves deleting the existing data before inserting the new data.

Understanding the Overwrite Option: The overwrite option (INSERT OVERWRITE) allows you to replace existing data in a table with new data. This operation is particularly useful for batch-loading scenarios where the entire dataset needs to be refreshed.

Why DELETE Privilege is Required: Since the overwrite operation involves removing existing rows in the table, the executing role must have the DELETE privilege to carry out both the deletion of old data and the insertion of new data.

Granting DELETE Privilege:

To grant the DELETE privilege to a role, an account administrator can execute the following SQL command:


GRANT DELETE ON TABLE my_table TO ROLE my_role;
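For context, a sketch of the overwrite insert itself (the table names are illustrative):

```sql
-- INSERT OVERWRITE truncates the target table, then inserts the new rows.
-- The executing role therefore needs DELETE (for the implicit truncate)
-- in addition to INSERT and SELECT privileges.
INSERT OVERWRITE INTO my_table
    SELECT * FROM my_staging_table;
```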

A user needs to MINIMIZE the cost of large tables that are used to store transitory data. The data does not need to be protected against failures, because the data can be reconstructed outside of Snowflake.

What table type should be used?

A. Permanent
B. Transient
C. Temporary
D. External
Suggested answer: B

Explanation:

For minimizing the cost of large tables that are used to store transitory data, which does not need to be protected against failures because it can be reconstructed outside of Snowflake, the best table type to use is Transient. Transient tables in Snowflake are designed for temporary or transitory data storage and offer reduced storage costs compared to permanent tables. However, unlike temporary tables, they persist across sessions until explicitly dropped.

Why Transient Tables: Transient tables provide a cost-effective solution for storing data that is temporary but needs to be available longer than a single session. They have lower data storage costs because Snowflake does not maintain historical data (Time Travel) for as long as it does for permanent tables.

Creating a Transient Table:

To create a transient table, use the TRANSIENT keyword in the CREATE TABLE statement:

CREATE TRANSIENT TABLE my_transient_table (...);

Use Case Considerations: Transient tables are ideal for scenarios where the data is not critical, can be easily recreated, and where cost optimization is a priority. They are suitable for development, testing, or staging environments where data longevity is not a concern.

What is the default access of a securable object until other access is granted?

A. No access
B. Read access
C. Write access
D. Full access
Suggested answer: A

Explanation:

In Snowflake, the default access level for any securable object (such as a table, view, or schema) is 'No access' until explicit access is granted. This means that when an object is created, only the owner of the object and roles with the necessary privileges can access it. Other users or roles will not have any form of access to the object until it is explicitly granted.

This design adheres to the principle of least privilege, ensuring that access to data is tightly controlled and that users and roles only have the access necessary for their functions. To grant access, the owner of the object or a role with the GRANT option can use the GRANT statement to provide specific privileges to other users or roles.

For example, to grant SELECT access on a table to a specific role, you would use a command similar to:

GRANT SELECT ON TABLE my_table TO ROLE my_role;

What happens when a suspended virtual warehouse is resized in Snowflake?

A. It will return an error.
B. It will return a warning.
C. The suspended warehouse is resumed and new compute resources are provisioned immediately.
D. The additional compute resources are provisioned when the warehouse is resumed.
Suggested answer: D

Explanation:

In Snowflake, resizing a virtual warehouse that is currently suspended does not immediately provision the new compute resources. Instead, the change in size is recorded, and the additional compute resources are provisioned when the warehouse is resumed. This means that the action of resizing a suspended warehouse does not cause it to resume operation automatically. The warehouse remains suspended until an explicit command to resume it is issued, or until it automatically resumes upon the next query execution that requires it.

This behavior allows for efficient management of compute resources, ensuring that credits are not consumed by a warehouse that is not in use, even if its size is adjusted while it is suspended.
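As a sketch (the warehouse name is illustrative), resizing while suspended and then resuming:

```sql
-- Record a new size while the warehouse is suspended; no credits are
-- consumed and no compute is provisioned yet.
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';

-- The LARGE compute resources are provisioned only at resume time.
ALTER WAREHOUSE my_wh RESUME;
```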

How does Snowflake handle the data retention period for a table if a stream has not been consumed?

A. The data retention period is reduced to a minimum of 14 days.
B. The data retention period is permanently extended for the table.
C. The data retention period is temporarily extended to the stream's offset.
D. The data retention period is not affected by the stream consumption.
Suggested answer: C

Explanation:

In Snowflake, the use of streams impacts how the data retention period for a table is handled, particularly in scenarios where the stream has not been consumed. The key point to understand is that Snowflake's streams are designed to capture data manipulation language (DML) changes such as INSERTS, UPDATES, and DELETES that occur on a source table. Streams maintain a record of these changes until they are consumed by a DML operation or a COPY command that references the stream.

When a stream is created on a table and remains unconsumed, Snowflake extends the data retention period of the table to ensure that the changes captured by the stream are preserved. This extension is specifically up to the point in time represented by the stream's offset, which effectively ensures that the data necessary for consuming the stream's contents is retained. This mechanism is in place to prevent data loss and ensure the integrity of the stream's data, facilitating accurate and reliable data processing and analysis based on the captured DML changes.

This behavior underscores the importance of consuming streams promptly: the temporary retention extension keeps the stream's change data available, but it can increase storage costs until the stream is consumed.

Snowflake Documentation on Streams: Using Streams

Snowflake Documentation on Data Retention: Understanding Data Retention
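A minimal sketch of this lifecycle (object names are illustrative):

```sql
-- While my_stream remains unconsumed, Snowflake retains the table data
-- needed to read changes from the stream's offset.
CREATE OR REPLACE STREAM my_stream ON TABLE my_table;

-- Consuming the stream in a DML statement advances its offset, after
-- which the normal retention period applies again.
INSERT INTO my_target SELECT * FROM my_stream;
```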

When unloading data with the COPY INTO <location> command, what is the purpose of the PARTITION BY <expression> parameter option?

A. To sort the contents of the output file by the specified expression.
B. To delimit the records in the output file using the specified expression.
C. To include a new column in the output using the specified window function expression.
D. To split the output into multiple files, one for each distinct value of the specified expression.
Suggested answer: D

Explanation:

The PARTITION BY <expression> parameter option in the COPY INTO <location> command is used to split the output into multiple files based on the distinct values of the specified expression. This feature is particularly useful for organizing large datasets into smaller, more manageable files and can help with optimizing downstream processing or consumption of the data. For example, if you are unloading a large dataset of transactions and use PARTITION BY DATE(transactions.transaction_date), Snowflake generates a separate output file for each unique transaction date, facilitating easier data management and access.

This approach to data unloading can significantly improve efficiency when dealing with large volumes of data by enabling parallel processing and simplifying data retrieval based on specific criteria or dimensions.

Snowflake Documentation on Unloading Data: COPY INTO <location>
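A sketch of the transactions example above (the stage, table, and column names are assumptions):

```sql
-- Produce one set of output files per distinct transaction date.
-- PARTITION BY takes a string expression; each distinct value becomes
-- a path prefix for the corresponding output files.
COPY INTO @my_stage/transactions/
  FROM (SELECT * FROM transactions)
  PARTITION BY ('date=' || TO_VARCHAR(transaction_date, 'YYYY-MM-DD'))
  FILE_FORMAT = (TYPE = PARQUET);
```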

What are potential impacts of storing non-native values like dates and timestamps in a variant column in Snowflake?

A. Faster query performance and increased storage consumption
B. Slower query performance and increased storage consumption
C. Faster query performance and decreased storage consumption
D. Slower query performance and decreased storage consumption
Suggested answer: B

Explanation:

Storing non-native values, such as dates and timestamps, in a VARIANT column in Snowflake can lead to slower query performance and increased storage consumption. VARIANT is a semi-structured data type that allows storing JSON, AVRO, ORC, Parquet, or XML data in a single column. When non-native data types are stored as VARIANT, Snowflake must perform implicit conversion to process these values, which can slow down query execution. Additionally, because the VARIANT data type is designed to accommodate a wide variety of data formats, it often requires more storage space compared to storing data in native, strongly-typed columns that are optimized for specific data types.

The performance impact arises from the need to parse and interpret the semi-structured data on the fly during query execution, as opposed to directly accessing and operating on optimally stored data in its native format. Furthermore, the increased storage consumption is a result of the overhead associated with storing data in a format that is less space-efficient than the native formats optimized for specific types of data.

Snowflake Documentation on Semi-Structured Data: Semi-Structured Data
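To illustrate the trade-off (table and column names are hypothetical):

```sql
-- A timestamp stored inside a VARIANT must be cast on every read,
-- while a native TIMESTAMP column is stored and scanned in its
-- optimized typed representation.
CREATE OR REPLACE TABLE events (
    raw      VARIANT,       -- semi-structured payload with embedded timestamp
    event_ts TIMESTAMP_NTZ  -- native, strongly typed column
);

SELECT raw:created_at::TIMESTAMP_NTZ  -- parse + cast on every row read
FROM events;
```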

Which views are included in the data_sharing_usage schema? (Select TWO).

A. ACCESS_HISTORY
B. DATA_TRANSFER_HISTORY
C. WAREHOUSE_METERING_HISTORY
D. MONETIZED_USAGE_DAILY
E. LISTING_TELEMETRY_DAILY
Suggested answer: D, E

Explanation:

The data_sharing_usage schema contains views that track provider listing activity, including MONETIZED_USAGE_DAILY and LISTING_TELEMETRY_DAILY. ACCESS_HISTORY, DATA_TRANSFER_HISTORY, and WAREHOUSE_METERING_HISTORY belong to the account_usage schema instead.

https://docs.snowflake.com/en/sql-reference/data-sharing-usage

How does the Access_History view enhance overall data governance pertaining to read and write operations? (Select TWO).

A. Shows how the accessed data was moved from the source to the target objects
B. Provides a unified picture of what data was accessed and when it was accessed
C. Protects sensitive data from unauthorized access while allowing authorized users to access it at query runtime
D. Identifies columns with personal information and tags them so masking policies can be applied to protect sensitive data
E. Determines whether a given row in a table can be accessed by the user by filtering the data based on a given policy
Suggested answer: B, E

Explanation:

The ACCESS_HISTORY view in Snowflake is a powerful tool for enhancing data governance, especially concerning monitoring and auditing data access patterns for both read and write operations. The key ways in which ACCESS_HISTORY enhances overall data governance are:

B. Provides a unified picture of what data was accessed and when it was accessed: This view logs details about query executions, including the objects (tables, views) accessed and the timestamps of these accesses. It's instrumental in auditing and compliance scenarios, where understanding the access patterns to sensitive data is critical.

E. Determines whether a given row in a table can be accessed by the user by filtering the data based on a given policy: While this option is a bit of a misinterpretation of what ACCESS_HISTORY directly offers, it indirectly supports data governance by providing the information necessary to analyze access patterns. This analysis can then inform policy decisions, such as implementing Row-Level Security (RLS) to restrict access to specific rows based on user roles or attributes.

ACCESS_HISTORY does not automatically apply data masking or tag columns with personal information. However, the insights derived from analyzing ACCESS_HISTORY can be used to identify sensitive data and inform the application of masking policies or other security measures.

Snowflake Documentation on ACCESS_HISTORY: Access History
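For example, a governance audit query against the view might look like this (the user name is a placeholder; the view lives in the SNOWFLAKE.ACCOUNT_USAGE schema and is populated with some latency):

```sql
SELECT query_start_time,
       user_name,
       direct_objects_accessed,  -- objects named in the query (reads)
       objects_modified          -- objects written to by the query
FROM snowflake.account_usage.access_history
WHERE user_name = 'MY_USER'
ORDER BY query_start_time DESC
LIMIT 100;
```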

Total 627 questions