
Snowflake COF-C02 Practice Test - Questions Answers, Page 59

What causes objects in a data share to become unavailable to a consumer account?

A. The DATA_RETENTION_TIME_IN_DAYS parameter in the consumer account is set to 0.
B. The consumer account runs the GRANT IMPORTED PRIVILEGES command on the data share every 24 hours.
C. The objects in the data share are being deleted and the grant pattern is not re-applied systematically.
D. The consumer account acquires the data share through a private data exchange.
Suggested answer: C

Explanation:

Objects in a data share become unavailable to a consumer account if the objects in the data share are deleted or if the permissions on these objects are altered without re-applying the grant permissions systematically. This is because the sharing mechanism in Snowflake relies on explicit grants of permissions on specific objects (like tables, views, or secure views) to the share. If these objects are deleted or if their permissions change without updating the share accordingly, consumers can lose access.

The DATA_RETENTION_TIME_IN_DAYS parameter does not directly affect the availability of shared objects, as it controls how long Snowflake retains historical data for time travel and does not impact data sharing permissions.

Running the GRANT IMPORTED PRIVILEGES command in the consumer account is not related to the availability of shared objects; this command is used to grant privileges on imported objects within the consumer's account and is not a routine maintenance command that would need to be run regularly.

Acquiring a data share through a private data exchange does not inherently make objects unavailable; issues would only arise if there were problems with the share configuration or if the shared objects were deleted or had their permissions altered without re-granting access to the share.
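For illustration, a provider-side pattern for re-applying grants to a share after an object has been dropped and recreated might look like the following (the database, schema, table, and share names are hypothetical):

GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;

Re-running such grants after recreating a shared object restores its visibility to consumer accounts.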

Which chart type is supported in Snowsight for Snowflake users to visualize data with dashboards?

A. Area chart
B. Box plot
C. Heat grid
D. Pie chart
Suggested answer: C

Explanation:

Snowsight, Snowflake's web interface for exploring, analyzing, and visualizing data, supports a fixed set of chart types for worksheets and dashboards. According to the Snowsight documentation, the supported chart types are bar charts, line charts, scatterplots, heat grids, and scorecards. Of the options listed, only the heat grid (C) is among them.

Heat grids are useful for showing how a measure varies across two dimensions, with the color of each cell representing the magnitude of the value.

Area charts (A), box plots (B), and pie charts (D) are common visualization types in other analytics tools, but they are not offered in Snowsight. Because Snowflake continues to enhance Snowsight, consult the current documentation or release notes for the most up-to-date list of supported chart types.

At what level is the MIN_DATA_RETENTION_TIME_IN_DAYS parameter set?

A. Account
B. Database
C. Schema
D. Table
Suggested answer: A

Explanation:

The MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the Account level in Snowflake. It enforces a minimum Time Travel retention period across the account: the effective retention for an object is the greater of MIN_DATA_RETENTION_TIME_IN_DAYS and the object's DATA_RETENTION_TIME_IN_DAYS, so object-level settings cannot reduce retention below the account minimum. Time Travel allows users to access and query data as it existed at previous points in time.

Here's how to understand and adjust this parameter:

Purpose of MIN_DATA_RETENTION_TIME_IN_DAYS: This parameter is crucial for managing data lifecycle and compliance requirements within Snowflake. It determines the minimum time frame for which you can perform operations like restoring deleted objects or accessing historical versions of data.

Setting the Parameter: Only account administrators can set or modify this parameter. It is done at the account level, impacting all databases and schemas within the account. The setting can be adjusted based on the organization's data retention policy.

Adjusting the Parameter:

To view the current setting, use:

SHOW PARAMETERS LIKE 'MIN_DATA_RETENTION_TIME_IN_DAYS';

To change the setting, an account administrator can execute:

ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = <number_of_days>;
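As an illustrative example (the values are hypothetical): if MIN_DATA_RETENTION_TIME_IN_DAYS is set to 5 at the account level and a table's DATA_RETENTION_TIME_IN_DAYS is set to 1, the table's effective Time Travel retention is 5 days, because the effective retention is the greater of the two values.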

What is the MINIMUM size of a table for which Snowflake recommends considering adding a clustering key?

A. 1 Kilobyte (KB)
B. 1 Megabyte (MB)
C. 1 Gigabyte (GB)
D. 1 Terabyte (TB)
Suggested answer: D

Explanation:

Snowflake recommends considering adding a clustering key to a table when its size reaches 1 Terabyte (TB) or larger. Clustering keys help optimize the storage and query performance by organizing the data in a table based on the specified columns. This is particularly beneficial for large tables where data retrieval can become inefficient without proper clustering.

Why Clustering Keys Are Important: Clustering keys ensure that data stored in Snowflake is physically ordered in a way that aligns with the most frequent access patterns, thereby reducing the amount of scanned data during queries and improving performance.

Recommendation Basis: The recommendation for tables of size 1 TB or larger is based on the observation that smaller tables generally do not benefit as much from clustering, given Snowflake's architecture. However, as tables grow in size, the benefits of clustering become more pronounced.

Implementing Clustering Keys:

To set a clustering key for a table, you can use the CLUSTER BY clause during table creation or alter an existing table to add it:

CREATE TABLE my_table (... ) CLUSTER BY (column1, column2);

Or for an existing table:

ALTER TABLE my_table CLUSTER BY (column1, column2);
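To gauge whether clustering is helping, you can inspect clustering statistics with the SYSTEM$CLUSTERING_INFORMATION system function (the table and column names here are placeholders):

SELECT SYSTEM$CLUSTERING_INFORMATION('my_table', '(column1, column2)');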

Which function returns an integer between 0 and 100 when used to calculate the similarity of two strings?

A. APPROXIMATE_SIMILARITY
B. JAROWINKLER_SIMILARITY
C. APPROXIMATE_JACCARD_INDEX
D. MINHASH_COMBINE
Suggested answer: B

Explanation:

The JAROWINKLER_SIMILARITY function in Snowflake returns an integer between 0 and 100, indicating the similarity of two strings based on the Jaro-Winkler similarity algorithm. This function is useful for comparing strings and determining how closely they match each other.

Understanding JAROWINKLER_SIMILARITY: The Jaro-Winkler similarity metric is a measure of similarity between two strings. The score is a number between 0 and 100, where 100 indicates an exact match and lower scores indicate less similarity.

Usage Example: To compare two strings and get their similarity score, you can use:

SELECT JAROWINKLER_SIMILARITY('string1', 'string2') AS similarity_score;

Application Scenarios: This function is particularly useful in data cleaning, matching, and deduplication tasks where you need to identify similar but not identical strings, such as names, addresses, or product titles.
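For example, a deduplication-style query might pair rows whose names score above a chosen threshold (the table, columns, and the threshold of 90 are illustrative):

SELECT a.name, b.name FROM customers a JOIN customers b ON a.id < b.id WHERE JAROWINKLER_SIMILARITY(a.name, b.name) > 90;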

Which types of subqueries does Snowflake support? (Select TWO).

A. Uncorrelated scalar subqueries in WHERE clauses
B. Uncorrelated scalar subqueries in any place that a value expression can be used
C. EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be uncorrelated only
D. EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated only
E. EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated or uncorrelated
Suggested answer: B, E

Explanation:

Snowflake supports a variety of subquery types, including both correlated and uncorrelated subqueries. The correct answers are B and E, which highlight Snowflake's flexibility in handling subqueries within SQL queries.

Uncorrelated Scalar Subqueries: These are subqueries that can execute independently of the outer query. They return a single value and can be used anywhere a value expression is allowed, offering great flexibility in SQL queries.

EXISTS, ANY/ALL, and IN Subqueries: These subqueries are used in WHERE clauses to filter the results of the main query based on the presence or absence of matching rows in a subquery. Snowflake supports both correlated and uncorrelated versions of these subqueries, providing powerful tools for complex data analysis scenarios.

Examples and Usage:

Uncorrelated Scalar Subquery:

SELECT * FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);

Correlated EXISTS Subquery:

SELECT * FROM orders o WHERE EXISTS (SELECT 1 FROM customer c WHERE c.id = o.customer_id AND c.region = 'North America');
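To illustrate option B, an uncorrelated scalar subquery can also appear outside a WHERE clause, anywhere a value expression is allowed, such as in a SELECT list (the employee_id column is a hypothetical addition to the earlier example):

SELECT employee_id, salary, (SELECT AVG(salary) FROM employees) AS company_avg FROM employees;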

Which Snowflake data governance feature can support auditing when a user query reads column data?

A. Access History
B. Data classification
C. Column-level security
D. Object dependencies
Suggested answer: A

Explanation:

Access History in Snowflake is a feature designed to support auditing by tracking access to data within Snowflake, including when a user's query reads column data. It provides detailed information on queries executed, including the user who ran the query, the query text, and the objects (e.g., tables, views) accessed by the query. This feature is instrumental for auditing purposes, helping organizations to monitor and audit data access for security and compliance.
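For illustration, an audit query against the SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY view might list recent read access, including the objects and columns touched (the 7-day window is arbitrary):

SELECT query_id, query_start_time, user_name, direct_objects_accessed FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY WHERE query_start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP());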

A clustering key was defined on a table, but it is no longer needed. How can the key be removed?

A. ALTER TABLE <TABLE NAME> PURGE CLUSTERING KEY
B. ALTER TABLE <TABLE NAME> DELETE CLUSTERING KEY
C. ALTER TABLE <TABLE NAME> DROP CLUSTERING KEY
D. ALTER TABLE <TABLE NAME> REMOVE CLUSTERING KEY
Suggested answer: C

Explanation:

To remove a clustering key that was previously defined on a table in Snowflake, the correct SQL command is ALTER TABLE <TABLE NAME> DROP CLUSTERING KEY. This command removes the existing clustering key from the table, after which Snowflake will no longer re-cluster data based on that key during maintenance operations or after data loading operations.
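For example, with a hypothetical table name:

ALTER TABLE my_table DROP CLUSTERING KEY;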

What are characteristics of Snowflake network policies? (Select TWO).

A. They can be set for any Snowflake Edition.
B. They can be applied to roles.
C. They restrict or enable access to specific IP addresses.
D. They are activated using ALTER DATABASE SQL commands.
E. They can only be managed using the ORGADMIN role.
Suggested answer: A, C

Explanation:

Snowflake network policies are a security feature that allows administrators to control access to Snowflake by specifying allowed and blocked IP address ranges. These policies apply to all editions of Snowflake, making them widely applicable across different Snowflake environments. They are specifically designed to restrict or enable access based on the originating IP addresses of client requests, adding an extra layer of security.

Network policies are not applied to roles; they are set at the account or user level. They are not activated using ALTER DATABASE commands; a policy is created with CREATE NETWORK POLICY and activated with ALTER ACCOUNT ... SET NETWORK_POLICY (or ALTER USER ... SET NETWORK_POLICY for an individual user). Managing network policies does not require the ORGADMIN role; any role with the necessary privileges on the account can manage them.
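For illustration, a minimal sketch of creating and activating a network policy at the account level (the policy name and IP ranges are placeholders):

CREATE NETWORK POLICY corp_access_policy ALLOWED_IP_LIST = ('192.168.1.0/24') BLOCKED_IP_LIST = ('192.168.1.99');
ALTER ACCOUNT SET NETWORK_POLICY = corp_access_policy;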

Which categories are included in the execution time summary in a Query Profile? (Select TWO).

A. Pruning
B. Spilling
C. Initialization
D. Local Disk I/O
E. Percentage of data read from cache
Suggested answer: C, D

Explanation:

In the Query Profile, the Execution Time panel shows where time was spent while a query was processed, broken down into the categories Processing, Local Disk I/O, Remote Disk I/O, Network Communication, Synchronization, and Initialization. Of the listed options, Initialization (time spent setting up query processing) and Local Disk I/O (time during which processing was blocked by local disk access) are execution time categories. Pruning, spilling, and the percentage of data scanned from cache are reported in the Statistics section of the Query Profile, not in the execution time summary.
