
Snowflake SnowPro Core Practice Test - Questions Answers


A user is loading JSON documents composed of a huge array containing multiple records into Snowflake. The user enables the STRIP_OUTER_ARRAY file format option.

What does the STRIP_OUTER_ARRAY file format do?

A. It removes the last element of the outer array.
B. It removes the outer array structure and loads the records into separate table rows.
C. It removes the trailing spaces in the last element of the outer array and loads the records into separate table columns.
D. It removes the NULL elements from the JSON object, eliminating invalid data, and enables the ability to load the records.
Suggested answer: B

Explanation:

The STRIP_OUTER_ARRAY file format option in Snowflake is used when loading JSON documents that are composed of a large array containing multiple records. When this option is enabled, it removes the outer array structure, which allows each record within the array to be loaded as a separate row in the table. This is particularly useful for efficiently loading JSON data that is structured as an array of records.
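
As a minimal sketch (the file format, stage, and table names below are illustrative, and the target table is assumed to have a single VARIANT column):

  -- Define a JSON file format that strips the outer array
  CREATE OR REPLACE FILE FORMAT my_json_format
    TYPE = 'JSON'
    STRIP_OUTER_ARRAY = TRUE;

  -- Each element of the outer array is loaded as a separate row
  COPY INTO my_table
    FROM @my_stage/records.json
    FILE_FORMAT = (FORMAT_NAME = 'my_json_format');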

Snowflake Documentation on JSON File Format

[COF-C02] SnowPro Core Certification Exam Study Guide

What are the default Time Travel and Fail-safe retention periods for transient tables?

A. Time Travel - 1 day, Fail-safe - 1 day
B. Time Travel - 0 days, Fail-safe - 1 day
C. Time Travel - 1 day, Fail-safe - 0 days
D. Transient tables are retained in neither Fail-safe nor Time Travel
Suggested answer: C

Explanation:

Transient tables in Snowflake have a default Time Travel retention period of 1 day, which allows users to access historical data within the last 24 hours. However, transient tables do not have a Fail-safe period. Fail-safe is an additional layer of data protection that retains data beyond the Time Travel period for recovery purposes in case of extreme data loss. Since transient tables are designed for temporary or intermediate workloads with no requirement for long-term durability, they do not include a Fail-safe period.
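
A short sketch (the table name is illustrative) of how this retention setting can be inspected and tuned:

  -- Transient tables default to 1 day of Time Travel and no Fail-safe
  CREATE TRANSIENT TABLE staging_events (id INT, payload VARIANT);

  -- For transient tables the retention period can only be 0 or 1 days
  ALTER TABLE staging_events SET DATA_RETENTION_TIME_IN_DAYS = 0;

  SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE staging_events;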

Snowflake Documentation on Storage Costs for Time Travel and Fail-safe

What is a best practice after creating a custom role?

A. Create the custom role using the SYSADMIN role.
B. Assign the custom role to the SYSADMIN role.
C. Assign the custom role to the PUBLIC role.
D. Add _CUSTOM to all custom role names.
Suggested answer: B

Explanation:

Assigning the custom role to the SYSADMIN role is considered a best practice because it allows the SYSADMIN role to manage objects created by the custom role. This is important for maintaining proper access control and ensuring that the SYSADMIN can perform necessary administrative tasks on objects created by users with the custom role.
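
A sketch of the pattern (the role name is illustrative):

  -- Custom roles are typically created by USERADMIN or SECURITYADMIN
  USE ROLE USERADMIN;
  CREATE ROLE analyst_role;

  -- Best practice: attach the custom role to the SYSADMIN hierarchy
  GRANT ROLE analyst_role TO ROLE SYSADMIN;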

[COF-C02] SnowPro Core Certification Exam Study Guide

Section 1.3 - SnowPro Core Certification Study Guide

Which of the following Snowflake objects can be shared using a secure share? (Select TWO).

A. Materialized views
B. Sequences
C. Procedures
D. Tables
E. Secure User Defined Functions (UDFs)
Suggested answer: D, E

Explanation:

Secure sharing in Snowflake allows users to share specific objects with other Snowflake accounts without physically copying the data, thus not consuming additional storage. Tables and Secure User Defined Functions (UDFs) are among the objects that can be shared using this feature. Materialized views, sequences, and procedures are not shareable objects in Snowflake.
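
A minimal sketch (the share, database, table, and function names are illustrative, and the consumer account locator is a placeholder):

  -- Create a share and expose a table and a secure UDF through it
  CREATE SHARE sales_share;
  GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
  GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
  GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
  GRANT USAGE ON FUNCTION sales_db.public.mask_email(VARCHAR) TO SHARE sales_share;

  -- Make the share visible to a consumer account
  ALTER SHARE sales_share ADD ACCOUNTS = xy12345;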

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Secure Data Sharing

Will data cached in a warehouse be lost when the warehouse is resized?

A. Possibly, if the warehouse is resized to a smaller size and the cache no longer fits.
B. Yes, because the compute resource is replaced in its entirety with a new compute resource.
C. No, because the size of the cache is independent from the warehouse size.
D. Yes, because the new compute resource will no longer have access to the cache encryption key.
Suggested answer: A

Explanation:

Resizing a Snowflake virtual warehouse can cause cached data to be lost. When a warehouse is resized to a smaller size, compute resources are removed from it, and the local disk cache associated with those removed resources is dropped. Scaling up does not discard the cache held by the existing resources, but scaling down can, so cached data is not guaranteed to survive a resize.
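
For example (the warehouse name is illustrative):

  -- Scaling up: existing resources keep their cache; new ones start cold
  ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';

  -- Scaling down: removed resources take their local cache with them
  ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'SMALL';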

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Virtual Warehouse Performance

What happens when a virtual warehouse is resized?

A. When increasing the size of an active warehouse, the compute resources for all running and queued queries on the warehouse are affected.
B. When reducing the size of a warehouse, the compute resources are removed only when they are no longer being used to execute any current statements.
C. The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
D. Users who are trying to use the warehouse will receive an error message until the resizing is complete.
Suggested answer: B

Explanation:

When a running warehouse is resized to a smaller size, the compute resources are removed only when they are no longer being used to execute any current statements. Increasing the size of a running warehouse, by contrast, does not affect statements that are already executing; the additional compute resources are used only for queued and new queries. The warehouse is not suspended during the resize, and users do not receive errors.
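
For instance (the warehouse name is illustrative), a resize statement returns immediately while in-flight statements finish on their current resources:

  ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = XSMALL;

  -- The warehouse stays available throughout; no error is raised to users
  SHOW WAREHOUSES LIKE 'etl_wh';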

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Virtual Warehouses

A developer is granted ownership of a table that has a masking policy. The developer's role is not able to see the masked data. Will the developer be able to modify the table to read the masked data?

A. Yes, because a table owner has full control and can unset masking policies.
B. Yes, because masking policies only apply to cloned tables.
C. No, because masking policies must always reference specific access roles.
D. No, because ownership of a table does not include the ability to change masking policies.
Suggested answer: D

Explanation:

Even if a developer is granted ownership of a table with a masking policy, they will not be able to modify the table to read the masked data if their role does not have the necessary permissions. Ownership of a table does not automatically confer the ability to alter or unset masking policies, which are designed to protect sensitive data. Masking policies are schema-level objects, and changing them requires separate privileges, such as the global APPLY MASKING POLICY privilege or ownership of the policy itself.
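
A sketch of a typical policy (the policy, table, column, and role names are illustrative):

  -- Only the ANALYST role sees the raw value; everyone else sees a mask
  CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
    CASE
      WHEN CURRENT_ROLE() = 'ANALYST' THEN val
      ELSE '*** MASKED ***'
    END;

  -- Applying or unsetting the policy requires the APPLY MASKING POLICY
  -- privilege (or ownership of the policy), not ownership of the table
  ALTER TABLE users MODIFY COLUMN email SET MASKING POLICY email_mask;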

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Masking Policies

Which of the following describes how clustering keys work in Snowflake?

A. Clustering keys update the micro-partitions in place with a full sort, and impact the DML operations.
B. Clustering keys sort the designated columns over time, without blocking DML operations.
C. Clustering keys create a distributed, parallel data structure of pointers to a table's rows and columns.
D. Clustering keys establish a hashed key on each node of a virtual warehouse to optimize joins at run-time.
Suggested answer: B

Explanation:

Clustering keys in Snowflake work by sorting the designated columns over time. This process is done in the background and does not block data manipulation language (DML) operations, allowing normal database operations to continue without interruption. The purpose of clustering keys is to organize the data within micro-partitions to optimize query performance.
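
For example (the table and column names are illustrative), a clustering key is declared once and then maintained by the background service:

  ALTER TABLE events CLUSTER BY (event_date, region);

  -- Inspect how well the table is clustered on those columns
  SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date, region)');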

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Clustering

What is a machine learning and data science partner within the Snowflake Partner Ecosystem?

A. Informatica
B. Power BI
C. Adobe
D. DataRobot
Suggested answer: D

Explanation:

DataRobot is recognized as a machine learning and data science partner within the Snowflake Partner Ecosystem. It provides an enterprise AI platform that enables users to build and deploy accurate predictive models quickly. As a partner, DataRobot integrates with Snowflake to enhance data science capabilities.

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Machine Learning & Data Science Partners

https://docs.snowflake.com/en/user-guide/ecosystem-analytics.html

Which of the following is a valid source for an external stage when the Snowflake account is located on Microsoft Azure?

A. An FTP server with TLS encryption
B. An HTTPS server with WebDAV
C. A Google Cloud storage bucket
D. A Windows server file share on Azure
Suggested answer: C

Explanation:

Snowflake external stages can reference only cloud object storage: Amazon S3 buckets, Google Cloud Storage buckets, and Microsoft Azure containers (Azure Blob storage, accessible via the azure:// endpoint). This is true regardless of the cloud platform hosting the Snowflake account, so an account located on Azure can still use a Google Cloud storage bucket as an external stage source. FTP servers, WebDAV servers, and Windows file shares are not supported as external stage sources.

[COF-C02] SnowPro Core Certification Exam Study Guide
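
A minimal sketch (the stage names, URLs, storage integration, and SAS token are placeholders):

  -- External stage over an Azure container, via the azure:// endpoint
  CREATE STAGE azure_stage
    URL = 'azure://myaccount.blob.core.windows.net/mycontainer/path/'
    CREDENTIALS = (AZURE_SAS_TOKEN = '<sas_token>');

  -- A Google Cloud Storage bucket is equally valid, even on an Azure account
  CREATE STAGE gcs_stage
    URL = 'gcs://mybucket/path/'
    STORAGE_INTEGRATION = gcs_int;  -- assumes an existing storage integration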
