
Snowflake SnowPro Core Practice Test - Questions Answers, Page 2


Which data type can be used to store geospatial data in Snowflake?

A. Variant
B. Object
C. Geometry
D. Geography
Suggested answer: D

Explanation:

Snowflake supports two geospatial data types: GEOGRAPHY and GEOMETRY. The GEOGRAPHY data type models the Earth as a perfect sphere, follows the WGS 84 standard, and stores points, lines, and polygons on the Earth's surface, making it suitable for global geospatial data. The GEOMETRY data type, on the other hand, represents features in a planar (Euclidean, Cartesian) coordinate system and is typically used with local spatial reference systems. Since the question asks about geospatial data, which commonly refers to Earth-related spatial data, the correct answer is GEOGRAPHY. Reference: [COF-C02] SnowPro Core Certification Exam Study Guide
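
As a minimal sketch (table and column names are hypothetical), a GEOGRAPHY column can be created and populated from well-known text (WKT):

    -- Store and read back a point on the Earth's surface
    CREATE OR REPLACE TABLE city_points (name STRING, location GEOGRAPHY);
    INSERT INTO city_points
      SELECT 'San Francisco', TO_GEOGRAPHY('POINT(-122.4194 37.7749)');
    SELECT name, ST_X(location) AS longitude, ST_Y(location) AS latitude
    FROM city_points;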

What can be used to view warehouse usage over time? (Select Two).

A. The LOAD_HISTORY view
B. The QUERY_HISTORY view
C. The SHOW WAREHOUSES command
D. The WAREHOUSE_METERING_HISTORY view
E. The Billing and Usage tab in the Snowflake web UI
Suggested answer: B, D

Explanation:

To view warehouse usage over time, the QUERY_HISTORY view and the WAREHOUSE_METERING_HISTORY view can be used. The QUERY_HISTORY view lets users inspect the queries executed against their warehouses, and therefore the load on those warehouses, over a specified period. The WAREHOUSE_METERING_HISTORY view reports the hourly credit usage of a warehouse within a specified date range. Reference: [COF-C02] SnowPro Core Certification Exam Study Guide
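
As a minimal sketch (the 30-day window is illustrative), the ACCOUNT_USAGE version of the metering view can be aggregated like this:

    -- Daily credit consumption per warehouse over the last 30 days
    SELECT warehouse_name,
           DATE_TRUNC('day', start_time) AS usage_day,
           SUM(credits_used)             AS total_credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name, usage_day
    ORDER BY usage_day;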

Which Snowflake partner specializes in data catalog solutions?

A. Alation
B. DataRobot
C. dbt
D. Tableau
Suggested answer: A

Explanation:

Alation is known for specializing in data catalog solutions and is a partner of Snowflake. Data catalog solutions are essential for organizations to effectively manage their metadata and make it easily accessible and understandable for users, which aligns with the capabilities provided by Alation.

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake's official documentation and partner listings

What is the MOST performant file format for loading data in Snowflake?

A. CSV (Unzipped)
B. Parquet
C. CSV (Gzipped)
D. ORC
Suggested answer: B

Explanation:

Parquet is a columnar storage file format that is optimized for performance in Snowflake. It is designed to be efficient for both storage and query performance, particularly for complex queries on large datasets. Parquet files support efficient compression and encoding schemes, which can lead to significant savings in storage and speed in query processing, making it the most performant file format for loading data into Snowflake.

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Loading
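
As an illustrative sketch (stage, table, and format names are hypothetical), loading Parquet data typically looks like this:

    -- Define a Parquet file format and load from a stage
    CREATE OR REPLACE FILE FORMAT my_parquet_format TYPE = PARQUET;
    COPY INTO my_table
    FROM @my_stage/data/
    FILE_FORMAT = (FORMAT_NAME = 'my_parquet_format')
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;  -- map Parquet columns to table columns by name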

Which COPY INTO command option outputs the data into one file?

A. SINGLE=TRUE
B. MAX_FILE_NUMBER=1
C. FILE_NUMBER=1
D. MULTIPLE=FALSE
Suggested answer: A

Explanation:

The COPY INTO <location> command outputs data into a single file when the SINGLE copy option is set to TRUE. By default (SINGLE = FALSE), Snowflake unloads data in parallel into multiple files; SINGLE = TRUE forces all the unloaded data into one output file. None of the other listed options achieves this.

[COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Unloading
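
A minimal sketch (stage and table names are hypothetical) of unloading a table into one file:

    -- Unload an entire table into a single gzipped CSV file
    COPY INTO @my_stage/unload/result.csv.gz
    FROM my_table
    FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
    SINGLE = TRUE
    MAX_FILE_SIZE = 4900000000;  -- optional: raise the size cap for the single file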

The fail-safe retention period is how many days?

A. 1 day
B. 7 days
C. 45 days
D. 90 days
Suggested answer: B

Explanation:

Fail-safe is a feature in Snowflake that provides an additional layer of data protection. After the Time Travel retention period ends, Fail-safe offers a non-configurable 7-day period (for permanent tables) during which historical data may be recoverable by Snowflake. This period is designed to protect against accidental data loss and is not intended for customer access; recovery requires contacting Snowflake Support.
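
Time Travel retention is configurable per object, whereas Fail-safe is fixed. For example (table name hypothetical):

    -- Time Travel retention can be changed (0-90 days on Enterprise edition)
    ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 1;
    -- Fail-safe is always 7 days for permanent tables; no parameter controls it.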

True or False: A 4X-Large Warehouse may, at times, take longer to provision than a X-Small Warehouse.

A. True
B. False
Suggested answer: A

Explanation:

Provisioning time can vary with warehouse size. A 4X-Large warehouse requires far more compute resources than an X-Small warehouse, so it may at times take longer to provision, whereas an X-Small warehouse can generally be provisioned almost immediately. Reference: [COF-C02] SnowPro Core Certification Exam Study Guide
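
For context, warehouse size is chosen at creation (and can be changed with ALTER WAREHOUSE); names below are illustrative:

    -- Two warehouses at opposite ends of the size range
    CREATE WAREHOUSE wh_small WAREHOUSE_SIZE = 'X-SMALL'  AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
    CREATE WAREHOUSE wh_huge  WAREHOUSE_SIZE = '4X-LARGE' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;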

How would you determine the size of the virtual warehouse used for a task?

A. Root tasks may be executed concurrently (i.e., multiple instances); it is recommended to leave some margin in the execution window to avoid missing instances of execution
B. Querying (SELECT) the size of the stream content would help determine the warehouse size; for example, if querying large stream content, use a larger warehouse size
C. If using a stored procedure to execute multiple SQL statements, it is best to test run the stored procedure separately to size the compute resource first
D. Since the task infrastructure is based on running the task body on a schedule, it is recommended to configure the virtual warehouse for automatic concurrency handling using a multi-cluster warehouse (MCW) to match the task schedule
Suggested answer: D

Explanation:

The size of the virtual warehouse for a task can be configured to handle concurrency automatically using a Multi-cluster warehouse (MCW). This is because tasks are designed to run their body on a schedule, and MCW allows for scaling compute resources to match the task's execution needs without manual intervention.Reference: [COF-C02] SnowPro Core Certification Exam Study Guide
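
A minimal sketch (all names and the schedule are hypothetical) of pairing a task with a multi-cluster warehouse:

    -- Multi-cluster warehouse scales out automatically under concurrent load
    CREATE WAREHOUSE task_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3
      SCALING_POLICY = 'STANDARD';

    CREATE TASK refresh_summary
      WAREHOUSE = task_wh
      SCHEDULE = '15 MINUTE'
    AS
      INSERT INTO daily_summary SELECT CURRENT_DATE, COUNT(*) FROM source_table;

    ALTER TASK refresh_summary RESUME;  -- tasks are created in a suspended state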

The Information Schema and Account Usage Share provide storage information for which of the following objects? (Choose three.)

A. Users
B. Tables
C. Databases
D. Internal Stages
Suggested answer: B, C, D

Explanation:

The Information Schema and Account Usage Share in Snowflake provide metadata and historical usage data for various objects within a Snowflake account. Specifically, they offer storage information for Tables, Databases, and Internal Stages. These schemas contain views and table functions that allow users to query object metadata and usage metrics, such as the amount of data stored and historical activity.

Tables: The storage information includes data on the daily average amount of data in database tables.

Databases: For databases, the storage usage is calculated based on all the data contained within the database, including tables and stages.

Internal Stages: Internal stages are locations within Snowflake for temporarily storing data, and their storage usage is also tracked.
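
As a sketch of where this storage information surfaces (the seven-day filter is illustrative):

    -- Daily average storage per database (Fail-safe bytes tracked separately)
    SELECT usage_date, database_name, average_database_bytes, average_failsafe_bytes
    FROM snowflake.account_usage.database_storage_usage_history
    WHERE usage_date >= DATEADD('day', -7, CURRENT_DATE());

    -- Daily average internal stage storage
    SELECT usage_date, average_stage_bytes
    FROM snowflake.account_usage.stage_storage_usage_history
    WHERE usage_date >= DATEADD('day', -7, CURRENT_DATE());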

What is the default File Format used in the COPY command if one is not specified?

A. CSV
B. JSON
C. Parquet
D. XML
Suggested answer: A

Explanation:

The default file format for the COPY command in Snowflake, when not specified, is CSV (Comma-Separated Values). This format is widely used for data exchange because it is simple, easy to read, and supported by many data analysis tools.
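
For instance (stage and table names are hypothetical), this COPY statement parses the staged files as CSV because no file format is specified:

    -- Uses the default CSV file format with default options
    -- (comma field delimiter, newline record delimiter)
    COPY INTO my_table FROM @my_stage/data/;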
