
Snowflake SnowPro Core Practice Test - Questions Answers, Page 43


When working with a managed access schema, who has the OWNERSHIP privilege of any tables added to the schema?

A. The database owner
B. The object owner
C. The schema owner
D. The Snowflake user's role
Suggested answer: C

Explanation:

In a managed access schema, the schema owner retains the OWNERSHIP privilege of any tables added to the schema. This means that while object owners have certain privileges over the objects they create, only the schema owner (or a role with the MANAGE GRANTS privilege) can manage privilege grants on these objects.
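As a minimal sketch (using hypothetical database, schema, table, and role names), a managed access schema is created with the WITH MANAGED ACCESS clause, after which grant management is centralized:

```sql
-- Create a managed access schema (hypothetical names)
CREATE SCHEMA my_db.reporting WITH MANAGED ACCESS;

-- Grants on objects in the schema must be issued by the schema owner's role
-- (or a role with MANAGE GRANTS); object creators cannot grant privileges
-- on objects they create here
GRANT SELECT ON TABLE my_db.reporting.sales TO ROLE analyst_role;
```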

When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?

A. 1 hour
B. 2 hours
C. 4 hours
D. 8 hours
Suggested answer: C

Explanation:

When using the ALLOW_CLIENT_MFA_CACHING parameter, a cached Multi-Factor Authentication (MFA) token is valid for up to 4 hours. This allows for continuous, secure connectivity without users needing to respond to an MFA prompt at the start of each connection attempt to Snowflake within this timeframe.
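The parameter is set at the account level; a sketch of enabling and verifying it (requires an appropriately privileged role such as ACCOUNTADMIN):

```sql
-- Enable MFA token caching for the account
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;

-- Verify the current setting
SHOW PARAMETERS LIKE 'ALLOW_CLIENT_MFA_CACHING' IN ACCOUNT;
```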

What factors impact storage costs in Snowflake? (Select TWO).

A. The account type
B. The storage file format
C. The cloud region used by the account
D. The type of data being stored
E. The cloud platform being used
Suggested answer: A, C

Explanation:

The factors that impact storage costs in Snowflake are the account type (Capacity or On Demand) and the cloud region used by the account. These factors determine the rate at which storage is billed, with different regions potentially having different rates.

Which Snowflake role can manage any object grant globally, including modifying and revoking grants?

A. USERADMIN
B. ORGADMIN
C. SYSADMIN
D. SECURITYADMIN
Suggested answer: D

Explanation:

The SECURITYADMIN role in Snowflake can manage any object grant globally, including modifying and revoking grants. This role holds the MANAGE GRANTS privilege, giving it the ability to oversee and control access to all securable objects within the Snowflake environment.
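A short sketch of what this looks like in practice (hypothetical object and role names): because SECURITYADMIN holds MANAGE GRANTS, it can modify or revoke grants it did not originally create:

```sql
USE ROLE SECURITYADMIN;

-- Revoke a grant made earlier by some other role
REVOKE SELECT ON TABLE my_db.my_schema.orders FROM ROLE analyst_role;

-- Issue a new grant on an object it does not own
GRANT USAGE ON SCHEMA my_db.my_schema TO ROLE reporting_role;
```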

Which statistics are displayed in a Query Profile that indicate that intermediate results do not fit in memory? (Select TWO).

A. Bytes scanned
B. Partitions scanned
C. Bytes spilled to local storage
D. Bytes spilled to remote storage
E. Percentage scanned from cache
Suggested answer: C, D

Explanation:

The Query Profile statistics that indicate intermediate results do not fit in memory are the bytes spilled to local storage and the bytes spilled to remote storage. Spilling occurs when a warehouse exhausts its memory during query processing: intermediate results are written first to the local disk of the warehouse nodes and then, if local disk is also exhausted, to remote storage, which is considerably slower.
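Beyond the Query Profile UI, the same spill counters can be queried in bulk. A sketch, assuming access to the SNOWFLAKE database (the ACCOUNT_USAGE.QUERY_HISTORY view exposes both columns; the 7-day window is an arbitrary example):

```sql
-- Find recent queries that spilled intermediate results to disk
SELECT query_id,
       warehouse_name,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND (bytes_spilled_to_local_storage > 0
       OR bytes_spilled_to_remote_storage > 0)
ORDER BY bytes_spilled_to_remote_storage DESC;
```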

How can a Snowflake user validate data that is unloaded using the COPY INTO <location> command?

A. Load the data into a CSV file.
B. Load the data into a relational table.
C. Use the VALIDATION_MODE SQL statement.
D. Use the validation mode = return rows statement.
Suggested answer: C

Explanation:

To validate data unloaded using the COPY INTO <location> command, a Snowflake user can add the VALIDATION_MODE = RETURN_ROWS parameter to the statement. Instead of unloading any files, the command then returns the rows that would be unloaded, allowing the results to be inspected before performing the actual unload.
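A minimal sketch, assuming a stage named my_stage and a table named orders (both hypothetical):

```sql
-- Preview the rows that COPY INTO <location> would unload,
-- without writing any files to the stage
COPY INTO @my_stage/unload/
FROM my_db.my_schema.orders
VALIDATION_MODE = RETURN_ROWS;
```

Once the returned rows look correct, the same statement without VALIDATION_MODE performs the actual unload.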

At what level is the MIN_DATA_RETENTION_TIME_IN_DAYS parameter set?

A. Account
B. Database
C. Schema
D. Table
Suggested answer: A

Explanation:

The MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the account level. It establishes a minimum number of days that Snowflake retains historical data for Time Travel operations; the effective retention period for an object is the larger of this value and the object's own DATA_RETENTION_TIME_IN_DAYS setting.
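A sketch of setting the floor (requires ACCOUNTADMIN; the value 5 is an arbitrary example):

```sql
-- Enforce a minimum Time Travel retention of 5 days account-wide
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 5;

-- Verify the current setting
SHOW PARAMETERS LIKE 'MIN_DATA_RETENTION_TIME_IN_DAYS' IN ACCOUNT;
```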

Which task is supported by the use of Access History in Snowflake?

A. Data backups
B. Cost monitoring
C. Compliance auditing
D. Performance optimization
Suggested answer: C

Explanation:

Access History in Snowflake is primarily utilized for compliance auditing. The Access History feature provides detailed logs that track data access and modifications, including queries that read from or write to database objects. This information is crucial for organizations to meet regulatory requirements and to perform audits related to data access and usage.

Role of Access History: Access History logs are designed to help organizations understand who accessed what data and when. This is particularly important for compliance with various regulations that require detailed auditing capabilities.

How Access History Supports Compliance Auditing:

By providing a detailed log of access events, organizations can trace data access patterns, identify unauthorized access, and ensure that data handling complies with relevant data protection laws and regulations.

Access History can be queried to extract specific events, users, time frames, and accessed objects, making it an invaluable tool for compliance officers and auditors.
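A sketch of such an audit query, assuming access to the SNOWFLAKE database (the user name and 30-day window are hypothetical examples):

```sql
-- Audit which objects a given user read or wrote in the last 30 days
SELECT query_start_time,
       user_name,
       direct_objects_accessed
FROM snowflake.account_usage.access_history
WHERE user_name = 'JSMITH'
  AND query_start_time > DATEADD('day', -30, CURRENT_TIMESTAMP())
ORDER BY query_start_time DESC;
```

The DIRECT_OBJECTS_ACCESSED column is a JSON array naming the tables and columns the query touched, which is what makes the view suitable for fine-grained audit trails.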

Which feature of Snowflake's Continuous Data Protection (CDP) has associated costs?

A. Fail-safe
B. Network policies
C. End-to-end encryption
D. Multi-Factor Authentication (MFA)
Suggested answer: A

Explanation:

Snowflake's Continuous Data Protection (CDP) features encompass several mechanisms designed to protect data and ensure its availability and recoverability. Among these features, the one that has associated costs is Fail-safe.

Fail-safe is an additional layer of protection that kicks in after the Time Travel period expires. While Time Travel allows users to access historical data within a defined retention period (which can vary from 1 to 90 days depending on the Snowflake edition), Fail-safe provides a further, non-configurable 7-day period during which Snowflake retains the data. This period is primarily intended for Snowflake's internal operations to recover data in the event of extreme scenarios, such as significant operational failures, and is not directly accessible by customers for data recovery purposes.

The costs associated with Fail-safe arise because Snowflake continues to store the data beyond the customer-specified Time Travel period, thereby incurring additional storage costs. It's important to note that while users do not incur direct costs for enabling Fail-safe (it is an automatic feature of Snowflake), the extended storage of data during this period contributes to overall storage costs.

Snowflake Documentation on Continuous Data Protection: Continuous Data Protection (CDP)

Snowflake Documentation on Fail-safe: Understanding Fail-safe
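The storage attributable to Fail-safe can be measured directly. A sketch, assuming access to the SNOWFLAKE database:

```sql
-- Inspect how much storage each table is consuming in Fail-safe
SELECT table_catalog,
       table_schema,
       table_name,
       failsafe_bytes
FROM snowflake.account_usage.table_storage_metrics
WHERE failsafe_bytes > 0
ORDER BY failsafe_bytes DESC;
```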

What command is used to export or unload data from Snowflake?

A. PUT @mystage
B. GET @mystage
C. COPY INTO @mystage
D. INSERT @mystage
Suggested answer: C

Explanation:

The command used to export or unload data from Snowflake to a stage (an internal stage, or an external stage backed by S3, Azure Blob Storage, or Google Cloud Storage) is COPY INTO @mystage. The COPY INTO <location> command writes the contents of a table, or the result of a query, into one or more files in the specified stage. This functionality is critical for scenarios where data needs to be extracted from Snowflake for use in external systems, backups, or further processing.

The syntax follows the structure: COPY INTO @<stage_name> FROM <table_name>, optionally with a FILE_FORMAT clause controlling the format of the output files.

It's important to distinguish the related data-movement commands. The PUT command uploads files from a local file system to a stage, which is a preparatory step for loading data into Snowflake, not for exporting it. The GET command downloads files from a stage to the local file system, and is typically used to retrieve the files produced by COPY INTO <location>.

Snowflake Documentation on Loading and Unloading Data: [Loading and Unloading Data](https://docs.snowflake.com/en/user-guide/data-load
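A sketch of a full unload-and-download flow, assuming a stage named my_stage, a table named orders, and a local path (all hypothetical):

```sql
-- Unload the table to an internal stage as gzip-compressed CSV files
COPY INTO @my_stage/unload/
FROM my_db.my_schema.orders
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);

-- Then, from SnowSQL on the client machine, download the files locally
GET @my_stage/unload/ file:///tmp/exports/;
```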

Total 627 questions