Snowflake ARA-C01 Practice Test - Questions Answers

Which system functions does Snowflake provide to monitor clustering information within a table? (Choose two.)

A. SYSTEM$CLUSTERING_INFORMATION
B. SYSTEM$CLUSTERING_USAGE
C. SYSTEM$CLUSTERING_DEPTH
D. SYSTEM$CLUSTERING_KEYS
E. SYSTEM$CLUSTERING_PERCENT
Suggested answer: A, C

Explanation:

According to the Snowflake documentation, these are the two system functions Snowflake provides to monitor clustering information within a table. A system function is a built-in function that performs actions in, or returns information about, the system. A clustering key organizes data across micro-partitions based on one or more columns in the table, and good clustering can improve query performance by reducing the number of micro-partitions scanned.

SYSTEM$CLUSTERING_INFORMATION is a system function that returns clustering information, including average clustering depth, for a table based on one or more columns in the table. The function takes a table name and an optional column name or expression as arguments, and returns a JSON string with the clustering information. The output includes the cluster-by keys, the total partition count, the total constant partition count, the average overlaps, and the average depth [1].

SYSTEM$CLUSTERING_DEPTH is a system function that returns the average clustering depth for a table based on one or more columns in the table. The function takes a table name and an optional column name or expression as arguments, and returns a numeric value. The clustering depth measures how many micro-partitions overlap for a given set of values; a lower clustering depth indicates better clustering [2].
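
As a quick illustration, both functions are invoked with SELECT. A minimal sketch, assuming a table named SALES clustered on column SALE_DATE (names are hypothetical):

SELECT SYSTEM$CLUSTERING_INFORMATION('SALES', '(SALE_DATE)');
SELECT SYSTEM$CLUSTERING_DEPTH('SALES', '(SALE_DATE)');

The first call returns a JSON document with the partition counts and overlap statistics; the second returns only the depth value.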

1. SYSTEM$CLUSTERING_INFORMATION | Snowflake Documentation

2. SYSTEM$CLUSTERING_DEPTH | Snowflake Documentation

A company has a table named Data that contains corrupted data. The company wants to recover the data as it was 5 minutes ago using cloning and Time Travel.

What command will accomplish this?

A. CREATE CLONE TABLE Recover_Data FROM Data AT(OFFSET => -60*5);
B. CREATE CLONE Recover_Data FROM Data AT(OFFSET => -60*5);
C. CREATE TABLE Recover_Data CLONE Data AT(OFFSET => -60*5);
D. CREATE TABLE Recover Data CLONE Data AT(TIME => -60*5);
Suggested answer: C

Explanation:

This is the correct command to create a clone of the table Data as it was 5 minutes ago using cloning and Time Travel. Cloning is a feature that allows creating a copy of a database, schema, table, or view without duplicating the data or metadata. Time Travel is a feature that enables accessing historical data (i.e. data that has been changed or deleted) at any point within a defined period. To create a clone of a table at a point in time in the past, the syntax is:

CREATE TABLE <clone_name> CLONE <source_table> AT (OFFSET => <offset_in_seconds>);

The OFFSET parameter specifies the time difference in seconds from the present time. A negative value indicates a point in the past; for example, -60*5 means 5 minutes ago. Alternatively, the TIMESTAMP parameter can be used to specify an exact timestamp in the past. The clone will contain the data as it existed in the source table at the specified point in time [1][2].
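
For example, the statement in the correct answer, together with the equivalent TIMESTAMP form (the timestamp literal shown is a hypothetical placeholder):

CREATE TABLE Recover_Data CLONE Data AT(OFFSET => -60*5);

CREATE TABLE Recover_Data CLONE Data AT(TIMESTAMP => '2024-05-01 16:20:00'::TIMESTAMP_LTZ);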

1. Snowflake Documentation: Cloning Objects

2. Snowflake Documentation: Cloning Objects at a Point in Time in the Past

A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.

Which actions can the company take with the inbound share? (Choose two.)

A. Clone a table from a share.
B. Grant modify permissions on the share.
C. Create a table from the shared database.
D. Create additional views inside the shared database.
E. Create a table stream on the shared table.
Suggested answer: A, D

Explanation:

These two actions are possible with an inbound share, according to the Snowflake documentation. An inbound share is a share created by another Snowflake account (the provider) and imported into your account (the consumer). An inbound share allows you to access the data shared by the provider, but not to modify or delete it. However, you can perform some actions with the inbound share, such as:

Clone a table from a share. You can create a copy of a table from an inbound share using the CREATE TABLE ... CLONE statement. The clone will contain the same data and metadata as the original table, but it will be independent of the share. You can modify or delete the clone as you wish, but it will not reflect any changes made to the original table by the provider [1].

Create additional views inside the shared database. You can create views on the tables or views from an inbound share using the CREATE VIEW statement. The views will be stored in the shared database, but they will be owned by your account. You can query the views as you would query any other view in your account, but you cannot modify or delete the underlying objects from the share [2].

The other actions listed are not possible with an inbound share, because they would require modifying the share or the shared objects, which are read-only for the consumer. You cannot grant modify permissions on the share, create a table from the shared database, or create a table stream on the shared table [3][4].
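
As a minimal, hedged sketch of consuming an inbound share in a pipeline, assuming the share is mounted as database PARTNER_DB and a local database ANALYTICS exists (all object names here are hypothetical):

CREATE VIEW analytics.public.v_orders AS
  SELECT order_id, order_date, amount
  FROM partner_db.public.orders;

CREATE TABLE analytics.public.orders_snapshot AS
  SELECT * FROM partner_db.public.orders;

Note that the view here is created in the consumer's own database, which is the unambiguously supported pattern for wrapping shared objects; the CTAS statement is shown as a related pattern for materializing a local, modifiable copy of shared data.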

1. Cloning Objects from a Share | Snowflake Documentation

2. Creating Views on Shared Data | Snowflake Documentation

3. Importing Data from a Share | Snowflake Documentation

4. Streams on Shared Tables | Snowflake Documentation

A Snowflake Architect is designing an application and tenancy strategy for an organization where strong legal isolation rules as well as multi-tenancy are requirements.

Which approach will meet these requirements if Role-Based Access Policies (RBAC) is a viable option for isolating tenants?

A. Create accounts for each tenant in the Snowflake organization.
B. Create an object for each tenant strategy if row level security is viable for isolating tenants.
C. Create an object for each tenant strategy if row level security is not viable for isolating tenants.
D. Create a multi-tenant table strategy if row level security is not viable for isolating tenants.
Suggested answer: A

Explanation:

This approach meets the requirements of strong legal isolation and multi-tenancy. By creating separate accounts for each tenant, the application can ensure that each tenant has its own dedicated storage, compute, and metadata resources, as well as its own encryption keys and security policies. This provides the highest level of isolation and data protection among the tenancy models. Furthermore, by creating the accounts within the same Snowflake organization, the application can leverage the features of Snowflake Organizations, such as centralized billing, account management, and cross-account data sharing.
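
A minimal sketch of the account-per-tenant pattern using the ORGADMIN role; the tenant name, admin credentials, edition, and region are hypothetical placeholders:

USE ROLE ORGADMIN;

CREATE ACCOUNT tenant_acme
  ADMIN_NAME = acme_admin
  ADMIN_PASSWORD = '<strong password>'
  EMAIL = 'admin@acme.example'
  EDITION = BUSINESS_CRITICAL
  REGION = aws_us_west_2;

SHOW ORGANIZATION ACCOUNTS;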

Snowflake Organizations Overview | Snowflake Documentation

Design Patterns for Building Multi-Tenant Applications on Snowflake

Which statements describe characteristics of the use of materialized views in Snowflake? (Choose two.)

A. They can include ORDER BY clauses.
B. They cannot include nested subqueries.
C. They can include context functions, such as CURRENT_TIME().
D. They can support MIN and MAX aggregates.
E. They can support inner joins, but not outer joins.
Suggested answer: B, D

Explanation:

According to the Snowflake documentation, materialized views have some limitations on the query specification that defines them. One of these limitations is that they cannot include nested subqueries, such as subqueries in the FROM clause or scalar subqueries in the SELECT list. Another limitation is that they cannot include ORDER BY clauses, context functions (such as CURRENT_TIME()), or outer joins. However, materialized views can support MIN and MAX aggregates, as well as other aggregate functions, such as SUM, COUNT, and AVG.
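
For illustration, a materialized view that stays within these limitations (single table, MIN/MAX aggregates, no ORDER BY, no subqueries, no context functions); the table and column names are hypothetical:

CREATE MATERIALIZED VIEW mv_price_range AS
  SELECT product_id,
         MIN(price) AS min_price,
         MAX(price) AS max_price
  FROM sales
  GROUP BY product_id;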

Limitations on Creating Materialized Views | Snowflake Documentation

Working with Materialized Views | Snowflake Documentation

The Data Engineering team at a large manufacturing company needs to engineer data coming from many sources to support a wide variety of use cases and data consumer requirements which include:

1) Finance and Vendor Management team members who require reporting and visualization

2) Data Science team members who require access to raw data for ML model development

3) Sales team members who require engineered and protected data for data monetization

What Snowflake data modeling approaches will meet these requirements? (Choose two.)

A. Consolidate data in the company's data lake and use EXTERNAL TABLES.
B. Create a raw database for landing and persisting raw data entering the data pipelines.
C. Create a set of profile-specific databases that aligns data with usage patterns.
D. Create a single star schema in a single database to support all consumers' requirements.
E. Create a Data Vault as the sole data pipeline endpoint and have all consumers directly access the Vault.
Suggested answer: B, C

Explanation:

These two approaches are recommended by Snowflake for data modeling in a data lake scenario. Creating a raw database allows the data engineering team to ingest data from various sources without any transformation or cleansing, preserving the original data quality and format. This enables the data science team to access the raw data for ML model development. Creating a set of profile-specific databases allows the data engineering team to apply different transformations and optimizations for different use cases and data consumer requirements. For example, the finance and vendor management team can access a dimensional database that supports reporting and visualization, while the sales team can access a secure database that supports data monetization.
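
A minimal sketch of this layout, with one landing database and profile-specific databases aligned to each consumer group; all database and role names are hypothetical:

CREATE DATABASE raw_db;      -- landing zone; raw data for ML model development
CREATE DATABASE finance_db;  -- engineered models for reporting and visualization
CREATE DATABASE sales_db;    -- engineered, protected data for monetization

GRANT USAGE ON DATABASE raw_db TO ROLE data_science_role;
GRANT USAGE ON DATABASE finance_db TO ROLE finance_role;
GRANT USAGE ON DATABASE sales_db TO ROLE sales_role;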

Snowflake Data Lake Architecture | Snowflake Documentation

Snowflake Data Lake Best Practices | Snowflake Documentation

An Architect on a new project has been asked to design an architecture that meets Snowflake security, compliance, and governance requirements as follows:

1) Use Tri-Secret Secure in Snowflake

2) Share some information stored in a view with another Snowflake customer

3) Hide portions of sensitive information from some columns

4) Use zero-copy cloning to refresh the non-production environment from the production environment

To meet these requirements, which design elements must be implemented? (Choose three.)

A. Define row access policies.
B. Use the Business-Critical edition of Snowflake.
C. Create a secure view.
D. Use the Enterprise edition of Snowflake.
E. Use Dynamic Data Masking.
F. Create a materialized view.
Suggested answer: B, C, E

Explanation:

These three design elements are required to meet the security, compliance, and governance requirements for the project.

To use Tri-Secret Secure in Snowflake, the Business Critical edition of Snowflake is required. This edition provides enhanced data protection features, such as customer-managed encryption keys, that are not available in lower editions. Tri-Secret Secure is a feature that combines a Snowflake-maintained key and a customer-managed key to create a composite master key to encrypt the data in Snowflake.

To share some information stored in a view with another Snowflake customer, a secure view is recommended. A secure view is a view that hides the underlying data and the view definition from unauthorized users. Only the owner of the view and the users who are granted the owner's role can see the view definition and the data in the base tables of the view. A secure view can be shared with another Snowflake account using a data share.

To hide portions of sensitive information from some columns, Dynamic Data Masking can be used. Dynamic Data Masking is a feature that applies masking policies to columns to selectively mask plain-text data at query time. Depending on the masking policy conditions and the user's role, the data can be fully or partially masked, or shown as plain text.
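
A minimal sketch of the secure view and Dynamic Data Masking elements; the table, column, role, and policy names are hypothetical, and the masking condition is illustrative only:

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('FINANCE_ADMIN') THEN val
    ELSE REGEXP_REPLACE(val, '.+@', '*****@')  -- hide the local part, keep the domain
  END;

ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;

CREATE SECURE VIEW v_customers_shared AS
  SELECT customer_id, email
  FROM customers;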

Which of the following are characteristics of how row access policies can be applied to external tables? (Choose three.)

A. An external table can be created with a row access policy, and the policy can be applied to the VALUE column.
B. A row access policy can be applied to the VALUE column of an existing external table.
C. A row access policy cannot be directly added to a virtual column of an external table.
D. External tables are supported as mapping tables in a row access policy.
E. While cloning a database, both the row access policy and the external table will be cloned.
F. A row access policy cannot be applied to a view created on top of an external table.
Suggested answer: A, B, C

Explanation:

These three statements are true according to the Snowflake documentation. A row access policy is a feature that filters rows based on user-defined conditions. A row access policy can be applied to an external table, which is a table that reads data from external files in a stage. However, there are some limitations and considerations for using row access policies with external tables.

An external table can be created with a row access policy by using the WITH ROW ACCESS POLICY clause in the CREATE EXTERNAL TABLE statement. The policy can be applied to the VALUE column, which is the column that contains the raw data from the external files in a VARIANT data type [1].

A row access policy can also be applied to the VALUE column of an existing external table by using the ALTER TABLE statement with the ADD ROW ACCESS POLICY clause [2].

A row access policy cannot be directly added to a virtual column of an external table. A virtual column is a column that is derived from the VALUE column using an expression. To apply a row access policy to a virtual column, the policy must be applied to the VALUE column and the expression must be repeated in the policy definition [3].

External tables are not supported as mapping tables in a row access policy. A mapping table is a table that is used to determine the access rights of users or roles based on some criteria. Snowflake does not support using an external table as a mapping table because it may cause performance issues or errors [4].

While cloning a database, Snowflake clones the row access policy, but not the external table. The policy in the cloned database therefore refers to a table that is not present in the clone. To avoid this issue, the external table must be manually cloned or recreated in the cloned database [4].

A row access policy can be applied to a view created on top of an external table. The policy can be applied to the view itself or to the underlying external table. However, if the policy is applied to the view, the view must be a secure view, which hides the underlying data and the view definition from unauthorized users [5].
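
A hedged sketch of the VALUE-column pattern described above; the policy logic, stage, and file format are hypothetical:

CREATE ROW ACCESS POLICY region_policy AS (v VARIANT) RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'ADMIN' OR v:region::STRING = 'EMEA';

-- Applied at creation time:
CREATE EXTERNAL TABLE ext_sales
  LOCATION = @my_stage/sales/
  FILE_FORMAT = (TYPE = PARQUET)
  WITH ROW ACCESS POLICY region_policy ON (VALUE);

-- Or added to an existing external table:
ALTER TABLE ext_sales ADD ROW ACCESS POLICY region_policy ON (VALUE);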

1. CREATE EXTERNAL TABLE | Snowflake Documentation

2. ALTER EXTERNAL TABLE | Snowflake Documentation

3. Understanding Row Access Policies | Snowflake Documentation

4. Snowflake Data Governance: Row Access Policy Overview

5. Secure Views | Snowflake Documentation

Which data models can be used when modeling tables in a Snowflake environment? (Select THREE).

A. Graph model
B. Dimensional/Kimball
C. Data lake
D. Inmon/3NF
E. Bayesian hierarchical model
F. Data vault
Suggested answer: B, D, F

Explanation:

Snowflake is a cloud data platform that supports various data models for modeling tables in a Snowflake environment. The data models can be classified into two categories: dimensional and normalized. Dimensional data models are designed to optimize query performance and ease of use for business intelligence and analytics. Normalized data models are designed to reduce data redundancy and ensure data integrity for transactional and operational systems. The following are some of the data models that can be used in Snowflake:

Dimensional/Kimball: This is a popular dimensional data model that uses a star or snowflake schema to organize data into fact and dimension tables. Fact tables store quantitative measures and foreign keys to dimension tables. Dimension tables store descriptive attributes and hierarchies. A star schema has a single denormalized dimension table for each dimension, while a snowflake schema has multiple normalized dimension tables for each dimension. Snowflake supports both star and snowflake schemas, and allows users to create views and joins to simplify queries.

Inmon/3NF: This is a common normalized data model that uses a third normal form (3NF) schema to organize data into entities and relationships. 3NF schema eliminates data duplication and ensures data consistency by applying three rules: 1) every column in a table must depend on the primary key, 2) every column in a table must depend on the whole primary key, not a part of it, and 3) every column in a table must depend only on the primary key, not on other columns. Snowflake supports 3NF schema and allows users to create referential integrity constraints and foreign key relationships to enforce data quality.

Data vault: This is a hybrid data model that combines the best practices of dimensional and normalized data models to create a scalable, flexible, and resilient data warehouse. Data vault schema consists of three types of tables: hubs, links, and satellites. Hubs store business keys and metadata for each entity. Links store associations and relationships between entities. Satellites store descriptive attributes and historical changes for each entity or relationship. Snowflake supports data vault schema and allows users to leverage its features such as time travel, zero-copy cloning, and secure data sharing to implement data vault methodology.
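
A compact sketch of the three Data Vault table types; the column choices and the use of BINARY hash keys are simplified and hypothetical:

CREATE TABLE hub_customer (
  customer_hk   BINARY(20)    NOT NULL,  -- hash of the business key
  customer_bk   STRING        NOT NULL,  -- business key from the source system
  load_ts       TIMESTAMP_NTZ NOT NULL,
  record_source STRING        NOT NULL
);

CREATE TABLE link_customer_order (
  link_hk       BINARY(20)    NOT NULL,  -- hash of the related business keys
  customer_hk   BINARY(20)    NOT NULL,
  order_hk      BINARY(20)    NOT NULL,
  load_ts       TIMESTAMP_NTZ NOT NULL,
  record_source STRING        NOT NULL
);

CREATE TABLE sat_customer_details (
  customer_hk   BINARY(20)    NOT NULL,
  load_ts       TIMESTAMP_NTZ NOT NULL,
  hash_diff     BINARY(20),              -- change-detection hash of the attributes
  name          STRING,
  address       STRING,
  record_source STRING
);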

A Snowflake Architect is setting up database replication to support a disaster recovery plan. The primary database has external tables.

How should the database be replicated?

A. Create a clone of the primary database then replicate the database.
B. Move the external tables to a database that is not replicated, then replicate the primary database.
C. Replicate the database ensuring the replicated database is in the same region as the external tables.
D. Share the primary database with an account in the same region that the database will be replicated to.
Suggested answer: B

Explanation:

Database replication is a feature that allows you to create a copy of a database in another account, region, or cloud platform for disaster recovery or business continuity purposes. However, not all database objects can be replicated. External tables are one of the exceptions, as they reference data files stored in an external stage that is not part of Snowflake. Therefore, to replicate a database that contains external tables, you need to move the external tables to a separate database that is not replicated, and then replicate the primary database that contains the other objects. This way, you can avoid replication errors and ensure consistency between the primary and secondary databases.

The other options are incorrect because they either do not address the issue of external tables, or they use an alternative method that is not supported by Snowflake. You cannot create a clone of the primary database and then replicate it, as replication only works on the original database, not on its clones. You also cannot share the primary database with another account, as sharing is a different feature that does not create a copy of the database, but rather grants access to the shared objects. Finally, you do not need to ensure that the replicated database is in the same region as the external tables, as external tables can access data files stored in any region or cloud platform, as long as the stage URL is valid and accessible.
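
A hedged sketch of the sequence, assuming the primary database is PROD_DB with its external table in schema EXT, an external stage my_stage exists, and the organization and accounts are named myorg, primary_acct, and dr_acct (all names hypothetical):

-- 1. Recreate the external table in a database that will not be replicated,
--    then drop it from the primary database.
CREATE DATABASE ext_db;
CREATE EXTERNAL TABLE ext_db.public.ext_sales
  LOCATION = @ext_db.public.my_stage/sales/
  FILE_FORMAT = (TYPE = PARQUET);
DROP TABLE prod_db.ext.ext_sales;

-- 2. Enable replication on the primary database.
ALTER DATABASE prod_db ENABLE REPLICATION TO ACCOUNTS myorg.dr_acct;

-- 3. On the dr_acct account, create and refresh the secondary database.
CREATE DATABASE prod_db AS REPLICA OF myorg.primary_acct.prod_db;
ALTER DATABASE prod_db REFRESH;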

Reference:

1. Replication and Failover/Failback

2. Introduction to External Tables

3. Working with External Tables

4. Replication: How to migrate an account from One Cloud Platform or Region to another in Snowflake
