Snowflake ARA-C01 Practice Test - Questions Answers, Page 14

An Architect needs to meet a company requirement to ingest files from the company's AWS storage accounts into the company's Snowflake Google Cloud Platform (GCP) account. How can the ingestion of these files into the company's Snowflake account be initiated? (Select TWO).

A. Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

B. Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 Glacier storage.

C. Create an AWS Lambda function to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

D. Configure AWS Simple Notification Service (SNS) to notify Snowpipe when new files have arrived in Amazon S3 storage.

E. Configure the client application to issue a COPY INTO <TABLE> command to Snowflake when new files have arrived in Amazon S3 Glacier storage.

Suggested answer: A, C

Explanation:

Snowpipe is a feature that enables continuous, near-real-time data ingestion from external sources into Snowflake tables. Snowpipe can ingest files from Amazon S3, Google Cloud Storage, or Azure Blob Storage into Snowflake tables on any cloud platform. Snowpipe can be triggered in two ways: by using the Snowpipe REST API or by using cloud notifications [2].

To ingest files from the company's AWS storage accounts into the company's Snowflake GCP account, the Architect can use either of these methods:

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage. This method requires the client application to monitor the S3 buckets for new files and send a request to the Snowpipe REST API with the list of files to ingest. The client application must also handle authentication, error handling, and retry logic [3].

Create an AWS Lambda function to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage. This method leverages the AWS Lambda service to execute a function that calls the Snowpipe REST API whenever an S3 event notification is received. The AWS Lambda function must be configured with the appropriate permissions, triggers, and code to invoke the Snowpipe REST API [4].
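
As an illustration (a minimal sketch; the stage, pipe, table, and bucket names are hypothetical), a pipe intended for REST-based ingestion is created without AUTO_INGEST, and the client application or Lambda function then calls the Snowpipe insertFiles REST endpoint for that pipe with the names of the newly arrived files:

    -- Hypothetical external stage over the company's S3 bucket.
    create stage iot_s3_stage
      url = 's3://company-bucket/landing/'
      storage_integration = s3_int;

    -- Pipe with no AUTO_INGEST clause: loads run only when the REST endpoint is called.
    create pipe mydb.raw.iot_pipe as
      copy into mydb.raw.iot_events
      from @iot_s3_stage
      file_format = (type = 'JSON');

    -- The client application or Lambda function then POSTs the new file names to
    -- https://<account>.snowflakecomputing.com/v1/data/pipes/mydb.raw.iot_pipe/insertFiles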

The other options are not valid methods for triggering Snowpipe:

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 Glacier storage. This option is not feasible because Snowpipe does not support ingesting files from Amazon S3 Glacier storage, which is a long-term archival storage service. Snowpipe only supports ingesting files from Amazon S3 standard storage classes [5].

Configure AWS Simple Notification Service (SNS) to notify Snowpipe when new files have arrived in Amazon S3 storage. This option is not applicable because Snowpipe does not support cloud notifications from AWS SNS. Snowpipe only supports cloud notifications from AWS SQS, Google Cloud Pub/Sub, or Azure Event Grid [6].

Configure the client application to issue a COPY INTO <TABLE> command to Snowflake when new files have arrived in Amazon S3 Glacier storage. This option is not relevant because it does not use Snowpipe, but rather the standard COPY command, which is a batch loading method. Moreover, the COPY command does not support ingesting files from Amazon S3 Glacier storage [7].

Reference:

1: SnowPro Advanced: Architect | Study Guide

2: Snowflake Documentation | Snowpipe Overview

3: Snowflake Documentation | Using the Snowpipe REST API

4: Snowflake Documentation | Loading Data Using Snowpipe and AWS Lambda

5: Snowflake Documentation | Supported File Formats and Compression for Staged Data Files

6: Snowflake Documentation | Using Cloud Notifications to Trigger Snowpipe

7: Snowflake Documentation | Loading Data Using COPY into a Table

When activating Tri-Secret Secure in a hierarchical encryption model in a Snowflake account, at what level is the customer-managed key used?

A. At the root level (HSM)

B. At the account level (AMK)

C. At the table level (TMK)

D. At the micro-partition level

Suggested answer: B

Explanation:

Tri-Secret Secure is a feature that allows customers to use their own key, called the customer-managed key (CMK), in addition to the Snowflake-managed key, to create a composite master key that encrypts the data in Snowflake. The composite master key is also known as the account master key (AMK), as it is unique for each account and encrypts the table master keys (TMKs) that encrypt the file keys that encrypt the data files. The customer-managed key is used at the account level, not at the root level, the table level, or the micro-partition level. The root level is protected by a hardware security module (HSM), the table level is protected by the TMKs, and the micro-partition level is protected by the file keys.

Reference:

Understanding Encryption Key Management in Snowflake

Tri-Secret Secure FAQ for Snowflake on AWS

A new user user_01 is created within Snowflake. The following two commands are executed:

Command 1 -> show grants to user user_01;

Command 2 -> show grants on user user_01;

What inferences can be made about these commands?

A. Command 1 defines which user owns user_01. Command 2 defines all the grants which have been given to user_01.

B. Command 1 defines all the grants which are given to user_01. Command 2 defines which user owns user_01.

C. Command 1 defines which role owns user_01. Command 2 defines all the grants which have been given to user_01.

D. Command 1 defines all the grants which are given to user_01. Command 2 defines which role owns user_01.

Suggested answer: D

Explanation:

The SHOW GRANTS command in Snowflake can be used to list all the access control privileges that have been explicitly granted to roles, users, and shares. The syntax and the output of the command vary depending on the object type and the grantee type specified in the command. In this question, the two commands have the following meanings:

Command 1: show grants to user user_01; This command lists all the roles granted to the user user_01. The output includes the role name, the grantee name, and the name of the role that performed the grant. If user_01 is the current user, this command is equivalent to show grants to user current_user.

Command 2: show grants on user user_01; This command lists all the privileges that have been granted on the user object user_01. The output includes the privilege name, the grantee name, and the name of the role that performed the grant. Because the OWNERSHIP privilege appears in this list, the command shows which role owns the user object user_01, and the owner role has the privilege to modify or drop the user object.

Therefore, the correct inference is that command 1 defines all the grants which are given to user_01, and command 2 defines which role owns user_01.
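
A minimal sketch of both commands (the user name comes from the question; the output columns noted in the comments are the typical ones):

    -- Grants given TO the user: the roles that user_01 holds.
    show grants to user user_01;
    -- Output columns include: created_on, role, granted_to, grantee_name, granted_by

    -- Grants ON the user object itself; the OWNERSHIP row identifies the owning role.
    show grants on user user_01;
    -- Output columns include: created_on, privilege, granted_on, name, granted_to, grantee_name, granted_by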

SHOW GRANTS

Understanding Access Control in Snowflake

Which technique will efficiently ingest and consume semi-structured data for Snowflake data lake workloads?

A. IDEF1X

B. Schema-on-write

C. Schema-on-read

D. Information schema

Suggested answer: C

Explanation:

Option C is the correct answer because schema-on-read is a technique that allows Snowflake to ingest and consume semi-structured data without requiring a predefined schema. Snowflake supports various semi-structured data formats such as JSON, Avro, ORC, Parquet, and XML, and provides native data types (ARRAY, OBJECT, and VARIANT) for storing them. Snowflake also provides native support for querying semi-structured data using SQL and dot notation. This approach lets Snowflake query semi-structured data at speeds comparable to relational queries while preserving the flexibility of applying the schema only when the data is read.
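
A minimal sketch of the schema-on-read pattern (stage, table, and attribute names are hypothetical): the raw documents land in a VARIANT column, and structure is applied only at query time with dot notation and FLATTEN:

    -- No schema is declared up front; each document is stored as-is.
    create or replace table raw_events (v variant);

    copy into raw_events
      from @data_lake_stage/events/
      file_format = (type = 'JSON');

    -- The schema is applied when the data is read.
    select
        v:device_id::string           as device_id,
        v:reading.temperature::float  as temperature,
        t.value::string               as tag
    from raw_events,
         lateral flatten(input => v:tags) t;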

Option A is incorrect because IDEF1X is a data modeling technique that defines the structure and constraints of relational data using diagrams and notations. IDEF1X is not suitable for ingesting and consuming semi-structured data, which does not have a fixed schema or structure.

Option B is incorrect because schema-on-write is a technique that requires defining a schema before loading and processing data. Schema-on-write is not efficient for ingesting and consuming semi-structured data, which may have varying or complex structures that are difficult to fit into a predefined schema. Schema-on-write also introduces additional overhead and complexity for data transformation and validation.

Option D is incorrect because information schema is a set of metadata views that provide information about the objects and privileges in a Snowflake database. Information schema is not a technique for ingesting and consuming semi-structured data, but rather a way of accessing metadata about the data.

Semi-structured Data

Snowflake for Data Lake

A Developer is having a performance issue with a Snowflake query. The query receives up to 10 different values for one parameter and then performs an aggregation over the majority of a fact table. It then joins against a smaller dimension table. This parameter value is selected by the different query users when they execute it during business hours. Both the fact and dimension tables are loaded with new data in an overnight import process.

On a Small or Medium-sized virtual warehouse, the query performs slowly. Performance is acceptable on a size Large or bigger warehouse. However, there is no budget to increase costs. The Developer needs a recommendation that does not increase compute costs to run this query.

What should the Architect recommend?

A. Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The query results will then be cached and ready to respond quickly when the users re-issue the query.

B. Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The task will be scheduled to align with the users' working hours in order to allow the warehouse cache to be used.

C. Enable the search optimization service on the table. When the users execute the query, the search optimization service will automatically adjust the query execution plan based on the frequently-used parameters.

D. Create a dedicated size Large warehouse for this particular set of queries. Create a new role that has USAGE permission on this warehouse and has the appropriate read permissions over the fact and dimension tables. Have users switch to this role and use this warehouse when they want to access this data.

Suggested answer: C

Explanation:

Enabling the search optimization service on the table can improve the performance of queries that have selective filtering criteria, which seems to be the case here. This service optimizes the execution of queries by creating a persistent data structure called a search access path, which allows some micro-partitions to be skipped during the scanning process. This can significantly speed up query performance without increasing compute costs.
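
A minimal sketch (the table and column names are hypothetical) of enabling the service; after the search access path is built in the background, selective filters can skip micro-partitions without resizing the warehouse:

    -- Enable search optimization for the whole table ...
    alter table sales_fact add search optimization;

    -- ... or only for the equality filters that the users' parameter maps to.
    alter table sales_fact add search optimization on equality(region_code);

    -- The search_optimization_progress column reaches 100 when the access path is ready.
    show tables like 'SALES_FACT';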

* Snowflake Documentation on Search Optimization Service

A retailer's enterprise data organization is exploring the use of Data Vault 2.0 to model its data lake solution. A Snowflake Architect has been asked to provide recommendations for using Data Vault 2.0 on Snowflake.

What should the Architect tell the data organization? (Select TWO).

A. Change data capture can be performed using the Data Vault 2.0 HASH_DIFF concept.

B. Change data capture can be performed using the Data Vault 2.0 HASH_DELTA concept.

C. Using the multi-table insert feature in Snowflake, multiple Point-in-Time (PIT) tables can be loaded in parallel from a single join query from the data vault.

D. Using the multi-table insert feature, multiple Point-in-Time (PIT) tables can be loaded sequentially from a single join query from the data vault.

E. There are performance challenges when using Snowflake to load multiple Point-in-Time (PIT) tables in parallel from a single join query from the data vault.

Suggested answer: A, C

Explanation:

Data Vault 2.0 on Snowflake supports the HASH_DIFF concept for change data capture, which is a method to detect changes in the data by comparing the hash values of the records. Additionally, Snowflake's multi-table insert feature allows for the loading of multiple PIT tables in parallel from a single join query, which can significantly streamline the data loading process and improve performance.
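
A minimal sketch (all staging, hub, satellite, and PIT table names are hypothetical) of both ideas: a HASH_DIFF computed over the descriptive attributes for change detection, and a single multi-table INSERT ALL that populates two PIT tables in parallel from one join over the vault:

    -- HASH_DIFF-style change detection: hash the descriptive attributes of an incoming record
    -- and compare it with the HASH_DIFF stored on the latest satellite row.
    select md5(coalesce(name, '') || '|' || coalesce(address, '')) as hash_diff
    from stg_customer;

    -- One join over the vault feeds several PIT tables in a single pass.
    insert all
      into pit_customer      (hub_customer_hk, snapshot_dt, profile_ldts, address_ldts)
        values (hub_hk, snapshot_dt, profile_ldts, address_ldts)
      into pit_customer_slim (hub_customer_hk, snapshot_dt, profile_ldts)
        values (hub_hk, snapshot_dt, profile_ldts)
    select h.hub_customer_hk as hub_hk,
           current_date()    as snapshot_dt,
           sp.load_dts       as profile_ldts,
           sa.load_dts       as address_ldts
    from hub_customer h
    left join sat_customer_profile sp on sp.hub_customer_hk = h.hub_customer_hk
    left join sat_customer_address sa on sa.hub_customer_hk = h.hub_customer_hk;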

* Snowflake's documentation on multi-table inserts

* Blog post on optimizing Data Vault architecture on Snowflake

A company is designing a process for importing a large amount of IoT JSON data from cloud storage into Snowflake. New sets of IoT data get generated and uploaded approximately every 5 minutes.

Once the IoT data is in Snowflake, the company needs up-to-date information from an external vendor to join to the data. This data is then presented to users through a dashboard that shows different levels of aggregation. The external vendor is a Snowflake customer.

What solution will MINIMIZE complexity and MAXIMIZE performance?

A.
1. Create an external table over the JSON data in cloud storage.
2. Create a task that runs every 5 minutes to run a transformation procedure on new data, based on a saved timestamp.
3. Ask the vendor to expose an API so an external function can be used to generate a call to join the data back to the IoT data in the transformation procedure.
4. Give the transformed table access to the dashboard tool.
5. Perform the aggregations on the dashboard tool.

B.
1. Create an external table over the JSON data in cloud storage.
2. Create a task that runs every 5 minutes to run a transformation procedure on new data based on a saved timestamp.
3. Ask the vendor to create a data share with the required data that can be imported into the company's Snowflake account.
4. Join the vendor's data back to the IoT data using a transformation procedure.
5. Create views over the larger dataset to perform the aggregations required by the dashboard.
6. Give the views access to the dashboard tool.

C.
1. Create a Snowpipe to bring the JSON data into Snowflake.
2. Use streams and tasks to trigger a transformation procedure when new JSON data arrives.
3. Ask the vendor to expose an API so an external function call can be made to join the vendor's data back to the IoT data in a transformation procedure.
4. Create materialized views over the larger dataset to perform the aggregations required by the dashboard.
5. Give the materialized views access to the dashboard tool.

D.
1. Create a Snowpipe to bring the JSON data into Snowflake.
2. Use streams and tasks to trigger a transformation procedure when new JSON data arrives.
3. Ask the vendor to create a data share with the required data that is then imported into the Snowflake account.
4. Join the vendor's data back to the IoT data in a transformation procedure.
5. Create materialized views over the larger dataset to perform the aggregations required by the dashboard.
6. Give the materialized views access to the dashboard tool.

Suggested answer: D

Explanation:

Using Snowpipe for continuous, automated data ingestion minimizes the need for manual intervention and ensures that data is available in Snowflake promptly after it is generated. Leveraging Snowflake's data sharing capabilities allows for efficient and secure access to the vendor's data without the need for complex API integrations. Materialized views provide pre-aggregated data for fast access, which is ideal for dashboards that require high performance.
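
A minimal sketch of the pattern in option D (all object, stage, and warehouse names are hypothetical; the vendor's share is assumed to be mounted as vendor_db):

    -- 1. Continuous ingestion of the IoT JSON files.
    create pipe iot_pipe auto_ingest = true as
      copy into raw_iot (v)
      from @iot_stage
      file_format = (type = 'JSON');

    -- 2. Capture newly ingested rows.
    create stream raw_iot_stream on table raw_iot;

    -- 3./4. Transform new rows and join them to the vendor's shared data.
    create task transform_iot
      warehouse = transform_wh
      schedule = '5 minute'
      when system$stream_has_data('RAW_IOT_STREAM')
    as
      insert into curated_iot
      select r.v:device_id::string, r.v:reading::float, vnd.attribute
      from raw_iot_stream r
      join vendor_db.public.vendor_data vnd
        on vnd.device_id = r.v:device_id::string;

    -- 5. Materialized views over curated_iot then serve the dashboard aggregations.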

* Snowflake Documentation on Snowpipe

* Snowflake Documentation on Secure Data Sharing

* Best Practices for Data Ingestion with Snowflake

Which command will create a schema without Fail-safe and will restrict object owners from passing on access to other users?

A. create schema EDW.ACCOUNTING WITH MANAGED ACCESS;

B. create schema EDW.ACCOUNTING WITH MANAGED ACCESS DATA_RETENTION_TIME_IN_DAYS = 7;

C. create TRANSIENT schema EDW.ACCOUNTING WITH MANAGED ACCESS DATA_RETENTION_TIME_IN_DAYS = 1;

D. create TRANSIENT schema EDW.ACCOUNTING WITH MANAGED ACCESS DATA_RETENTION_TIME_IN_DAYS = 7;

Suggested answer: D

Explanation:

A transient schema in Snowflake is designed without a Fail-safe period, meaning it does not incur additional storage costs once it leaves Time Travel, and it is not protected by Fail-safe in the event of a data loss. The WITH MANAGED ACCESS option ensures that all privilege grants, including future grants on objects within the schema, are managed by the schema owner, thus restricting object owners from passing on access to other users.
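
A quick verification sketch (the schema name comes from option D; the retention parameter is omitted here for brevity): the TRANSIENT and MANAGED ACCESS properties are visible in the options column of SHOW SCHEMAS:

    create transient schema EDW.ACCOUNTING with managed access;

    show schemas like 'ACCOUNTING' in database EDW;
    -- The "options" column reports TRANSIENT, MANAGED ACCESS; transient objects carry no Fail-safe period.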

* Snowflake Documentation on creating schemas

* Snowflake Documentation on configuring access control

* Snowflake Documentation on understanding and viewing Fail-safe

An Architect needs to design a data unloading strategy for Snowflake that will be used with the COPY INTO <location> command.

Which configuration is valid?

A.
. Location of files: Snowflake internal location
. File formats: CSV, XML
. File encoding: UTF-8
. Encryption: 128-bit

B.
. Location of files: Amazon S3
. File formats: CSV, JSON
. File encoding: Latin-1 (ISO-8859)
. Encryption: 128-bit

C.
. Location of files: Google Cloud Storage
. File formats: Parquet
. File encoding: UTF-8
. Compression: gzip

D.
. Location of files: Azure ADLS
. File formats: JSON, XML, Avro, Parquet, ORC
. Compression: bzip2
. Encryption: User-supplied key

Suggested answer: C

Explanation:

Snowflake unloads data only as delimited text (CSV/TSV), JSON, or Parquet files, and unloaded files are always encoded in UTF-8. Option C (Google Cloud Storage, Parquet, UTF-8, gzip compression) is therefore a valid configuration. Options A and D include XML, Avro, or ORC, which are not supported unload formats, and option B specifies Latin-1 encoding, which is not available for unloading.
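
A minimal sketch of the configuration in option C (the storage integration, stage, and table names are hypothetical):

    -- External stage on Google Cloud Storage.
    create stage gcs_unload_stage
      url = 'gcs://company-bucket/unload/'
      storage_integration = gcs_int;

    -- Unload query results to the stage as Parquet files.
    copy into @gcs_unload_stage/daily_extract/
      from (select * from edw.accounting.balances)
      file_format = (type = 'PARQUET')
      header = true;   -- keep the original column names in the Parquet output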

Role A has the following permissions:

. USAGE on db1

. USAGE and CREATE VIEW on schema1 in db1

. SELECT on table1 in schema1

Role B has the following permissions:

. USAGE on db2

. USAGE and CREATE VIEW on schema2 in db2

. SELECT on table2 in schema2

A user has Role A set as the primary role and Role B as a secondary role.

What command will fail for this user?

A. use database db1; use schema schema1; create view v1 as select * from db2.schema2.table2;

B. use database db2; use schema schema2; create view v2 as select * from db1.schema1.table1;

C. use database db2; use schema schema2; select * from db1.schema1.table1 union select * from table2;

D. use database db1; use schema schema1; select * from db2.schema2.table2;

Suggested answer: B

Explanation:

When secondary roles are active, the privileges of the primary and secondary roles are combined for queries and DML, but authorization to execute CREATE <object> statements comes from the primary role only. Command B attempts to create a view in db2.schema2, and the CREATE VIEW privilege on schema2 is held only by Role B, which is a secondary role, so the statement fails. Command A succeeds because the primary Role A has CREATE VIEW on schema1 and the secondary Role B supplies SELECT on table2. Commands C and D only read data, which the combined roles allow.
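
A minimal sketch reproducing the failure (role names are illustrative; the grants from the question are assumed to be in place):

    use role role_a;            -- primary role
    use secondary roles all;    -- Role B's privileges now apply to queries and DML, but not to CREATE

    use database db2;
    use schema schema2;

    -- Fails: CREATE VIEW on schema2 is held only by the secondary role (Role B),
    -- and CREATE <object> statements are authorized by the primary role alone.
    create view v2 as select * from db1.schema1.table1;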