
Snowflake ARA-C01 Practice Test - Questions Answers, Page 3


What are characteristics of Dynamic Data Masking? (Select TWO).

A. A masking policy that is currently set on a table can be dropped.

B. A single masking policy can be applied to columns in different tables.

C. A masking policy can be applied to the value column of an external table.

D. The role that creates the masking policy will always see unmasked data in query results.

E. A masking policy can be applied to a column with the GEOGRAPHY data type.
Suggested answer: A, B

Explanation:

Dynamic Data Masking is a feature that masks sensitive data in query results based on the role of the user who executes the query. A masking policy is a schema-level object that specifies the masking logic and can be applied to one or more columns in one or more tables.

A masking policy that is currently set on a table can be dropped: the policy is first detached from the column(s) with ALTER TABLE ... MODIFY COLUMN ... UNSET MASKING POLICY and then removed with DROP MASKING POLICY. A single masking policy can be applied to columns in different tables by running ALTER TABLE ... MODIFY COLUMN ... SET MASKING POLICY on each column.

The other options are either incorrect or not supported by Snowflake. A masking policy cannot be applied to the value column of an external table, as external tables do not support column-level security. The role that creates the masking policy will not always see unmasked data in query results, because the policy conditions can mask data for the owning role as well. A masking policy cannot be applied to a column with the GEOGRAPHY data type, as Snowflake only supports masking policies for scalar data types.

Reference: Snowflake Documentation: Dynamic Data Masking, Snowflake Documentation: ALTER TABLE
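A minimal SQL sketch of this lifecycle, using hypothetical table, column, and role names (EMPLOYEES, CONTRACTORS, EMAIL, HR_ADMIN), shows one policy being set on columns in two different tables and then detached and dropped:

-- Hypothetical tables for illustration only
CREATE TABLE employees   (id NUMBER, email STRING);
CREATE TABLE contractors (id NUMBER, email STRING);

-- Masking logic: only HR_ADMIN sees the real value
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() = 'HR_ADMIN' THEN val ELSE '*** MASKED ***' END;

-- One policy attached to columns in different tables (answer B)
ALTER TABLE employees   MODIFY COLUMN email SET MASKING POLICY email_mask;
ALTER TABLE contractors MODIFY COLUMN email SET MASKING POLICY email_mask;

-- Detach the policy from every column, then drop it (answer A)
ALTER TABLE employees   MODIFY COLUMN email UNSET MASKING POLICY;
ALTER TABLE contractors MODIFY COLUMN email UNSET MASKING POLICY;
DROP MASKING POLICY email_mask;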

An Architect needs to allow a user to create a database from an inbound share.

To meet this requirement, the user's role must have which privileges? (Choose two.)

A. IMPORT SHARE;

B. IMPORT PRIVILEGES;

C. CREATE DATABASE;

D. CREATE SHARE;

E. IMPORT DATABASE;
Suggested answer: C, E

Explanation:

According to the Snowflake documentation, to create a database from an inbound share, the user's role must have the following privileges:

The CREATE DATABASE privilege on the current account. This privilege allows the user to create a new database in the account.

The IMPORT DATABASE privilege on the share. This privilege allows the user to import a database from the share into the account. The other privileges listed are not relevant for this requirement. The IMPORT SHARE privilege is used to import a share into the account, not a database. The IMPORT PRIVILEGES privilege is used to import the privileges granted on the shared objects, not the objects themselves. The CREATE SHARE privilege is used to create a share to provide data to other accounts, not to consume data from other accounts.

CREATE DATABASE | Snowflake Documentation

Importing Data from a Share | Snowflake Documentation

Importing a Share | Snowflake Documentation

CREATE SHARE | Snowflake Documentation
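A minimal SQL sketch of the consumer-side setup, using hypothetical role, account, and share names (DATA_CONSUMER, PROVIDER_ACCT, MARKET_SHARE); the account-level grants correspond to the privileges discussed above:

-- Allow the role to create databases in the account
GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_consumer;
-- (the import privilege discussed above is likewise granted at the account level)

-- Create a local, read-only database from the inbound share
CREATE DATABASE market_db FROM SHARE provider_acct.market_share;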

Files arrive in an external stage every 10 seconds from a proprietary system. The files range in size from 500 KB to 3 MB. The data must be accessible by dashboards as soon as it arrives.

How can a Snowflake Architect meet this requirement with the LEAST amount of coding? (Choose two.)

A. Use Snowpipe with auto-ingest.

B. Use a COPY command with a task.

C. Use a materialized view on an external table.

D. Use the COPY INTO command.

E. Use a combination of a task and a stream.
Suggested answer: A, C

Explanation:

These two options are the best ways to meet the requirement of loading data from an external stage and making it accessible by dashboards with the least amount of coding.

Snowpipe with auto-ingest is a feature that enables continuous and automated data loading from an external stage into a Snowflake table. Snowpipe uses event notifications from the cloud storage service to detect new or modified files in the stage and triggers a COPY INTO command to load the data into the table. Snowpipe is efficient, scalable, and serverless, meaning it does not require any infrastructure or maintenance from the user. Snowpipe also supports loading data from files of any size, as long as they are in a supported format.

A materialized view on an external table is a feature that enables creating a pre-computed result set from an external table and storing it in Snowflake. A materialized view can improve the performance and efficiency of querying data from an external table, especially for complex queries or dashboards. A materialized view can also support aggregations and filters on the external table data. A materialized view on an external table is automatically refreshed when the underlying data in the external stage changes, as long as AUTO_REFRESH is enabled on the external table.

Snowpipe Overview | Snowflake Documentation

Materialized Views on External Tables | Snowflake Documentation
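A minimal SQL sketch of both approaches, with hypothetical stage, table, and column names (LANDING_STAGE, RAW_EVENTS, DEVICE_ID); cloud event notifications on the stage location are assumed to already be configured:

-- Option A: Snowpipe with auto-ingest into a raw table
CREATE TABLE raw_events (v VARIANT);

CREATE PIPE dash_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_events FROM @landing_stage FILE_FORMAT = (TYPE = 'JSON');

-- Option C: query the files in place through an external table and
-- pre-aggregate for the dashboards with a materialized view
CREATE EXTERNAL TABLE ext_events
  LOCATION = @landing_stage
  AUTO_REFRESH = TRUE
  FILE_FORMAT = (TYPE = 'JSON');

CREATE MATERIALIZED VIEW dash_mv AS
  SELECT value:device_id::STRING AS device_id, COUNT(*) AS event_count
  FROM ext_events
  GROUP BY value:device_id::STRING;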

A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider.

What is the MOST cost-effective way to bring this data into a Snowflake table?

A. An external table

B. A pipe

C. A stream

D. A copy command at regular intervals
Suggested answer: B

Explanation:

A pipe is a Snowflake object that continuously loads data from files in a stage (internal or external) into a table. A pipe can be configured to use auto-ingest, which means that Snowflake automatically detects new or modified files in the stage and loads them into the table without any manual intervention.

A pipe is the most cost-effective way to bring large numbers of small JSON files into a Snowflake table because it uses Snowflake-managed, serverless compute that is billed only for the actual load work, and it batches newly arrived files into load operations as they are detected. This avoids keeping a user-managed virtual warehouse running (or repeatedly resuming it) just to pick up a constant trickle of small files.

An external table is a Snowflake object that references data files stored in an external location, such as Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage. An external table does not store the data in Snowflake, but only provides a view of the data for querying. It is not a way to bring data into a Snowflake table, and repeatedly querying many small external files requires additional network bandwidth and compute resources.

A stream is a Snowflake object that records the history of changes (inserts, updates, and deletes) made to a table. A stream can be used to consume the changes from a table and apply them to another table or a task. A stream is not a way to bring data into a Snowflake table, but a way to process the data after it is loaded into a table.

A copy command is a Snowflake command that loads data from files in a stage into a table. A copy command can be executed manually or scheduled using a task. Running COPY commands at regular intervals is not the most cost-effective option for a continuous feed of small files, because each scheduled run requires a running, user-managed virtual warehouse that is billed whether or not much data has arrived.
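A minimal SQL sketch of the pipe-based approach with hypothetical names (IOT_STAGE, IOT_RAW, IOT_PIPE); event notifications from the cloud provider to Snowflake are assumed to be configured:

-- Target table and auto-ingest pipe for the small JSON files
CREATE TABLE iot_raw (v VARIANT);

CREATE PIPE iot_pipe AUTO_INGEST = TRUE AS
  COPY INTO iot_raw FROM @iot_stage FILE_FORMAT = (TYPE = 'JSON');

-- Check that the pipe is running and whether files are queued
SELECT SYSTEM$PIPE_STATUS('iot_pipe');

-- Per-file load history for the target table over the last hour
SELECT *
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
  TABLE_NAME => 'IOT_RAW',
  START_TIME => DATEADD('hour', -1, CURRENT_TIMESTAMP())));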

A company has a Snowflake account named ACCOUNTA in AWS us-east-1 region. The company stores its marketing data in a Snowflake database named MARKET_DB. One of the company's business partners has an account named PARTNERB in Azure East US 2 region. For marketing purposes the company has agreed to share the database MARKET_DB with the partner account.

Which of the following steps MUST be performed for the account PARTNERB to consume data from the MARKET_DB database?

A. Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA create a share of database MARKET_DB, create a new database out of this share locally in AWS us-east-1 region, and replicate this new database to AZABC123 account. Then set up data sharing to the PARTNERB account.

B. From account ACCOUNTA create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then make this database the provider and share it with the PARTNERB account.

C. Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA replicate the database MARKET_DB to AZABC123 and from this account set up the data sharing to the PARTNERB account.

D. Create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then replicate this database to the partner's account PARTNERB.
Suggested answer: C

Explanation:

Snowflake supports data sharing across regions and cloud platforms using database replication and share replication. Database replication enables the replication of a database from a source account to one or more target accounts in the same organization. Share replication enables the replication of shares from a source account to one or more target accounts in the same organization.

To share data from the MARKET_DB database in the ACCOUNTA account in AWS us-east-1 region with the PARTNERB account in Azure East US 2 region, the following steps must be performed:

Create a new account (called AZABC123) in Azure East US 2 region. This account acts as a bridge between the source and the target accounts and must belong to the same organization as ACCOUNTA.

From the ACCOUNTA account, replicate the MARKET_DB database to the AZABC123 account. This creates a secondary database in the AZABC123 account that is a replica of the primary database in the ACCOUNTA account.

From the AZABC123 account, set up data sharing to the PARTNERB account. This creates a share of the replicated database in the AZABC123 account and grants access to the PARTNERB account. The PARTNERB account can then create a database from the share and query the data.

Therefore, option C is the correct answer.
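A minimal SQL sketch of these steps, using a hypothetical organization name (MYORG) and assuming the refreshed database in AZABC123 can be granted to a share as described above:

-- In ACCOUNTA (AWS us-east-1): enable replication of MARKET_DB to the new Azure account
ALTER DATABASE market_db ENABLE REPLICATION TO ACCOUNTS myorg.azabc123;

-- In AZABC123 (Azure East US 2): create the secondary database and refresh it
CREATE DATABASE market_db AS REPLICA OF myorg.accounta.market_db;
ALTER DATABASE market_db REFRESH;

-- In AZABC123: share the replicated data with the partner account
CREATE SHARE market_share;
GRANT USAGE ON DATABASE market_db TO SHARE market_share;
GRANT USAGE ON SCHEMA market_db.public TO SHARE market_share;
GRANT SELECT ON ALL TABLES IN SCHEMA market_db.public TO SHARE market_share;
ALTER SHARE market_share ADD ACCOUNTS = partnerb;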

You are a Snowflake Architect in an organization. The business team has come to you to deploy a use case that requires loading some data which they can visualize through Tableau. Every day new data comes in, and the old data is no longer required.

What type of table will you use in this case to optimize cost?

A. TRANSIENT

B. TEMPORARY

C. PERMANENT
Suggested answer: A

Explanation:

A transient table is a type of table in Snowflake that does not have a Fail-safe period and can have a Time Travel retention period of either 0 or 1 day. Transient tables are suitable for temporary or intermediate data that can be easily reproduced or replicated.

A temporary table is a type of table in Snowflake that exists only for the duration of the session that created it and is automatically dropped when the session ends. Temporary tables are not visible to other users or sessions, which makes them unsuitable for data that a BI tool such as Tableau must query from its own sessions.

A permanent table is a type of table in Snowflake that has a Fail-safe period and a Time Travel retention period of up to 90 days. Permanent tables are suitable for persistent and durable data that needs to be protected from accidental or malicious deletion.

In this case, the use case requires loading some data that can be visualized through Tableau. The data is updated every day and the old data is no longer required. Therefore, the best type of table to use to optimize cost is a transient table, because it does not incur any Fail-safe costs and it can have a Time Travel retention period of 0 or 1 day. This way, the data can be loaded and queried by Tableau, and then deleted or overwritten without incurring unnecessary storage costs.
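A minimal SQL sketch with hypothetical table, column, and stage names (DAILY_MARKETING_FEED, DAILY_STAGE), showing a transient table with no Time Travel retention that is reloaded each day:

-- Transient: no Fail-safe, and retention set to 0 days to minimize storage cost
CREATE TRANSIENT TABLE daily_marketing_feed (
  event_date DATE,
  campaign   STRING,
  clicks     NUMBER
) DATA_RETENTION_TIME_IN_DAYS = 0;

-- Daily refresh: discard yesterday's data and load the new extract
TRUNCATE TABLE daily_marketing_feed;
COPY INTO daily_marketing_feed FROM @daily_stage
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);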

Which of the following objects can be cloned in Snowflake?

A. Permanent table

B. Transient table

C. Temporary table

D. External tables

E. Internal stages
Suggested answer: A, B, D

Explanation:

Snowflake supports cloning of various objects, such as databases, schemas, tables, stages, file formats, sequences, streams, and tasks. Cloning creates a copy of an existing object without duplicating the underlying data, which is why it is also known as zero-copy cloning.

Among the objects listed in the question, the following ones can be cloned in Snowflake:

Permanent table: A permanent table is a type of table that has a Fail-safe period and a Time Travel retention period of up to 90 days. A permanent table can be cloned using the CREATE TABLE ... CLONE command. Therefore, option A is correct.

Transient table: A transient table is a type of table that does not have a Fail-safe period and can have a Time Travel retention period of either 0 or 1 day. A transient table can also be cloned using the CREATE TABLE ... CLONE command. Therefore, option B is correct.

External table: An external table is a type of table that references data files stored in an external location, such as Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage. An external table can be cloned using the CREATE EXTERNAL TABLE ... CLONE command. Therefore, option D is correct.

The following objects listed in the question cannot be cloned in Snowflake:

Temporary table: A temporary table is a type of table that is automatically dropped when the session ends or the current user logs out. Temporary tables do not support cloning. Therefore, option C is incorrect.

Internal stage: An internal stage is a type of stage that is managed by Snowflake and stores files in Snowflake's internal cloud storage. Internal stages do not support cloning. Therefore, option E is incorrect.
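A minimal SQL sketch of zero-copy cloning, assuming hypothetical existing tables named ORDERS (permanent) and STG_ORDERS (transient):

-- Clone a permanent table; the clone shares micro-partitions with the source until data diverges
CREATE TABLE orders_dev CLONE orders;

-- Clone a transient table
CREATE TRANSIENT TABLE stg_orders_dev CLONE stg_orders;

-- A clone can also be taken as of a point in time using Time Travel
CREATE TABLE orders_1h_ago CLONE orders AT (OFFSET => -3600);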

When loading data from a stage using COPY INTO, which options can you specify for the ON_ERROR clause?

A. CONTINUE

B. SKIP_FILE

C. ABORT_STATEMENT

D. FAIL
Suggested answer: A, B, C

Explanation:

The ON_ERROR clause is an optional parameter for the COPY INTO command that specifies the behavior of the command when it encounters errors in the files. The ON_ERROR clause can have one of the following values:

CONTINUE: This value instructs the command to continue loading the file and return an error message for a maximum of one error encountered per data file. The difference between the ROWS_PARSED and ROWS_LOADED column values represents the number of rows that include detected errors. To view all errors in the data files, use the VALIDATION_MODE parameter or query the VALIDATE function.

SKIP_FILE: This value instructs the command to skip the file when it encounters a data error on any of the records in the file. The command moves on to the next file in the stage and continues loading. The skipped file is not loaded and no error message is returned for the file.

ABORT_STATEMENT: This value instructs the command to stop loading data when the first error is encountered. The command returns an error message for the file and aborts the load operation. This is the default value for the ON_ERROR clause for bulk loading with COPY INTO (Snowpipe defaults to SKIP_FILE).

Therefore, options A, B, and C are correct.
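A minimal SQL sketch with hypothetical table and stage names (SALES, SALES_STAGE):

-- Skip any file that contains a bad record
COPY INTO sales FROM @sales_stage
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
  ON_ERROR = SKIP_FILE;

-- Validate the staged files and report errors without loading anything
COPY INTO sales FROM @sales_stage
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
  VALIDATION_MODE = RETURN_ERRORS;

-- Inspect the errors from the most recent load into the table
SELECT * FROM TABLE(VALIDATE(sales, JOB_ID => '_last'));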

Which of the below commands will use warehouse credits?

A. SHOW TABLES LIKE 'SNOWFL%';

B. SELECT MAX(FLAKE_ID) FROM SNOWFLAKE;

C. SELECT COUNT(*) FROM SNOWFLAKE;

D. SELECT COUNT(FLAKE_ID) FROM SNOWFLAKE GROUP BY FLAKE_ID;
Suggested answer: B, C, D

Explanation:

Warehouse credits are used to pay for the processing time used by each virtual warehouse in Snowflake. A virtual warehouse is a cluster of compute resources that enables executing queries, loading data, and performing other DML operations. Warehouse credits are charged based on the number of virtual warehouses you use, how long they run, and their size.

Among the commands listed in the question, the following ones will use warehouse credits:

SELECT MAX(FLAKE_ID) FROM SNOWFLAKE: This command will use warehouse credits because it is a query that requires a virtual warehouse to execute. The query will scan the SNOWFLAKE table and return the maximum value of the FLAKE_ID column. Therefore, option B is correct.

SELECT COUNT(*) FROM SNOWFLAKE: This command will also use warehouse credits because it is a query that requires a virtual warehouse to execute. The query will scan the SNOWFLAKE table and return the number of rows in the table. Therefore, option C is correct.

SELECT COUNT(FLAKE_ID) FROM SNOWFLAKE GROUP BY FLAKE_ID: This command will also use warehouse credits because it is a query that requires a virtual warehouse to execute. The query will scan the SNOWFLAKE table and return the number of rows for each distinct value of the FLAKE_ID column. Therefore, option D is correct.

The command that will not use warehouse credits is:

SHOW TABLES LIKE 'SNOWFL%': This command will not use warehouse credits because it is a metadata operation that does not require a virtual warehouse to execute. The command will return the names of the tables that match the pattern 'SNOWFL%' in the current database and schema. Therefore, option A is incorrect.
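A minimal SQL sketch for verifying warehouse usage after the fact, assuming the current role has access to the SNOWFLAKE.ACCOUNT_USAGE schema (object and column names as documented by Snowflake):

-- Which statements ran recently, and on which warehouse (if any)
SELECT query_text, warehouse_name, total_elapsed_time
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('hour', -1, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;

-- Credits consumed per warehouse over the last day
SELECT warehouse_name, SUM(credits_used) AS credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
GROUP BY warehouse_name;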

What does a Snowflake Architect need to consider when implementing a Snowflake Connector for Kafka?

A. Every Kafka message is in JSON or Avro format.

B. The default retention time for Kafka topics is 14 days.

C. The Kafka connector supports key pair authentication, OAuth, and basic authentication (for example, username and password).

D. The Kafka connector will create one table and one pipe to ingest data for each topic. If the connector cannot create the table or the pipe, it will result in an exception.
Suggested answer: D

Explanation:

The Snowflake Connector for Kafka is a Kafka Connect sink connector that reads data from one or more Apache Kafka topics and loads the data into Snowflake tables. By default, the connector creates one table and one pipe to ingest data for each topic, and if it cannot create the table or the pipe this results in an exception; the default topic-to-table mapping can be customized using the snowflake.topic2table.map configuration property. This behavior is what the Architect needs to plan for, so option D is correct.

The other options do not hold as stated. Option A is a statement about the message producers rather than a consideration the connector imposes on the Architect. Option B refers to a Kafka setting that the connector does not depend on, since it simply consumes the messages available in the topics (and 14 days is not Kafka's default retention in any case; the default is 7 days). Option C overstates the supported authentication methods: the connector authenticates to Snowflake with key pair authentication rather than basic username/password authentication.

Reference:

Installing and Configuring the Kafka Connector

Overview of the Kafka Connector

Managing the Kafka Connector

Troubleshooting the Kafka Connector
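A minimal SQL sketch of the Snowflake-side setup for the connector, using hypothetical database, schema, role, and user names (KAFKA_DB.RAW, KAFKA_CONNECTOR_ROLE, KAFKA_CONNECTOR_USER); the privilege list follows the connector installation documentation, under the assumption that the connector is allowed to create its own tables, stages, and pipes:

-- Role used by the connector's Snowflake user
CREATE ROLE IF NOT EXISTS kafka_connector_role;
GRANT USAGE ON DATABASE kafka_db TO ROLE kafka_connector_role;
GRANT USAGE, CREATE TABLE, CREATE STAGE, CREATE PIPE
  ON SCHEMA kafka_db.raw TO ROLE kafka_connector_role;
GRANT ROLE kafka_connector_role TO USER kafka_connector_user;

-- After the connector has run, the objects it created can be inspected
SHOW TABLES IN SCHEMA kafka_db.raw;
SHOW PIPES  IN SCHEMA kafka_db.raw;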
