
Snowflake ARA-C01 Practice Test - Questions Answers, Page 4


A group of Data Analysts have been granted the role ANALYST_ROLE. They need a Snowflake database where they can create and modify tables, views, and other objects to load with their own data. The Analysts should not have the ability to give other Snowflake users outside of their role access to this data.

How should these requirements be met?

A. Grant ANALYST_ROLE OWNERSHIP on the database, but make sure that ANALYST_ROLE does not have the MANAGE GRANTS privilege on the account.
B. Grant SYSADMIN ownership of the database, but grant the create schema privilege on the database to the ANALYST_ROLE.
C. Make every schema in the database a managed access schema, owned by SYSADMIN, and grant create privileges on each schema to the ANALYST_ROLE for each type of object that needs to be created.
D. Grant ANALYST_ROLE ownership on the database, but grant the ownership on future [object type]s in database privilege to SYSADMIN.
Suggested answer: A

Explanation:

Granting ANALYST_ROLE OWNERSHIP on the database allows the analysts to create and modify tables, views, and other objects within the database. However, to prevent the analysts from giving other Snowflake users outside of their role access to this data, the ANALYST_ROLE should not have the MANAGE GRANTS privilege on the account. The MANAGE GRANTS privilege enables a role to grant or revoke privileges on any object in the account, regardless of the ownership of the object [1]. Therefore, by withholding this privilege from the ANALYST_ROLE, the analysts can only grant or revoke privileges on the objects that they own within the database, and not on any other objects in the account [2].

The other options are not correct because:

B) Granting SYSADMIN ownership of the database and granting the create schema privilege on the database to the ANALYST_ROLE would allow the analysts to create schemas within the database, but not to create or modify tables, views, or other objects within those schemas. The analysts would need to have the create [object type] privilege on each schema to create or modify objects within the schema [3].

C) Making every schema in the database a managed access schema, owned by SYSADMIN, and granting create privileges on each schema to the ANALYST_ROLE for each type of object that needs to be created would allow the analysts to create and modify objects within the schemas, but not to grant or revoke privileges on those objects. In a managed access schema, object owners lose the ability to make grant decisions; only the schema owner or a role with the MANAGE GRANTS privilege can grant privileges on objects in the schema [4]. Therefore, the analysts would have to rely on SYSADMIN (or another role with MANAGE GRANTS) for any privilege changes on the objects within the schema.

D) Granting ANALYST_ROLE ownership on the database and granting the ownership on future [object type]s in database privilege to SYSADMIN would allow the analysts to create and modify objects within the database, but also to grant or revoke privileges on those objects. The ownership on future [object type]s in database privilege enables a role to automatically become the owner of any new object of the specified type that is created in the database. Therefore, by granting this privilege to SYSADMIN, the analysts would not be able to prevent SYSADMIN from accessing or modifying the objects that they create within the database.
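
As a rough illustration of the suggested approach (the database name ANALYTICS_DB and the use of SECURITYADMIN here are hypothetical, not part of the question), the grants could look like this:

    USE ROLE SECURITYADMIN;

    -- Hand the database to the analysts; they own it and everything they create in it.
    GRANT OWNERSHIP ON DATABASE analytics_db TO ROLE analyst_role COPY CURRENT GRANTS;

    -- The key restriction: MANAGE GRANTS is never granted to ANALYST_ROLE, so the
    -- analysts can only manage privileges on objects they themselves own.
    -- (Deliberately NOT executed:)
    -- GRANT MANAGE GRANTS ON ACCOUNT TO ROLE analyst_role;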

1: MANAGE GRANTS Privilege | Snowflake Documentation

2: Access Control Privileges | Snowflake Documentation

3: CREATE SCHEMA | Snowflake Documentation

4: Managed Access | Snowflake Documentation

GRANT | Snowflake Documentation

Ownership on Future Objects | Snowflake Documentation

Ownership and Revoking Privileges | Snowflake Documentation

What considerations need to be taken when using database cloning as a tool for data lifecycle management in a development environment? (Select TWO).

A. Any pipes in the source are not cloned.
B. Any pipes in the source referring to internal stages are not cloned.
C. Any pipes in the source referring to external stages are not cloned.
D. The clone inherits all granted privileges of all child objects in the source object, including the database.
E. The clone inherits all granted privileges of all child objects in the source object, excluding the database.
Suggested answer: A, D

Explanation:

Database cloning is a feature of Snowflake that allows creating a copy of a database, schema, table, or view without initially consuming any additional storage space. Database cloning can be used as a tool for data lifecycle management in a development environment, where developers and testers can work on isolated copies of production data without affecting the original data or each other [1].

However, there are some considerations that need to be taken when using database cloning in a development environment, such as:

Any pipes in the source are not cloned. Pipes are objects that load data from a stage into a table continuously. Pipes are not cloned because they are associated with a specific stage and table, and cloning them would create duplicate data loading and potential conflicts [2].

The clone inherits all granted privileges of all child objects in the source object, including the database. Privileges are the permissions that control the access and actions that can be performed on an object. When a database is cloned, the clone inherits all the privileges that were granted on the source database and its child objects, such as schemas, tables, and views. This means that the same roles that can access and modify the source database can also access and modify the clone, unless the privileges are explicitly revoked or modified [3].

The other options are not correct because:

B) Any pipes in the source referring to internal stages are not cloned. This is a subset of option A, which states that any pipes in the source are not cloned, regardless of the type of stage they refer to.

C) Any pipes in the source referring to external stages are not cloned. This is also a subset of option A, which states that any pipes in the source are not cloned, regardless of the type of stage they refer to.

E) The clone inherits all granted privileges of all child objects in the source object, excluding the database. This is incorrect, as the clone inherits all granted privileges of the source object, including the database.
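
A minimal sketch of how a development clone might be created and inspected (database names are hypothetical):

    -- Zero-copy clone of production for development work
    CREATE DATABASE dev_db CLONE prod_db;

    -- Check which pipes, if any, exist in the clone before enabling any data loading
    SHOW PIPES IN DATABASE dev_db;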

1: Database Cloning | Snowflake Documentation

2: Pipes | Snowflake Documentation

3: Access Control Privileges | Snowflake Documentation

Which columns can be included in an external table schema? (Select THREE).

A. VALUE
B. METADATA$ROW_ID
C. METADATA$ISUPDATE
D. METADATA$FILENAME
E. METADATA$FILE_ROW_NUMBER
F. METADATA$EXTERNAL TABLE PARTITION
Suggested answer: A, D, E

Explanation:

An external table schema defines the columns and data types of the data stored in an external stage. All external tables include the following columns by default:

VALUE: A VARIANT type column that represents a single row in the external file.

METADATA$FILENAME: A pseudocolumn that identifies the name of each staged data file included in the external table, including its path in the stage.

METADATA$FILE_ROW_NUMBER: A pseudocolumn that shows the row number for each record in a staged data file.

You can also create additional virtual columns as expressions using the VALUE column and/or the pseudocolumns. However, the following columns are not valid for external tables and cannot be included in the schema:

METADATA$ROW_ID: This is a metadata column of streams, not external tables; it provides a unique identifier for each row tracked by a stream.

METADATA$ISUPDATE: This is also a stream metadata column; it indicates whether a recorded change was part of an update, and it is not available in external tables.

METADATA$EXTERNAL TABLE PARTITION: This is not a valid column name and cannot be included in an external table schema.
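
A minimal sketch showing the default columns being queried on an external table (stage, file format, and table names are hypothetical):

    CREATE OR REPLACE EXTERNAL TABLE sales_ext
      LOCATION = @my_ext_stage/sales/
      FILE_FORMAT = (TYPE = PARQUET);

    -- VALUE plus the two pseudocolumns are available on every external table
    SELECT value,
           metadata$filename,
           metadata$file_row_number
    FROM sales_ext
    LIMIT 10;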

Which SQL alter command will MAXIMIZE memory and compute resources for a Snowpark stored procedure when executed on the snowpark_opt_wh warehouse?

A.
B.
C.
D.
Suggested answer: A

Explanation:

To maximize memory and compute resources for a Snowpark stored procedure, you need to set the MAX_CONCURRENCY_LEVEL parameter for the warehouse that executes the stored procedure. This parameter determines the maximum number of concurrent queries that can run on a single warehouse. By setting it to 1, you ensure that a single query can use all the available CPU cores and memory of the Snowpark-optimized warehouse, which is the recommended configuration for memory-intensive workloads such as training machine learning models. This will improve the performance and efficiency of the stored procedure, as it will not have to share resources with other queries. The other options are incorrect because they either do not change the MAX_CONCURRENCY_LEVEL parameter, or they set it to a value greater than 1, which would reduce the memory and compute resources available to the stored procedure. Reference:

1: Snowpark-optimized Warehouses

2: Training Machine Learning Models with Snowpark Python

3: Snowflake Shorts: Snowpark Optimized Warehouses
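
Assuming the explanation above is the intended answer, the statement would look something like this (only the warehouse name comes from the question):

    -- Let a single Snowpark query use all of the warehouse's memory and compute
    ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 1;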

An Architect clones a database and all of its objects, including tasks. After the cloning, the tasks stop running.

Why is this occurring?

A. Tasks cannot be cloned.
B. The objects that the tasks reference are not fully qualified.
C. Cloned tasks are suspended by default and must be manually resumed.
D. The Architect has insufficient privileges to alter tasks on the cloned database.
Suggested answer: C

Explanation:

When a database is cloned, all of its objects, including tasks, are also cloned. However, cloned tasks are suspended by default and must be manually resumed by using the ALTER TASK ... RESUME command. This prevents the cloned tasks from running unexpectedly or interfering with the original tasks. Therefore, the reason the tasks stop running after the cloning is that they are suspended by default (Option C). Options A, B, and D are not correct because tasks can be cloned, the objects that the tasks reference are also cloned and do not need to be fully qualified, and the Architect does not need to alter the tasks on the cloned database, only resume them. Reference: The answer can be verified from Snowflake's official documentation on cloning and tasks:

Cloning Objects | Snowflake Documentation

Tasks | Snowflake Documentation

ALTER TASK | Snowflake Documentation
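
For example, a cloned task could be resumed like this (database, schema, and task names are hypothetical):

    -- Resume a single cloned task
    ALTER TASK dev_db.public.load_orders_task RESUME;

    -- Or resume a root task together with all of its dependent tasks
    SELECT SYSTEM$TASK_DEPENDENTS_ENABLE('dev_db.public.load_orders_task');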

What are characteristics of the use of transactions in Snowflake? (Select TWO).

A. Explicit transactions can contain DDL, DML, and query statements.
B. The autocommit setting can be changed inside a stored procedure.
C. A transaction can be started explicitly by executing a begin work statement and end explicitly by executing a commit work statement.
D. A transaction can be started explicitly by executing a begin transaction statement and end explicitly by executing an end transaction statement.
E. Explicit transactions should contain only DML statements and query statements. All DDL statements implicitly commit active transactions.
Suggested answer: A, D

Explanation:

In Snowflake, a transaction is a sequence of SQL statements that are processed as an atomic unit. All statements in the transaction are either applied (i.e. committed) or undone (i.e. rolled back) together. Snowflake transactions guarantee ACID properties. A transaction can include both reads and writes [1].

Explicit transactions are transactions that are started and ended explicitly by using the BEGIN TRANSACTION, COMMIT, and ROLLBACK statements. Snowflake supports BEGIN WORK as a synonym for BEGIN TRANSACTION, and COMMIT WORK and ROLLBACK WORK as synonyms for COMMIT and ROLLBACK. Explicit transactions can contain DDL, DML, and query statements. However, explicit transactions should contain only DML statements and query statements, because DDL statements implicitly commit active transactions. This means that any changes made by the previous statements in the transaction are applied, and any changes made by the subsequent statements are not part of the same transaction [1].

The other options are not correct because:

B) The autocommit setting can be changed inside a stored procedure, but this does not affect the use of transactions in Snowflake. The autocommit setting determines whether each statement is executed in its own implicit transaction or not. If autocommit is enabled, each statement is committed automatically. If autocommit is disabled, each statement is executed in an implicit transaction until an explicit COMMIT or ROLLBACK is issued. Changing the autocommit setting inside a stored procedure only affects the statements within the stored procedure, and does not affect the statements outside the stored procedure [2].

C) A transaction can be started explicitly by executing a BEGIN WORK statement and ended explicitly by executing a COMMIT WORK statement, but this is not a characteristic of the use of transactions in Snowflake. This is just one way of writing the statements that start and end an explicit transaction. Snowflake also supports the synonyms BEGIN TRANSACTION and COMMIT, which are recommended over BEGIN WORK and COMMIT WORK [1].

D) A transaction can be started explicitly by executing a BEGIN TRANSACTION statement and ended explicitly by executing an END TRANSACTION statement, but this is not valid syntax in Snowflake. Snowflake does not support an END TRANSACTION statement. The correct way to end an explicit transaction is to use the COMMIT or ROLLBACK statement [1].
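
A minimal sketch of an explicit transaction containing only DML and query statements (table names are hypothetical):

    BEGIN TRANSACTION;
    INSERT INTO orders_archive SELECT * FROM orders WHERE order_date < '2023-01-01';
    DELETE FROM orders WHERE order_date < '2023-01-01';
    COMMIT;  -- or ROLLBACK; to undo both statements as a unit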

1: Transactions | Snowflake Documentation

2: AUTOCOMMIT | Snowflake Documentation

Which query will identify the specific days and virtual warehouses that would benefit from a multi-cluster warehouse to improve the performance of a particular workload?

A.
B.
C.
D.
Suggested answer: C

Explanation:

A multi-cluster warehouse is a virtual warehouse that can scale compute resources by adding or removing clusters based on the workload demand. A multi-cluster warehouse can improve the performance of a particular workload by reducing the query queue time and the data spillage to local storage. To identify the specific days and virtual warehouses that would benefit from a multi-cluster warehouse, you need to analyze the query history and look for the following indicators:

High average queued load: This metric shows the average number of queries waiting in the queue for each warehouse cluster. A high value indicates that the warehouse is overloaded and cannot handle the concurrency demand.

High bytes spilled to local storage: This metric shows the amount of data that was spilled from memory to local disk during query processing. A high value indicates that the warehouse size is too small and cannot fit the data in memory.

High variation in workload: This metric shows the fluctuation in the number of queries submitted to the warehouse over time. A high variation indicates that the workload is unpredictable and dynamic, and requires a flexible scaling policy.

The query in option C is the best one to identify these indicators, as it selects the date, warehouse name, bytes spilled to local storage, and sum of average queued load from the query history table, and filters the results where bytes spilled to local storage is greater than zero. This query will show the days and warehouses that experienced data spillage and high queue time, and could benefit from a multi-cluster warehouse with auto-scale mode.

The query in option A is not correct, as it only selects the date and warehouse name, and does not include any metrics to measure the performance of the workload. The query in option B is not correct, as it selects the date, warehouse name, and average execution time, which is not a good indicator of the need for a multi-cluster warehouse. The query in option D is not correct, as it selects the date, warehouse name, and average credits used, which is not a good indicator of the need for a multi-cluster warehouse either.
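
Since the answer options are not reproduced here, the following is only a sketch of the kind of query the explanation describes, built on the ACCOUNT_USAGE.QUERY_HISTORY view (the threshold and grouping are illustrative):

    SELECT TO_DATE(start_time)                  AS query_date,
           warehouse_name,
           SUM(bytes_spilled_to_local_storage)  AS total_bytes_spilled,
           SUM(queued_overload_time) / 1000     AS total_queued_overload_seconds
    FROM snowflake.account_usage.query_history
    WHERE bytes_spilled_to_local_storage > 0
       OR queued_overload_time > 0
    GROUP BY 1, 2
    ORDER BY 1, 2;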

When loading data into a table that captures the load time in a column with a default value of either CURRENT_TIME() or CURRENT_TIMESTAMP(), what will occur?

A. All rows loaded using a specific COPY statement will have varying timestamps based on when the rows were inserted.
B. Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were read from the source.
C. Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were created in the source.
D. All rows loaded using a specific COPY statement will have the same timestamp value.
Suggested answer: D

Explanation:

According to the Snowflake documentation, when loading data into a table that captures the load time in a column with a default value of either CURRENT_TIME() or CURRENT_TIMESTAMP(), the default value is evaluated once per COPY statement, not once per row. Therefore, all rows loaded using a specific COPY statement will have the same timestamp value. This behavior ensures that the timestamp value reflects the time when the data was loaded into the table, not when the data was read from the source or created in the source. Reference:

Snowflake Documentation: Loading Data into Tables with Default Values

Snowflake Documentation: COPY INTO table
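
A small illustration of this behavior (table, stage, and file format details are hypothetical); every row loaded by the single COPY statement below receives the same load_ts value:

    CREATE OR REPLACE TABLE sales (
      id      INTEGER,
      amount  NUMBER(10,2),
      load_ts TIMESTAMP_LTZ DEFAULT CURRENT_TIMESTAMP()
    );

    COPY INTO sales (id, amount)
    FROM @sales_stage/2024/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);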

How does a standard virtual warehouse policy work in Snowflake?

A. It conserves credits by keeping running clusters fully loaded rather than starting additional clusters.
B. It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 6 minutes.
C. It starts only if the system estimates that there is a query load that will keep the cluster busy for at least 2 minutes.
D. It prevents or minimizes queuing by starting additional clusters instead of conserving credits.
Suggested answer: D

Explanation:

A standard virtual warehouse policy is one of the two scaling policies available for multi-cluster warehouses in Snowflake; the other policy is Economy. A standard policy aims to prevent or minimize queuing by starting additional clusters as soon as the current clusters are fully loaded, rather than waiting for sufficient query load to justify a new cluster. This policy can improve query performance and concurrency, but it may also consume more credits than an economy policy, which tries to conserve credits by keeping the running clusters fully loaded before starting additional clusters. The scaling policy can be set when creating or modifying a warehouse, and it can be changed at any time.
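
A sketch of a multi-cluster warehouse configured with the standard scaling policy (the warehouse name, size, and cluster counts are illustrative):

    CREATE WAREHOUSE reporting_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4
      SCALING_POLICY = 'STANDARD';  -- favor starting extra clusters over conserving credits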

Snowflake Documentation: Multi-cluster Warehouses

Snowflake Documentation: Scaling Policy for Multi-cluster Warehouses

Which feature provides the capability to define an alternate cluster key for a table with an existing cluster key?

A. External table
B. Materialized view
C. Search optimization
D. Result cache
Suggested answer: B

Explanation:

A materialized view is a feature that provides the capability to define an alternate cluster key for a table with an existing cluster key. A materialized view is a pre-computed result set that is stored in Snowflake and can be queried like a regular table. A materialized view can have a different cluster key than the base table, which can improve the performance and efficiency of queries on the materialized view. A materialized view can also support aggregations and filters on the base table data (joins are not supported). A materialized view is automatically and transparently maintained by Snowflake when the underlying data in the base table changes [1].
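
A sketch of this pattern (table, view, and column names are hypothetical):

    -- Base table clustered by order_date
    CREATE TABLE orders (order_id INT, customer_id INT, order_date DATE)
      CLUSTER BY (order_date);

    -- Materialized view over the same data, clustered by an alternate key
    CREATE MATERIALIZED VIEW orders_by_customer
      CLUSTER BY (customer_id)
      AS SELECT order_id, customer_id, order_date FROM orders;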

1: Materialized Views | Snowflake Documentation
