Snowflake ADA-C01 Practice Test - Questions Answers, Page 5

A Snowflake user runs a complex SQL query on a dedicated virtual warehouse that reads a large amount of data from micro-partitions. The same user wants to run another query that uses the same data set.

Which action would provide optimal performance for the second SQL query?

A. Assign additional clusters to the virtual warehouse.
B. Increase the STATEMENT_TIMEOUT_IN_SECONDS parameter in the session.
C. Prevent the virtual warehouse from suspending between the running of the first and second queries.
D. Use the RESULT_SCAN function to post-process the output of the first query.
Suggested answer: D

Explanation:

According to the Using Persisted Query Results documentation, the RESULT_SCAN function allows you to query the result set of a previous command as if it were a table. This can improve the performance of the second query by avoiding reading the same data from micro-partitions again. The other actions do not provide optimal performance for the second query because:

* Assigning additional clusters to the virtual warehouse improves concurrency when many queries run at the same time, but it does not speed up a single query or avoid re-reading the same micro-partitions. It also increases the cost of the warehouse.

* Increasing the STATEMENT_TIMEOUT_IN_SECONDS parameter in the session does not improve the performance of the query; it only allows the query to run longer before being cancelled for exceeding the timeout.

* Preventing the virtual warehouse from suspending between the running of the first and second queries does not guarantee that the data will still be in the warehouse's local cache, as Snowflake evicts cached data on a least recently used (LRU) basis. It also increases the cost of the warehouse.

https://docs.snowflake.com/en/user-guide/querying-persisted-results
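
As an illustration, a minimal sketch of the RESULT_SCAN pattern (the table, columns, and filter are hypothetical):

-- First query does the expensive scan on the dedicated warehouse
SELECT region, SUM(amount) AS total_amount
FROM sales
GROUP BY region;

-- Second query post-processes the persisted result of the first query,
-- without reading the micro-partitions again
SELECT region, total_amount
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE total_amount > 1000000;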

For Snowflake network policies, what will occur when the account_level and user_level network policies are both defined?

A. The account_level policy will override the user_level policy.
B. The user_level policy will override the account_level policy.
C. The user_level network policies will not be supported.
D. A network policy error will be generated with no definitions provided.
Suggested answer: B

Explanation:

According to the Network Policies documentation, a network policy can be applied to an account, a security integration, or a user. If there are network policies applied to more than one of these, the most specific network policy overrides more general network policies. The following summarizes the order of precedence:

* Account: Network policies applied to an account are the most general network policies. They are overridden by network policies applied to a security integration or user.

* Security Integration: Network policies applied to a security integration override network policies applied to the account, but are overridden by a network policy applied to a user.

* User: Network policies applied to a user are the most specific network policies. They override network policies applied to both the account and security integrations.

Therefore, if both the account_level and user_level network policies are defined, the user_level policy will take effect and the account_level policy will be ignored. The other options are incorrect because:

* The account_level policy will not override the user_level policy, as explained above.

* The user_level network policies will be supported, as they are part of the network policy feature.

* A network policy error will not be generated, as there is no conflict between the account_level and user_level network policies.
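
As a hedged illustration of this precedence (the policy names, user name, and IP ranges are hypothetical):

-- Account-level policy: broad corporate IP range
CREATE NETWORK POLICY corp_policy ALLOWED_IP_LIST = ('192.168.0.0/16');
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;

-- User-level policy: overrides the account-level policy for this user only
CREATE NETWORK POLICY etl_user_policy ALLOWED_IP_LIST = ('10.10.1.0/24');
ALTER USER etl_user SET NETWORK_POLICY = etl_user_policy;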

MY_TABLE is a table that has not been updated or modified for several days. On 01 January 2021 at 07:01, a user executed a query to update this table. The query ID is '8e5d0ca9-005e-44e6-b858-a8f5b37c5726'. It is now 07:30 on the same day.

Which queries will allow the user to view the historical data that was in the table before this query was executed? (Select THREE).

A. SELECT * FROM my table WITH TIME_TRAVEL (OFFSET => -60*30);
B. SELECT * FROM my_table AT (TIMESTAMP => '2021-01-01 07:00:00' :: timestamp);
C. SELECT * FROM TIME_TRAVEL ('MY_TABLE', 2021-01-01 07:00:00);
D. SELECT * FROM my table PRIOR TO STATEMENT '8e5d0ca9-005e-44e6-b858-a8f5b37c5726';
E. SELECT * FROM my_table AT (OFFSET => -60*30);
F. SELECT * FROM my_table BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');
Suggested answer: B, D, F

Explanation:

According to the AT | BEFORE documentation, the AT or BEFORE clause is used for Snowflake Time Travel, which allows you to query historical data from a table based on a specific point in the past. The clause can use one of the following parameters to pinpoint the exact historical data you wish to access:

* TIMESTAMP: Specifies an exact date and time to use for Time Travel.

* OFFSET: Specifies the difference in seconds from the current time to use for Time Travel.

* STATEMENT: Specifies the query ID of a statement to use as the reference point for Time Travel.

Therefore, the queries that will allow the user to view the historical data that was in the table before the query was executed are:

* B. SELECT * FROM my_table AT (TIMESTAMP => '2021-01-01 07:00:00' :: timestamp); This query uses the TIMESTAMP parameter to specify a point in time that is before the query execution time of 07:01.

* D. SELECT * FROM my table PRIOR TO STATEMENT '8e5d0ca9-005e-44e6-b858-a8f5b37c5726'; This query uses the PRIOR TO STATEMENT keyword and the STATEMENT parameter to specify a point in time that is immediately preceding the query execution time of 07:01.

* F. SELECT * FROM my_table BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726'); This query uses the BEFORE keyword and the STATEMENT parameter to specify a point in time that is immediately preceding the query execution time of 07:01.

The other queries are incorrect because:

* A. SELECT * FROM my table WITH TIME_TRAVEL (OFFSET => -60*30); This is not valid Time Travel syntax. Snowflake does not support a WITH TIME_TRAVEL clause; the AT or BEFORE clause must follow the table name in the FROM clause, so this query will fail rather than return the historical data.

* C. SELECT * FROM TIME_TRAVEL ('MY_TABLE', 2021-01-01 07:00:00); This query is not valid syntax for Time Travel. The TIME_TRAVEL function does not exist in Snowflake. The correct syntax is to use the AT or BEFORE clause after the table name in the FROM clause.

* E. SELECT * FROM my_table AT (OFFSET => -60*30); This query uses the AT keyword with the OFFSET parameter, which is evaluated relative to the current time (07:30) and so resolves to roughly 07:00. Because the offset is relative to whenever the statement runs rather than anchored to the update itself, and because the AT keyword is inclusive of any changes made by a statement or transaction with a timestamp equal to the specified parameter, this form does not reliably exclude the changes made by the 07:01 update. To exclude them deterministically, the BEFORE keyword with the STATEMENT parameter should be used instead.
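
For completeness, the same BEFORE (STATEMENT => ...) syntax can also preserve the pre-update state as a separate object; a minimal sketch (the target table name is hypothetical):

-- Clone the table as it existed immediately before the 07:01 update
CREATE TABLE my_table_restored CLONE my_table
BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');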

What are characteristics of Dynamic Data Masking? (Select TWO).

A. A masking policy that is currently set on a table can be dropped.
B. A single masking policy can be applied to columns in different tables.
C. A masking policy can be applied to the VALUE column of an external table.
D. The role that creates the masking policy will always see unmasked data in query results.
E. A single masking policy can be applied to columns with different data types.
Suggested answer: B, E

Explanation:

According to the Using Dynamic Data Masking documentation, Dynamic Data Masking is a feature that allows you to alter sections of data in table and view columns at query time using a predefined masking strategy. The following are some of the characteristics of Dynamic Data Masking:

* A single masking policy can be applied to columns in different tables. This means that you can write a policy once and have it apply to thousands of columns across databases and schemas.

* A single masking policy can be applied to columns with different data types. This means that you can use the same masking strategy for columns that store different kinds of data, such as strings, numbers, dates, etc.

* A masking policy that is currently set on a table cannot be dropped directly. The policy must first be unset from every column that references it before DROP MASKING POLICY will succeed, which is why option A is not one of the correct characteristics.

* A masking policy can be applied to the VALUE column of an external table. This means that you can mask data that is stored in an external stage and queried through an external table.

* The role that creates the masking policy will always see unmasked data in query results. This is not true, as the masking policy can also apply to the creator role depending on the execution context conditions defined in the policy. For example, if the policy specifies that only users with a certain custom entitlement can see the unmasked data, then the creator role will also need to have that entitlement to see the unmasked data.
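
As an illustration of applying one masking policy to columns in different tables, a minimal sketch (the policy, role, table, and column names are hypothetical):

-- One masking policy, defined once
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val ELSE '*** MASKED ***' END;

-- The same policy applied to columns in two different tables
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;
ALTER TABLE employees MODIFY COLUMN work_email SET MASKING POLICY email_mask;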

A Snowflake Administrator needs to set up Time Travel for a presentation area that includes fact and dimension tables, and receives a lot of meaningless and erroneous IoT data. Time Travel is being used as a component of the company's data quality process in which the ingestion pipeline should revert to a known quality data state if any anomalies are detected in the latest load. Data from the past 30 days may have to be retrieved because of latencies in the data acquisition process.

According to best practices, how should these requirements be met? (Select TWO).

A. Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas.
B. The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS.
C. The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas).
D. Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables.
E. The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data.
Suggested answer: B, E

Explanation:

According to the Understanding & Using Time Travel documentation, Time Travel is a feature that allows you to query, clone, and restore historical data in tables, schemas, and databases for up to 90 days. To meet the requirements of the scenario, the following best practices should be followed:

* The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS. This parameter specifies the number of days for which the historical data is preserved and can be accessed by Time Travel. To ensure that the fact and dimension tables can be reverted to a consistent state in case of any anomalies in the latest load, they should have the same retention period. Otherwise, some tables may lose their historical data before others, resulting in data inconsistency and quality issues.

* The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data. Cloning is a way of creating a copy of an object (table, schema, or database) at a specific point in time using Time Travel. To ensure that the fact and dimension tables are cloned with the same data set, they should be cloned together using the same AT or BEFORE clause. This will avoid any referential integrity issues that may arise from cloning tables at different points in time.

The other options are incorrect because:

* Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas. This is not a best practice for Time Travel, as it does not affect the ability to query, clone, or restore historical data. However, it may be a good practice for data modeling and organization, depending on the use case and design principles.

* The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas). This is not a best practice for Time Travel, as it limits the flexibility and granularity of setting the retention period for different objects. The retention period can be set at the account, database, schema, or table level, and the most specific setting overrides the more general ones. This allows for customizing the retention period based on the data needs and characteristics of each object.

* Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables. This is not a best practice for Time Travel, as the table type does not affect the referential integrity between the tables. Transient tables do not have a Fail-safe period, which means that they cannot be recovered by Snowflake after the retention period ends, and their Time Travel retention period is limited to a maximum of 1 day, which cannot satisfy the 30-day requirement in this scenario. The choice of table type depends on data durability and availability requirements, not on referential integrity.
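
A minimal sketch of these two practices, assuming a hypothetical PRESENTATION schema containing the fact and dimension tables:

-- Same Time Travel retention on the fact and dimension tables
-- (retention beyond 1 day requires Enterprise Edition)
ALTER TABLE presentation.fact_sales SET DATA_RETENTION_TIME_IN_DAYS = 30;
ALTER TABLE presentation.dim_product SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- Clone the containing schema once, so both tables are restored from the same point in time
CREATE SCHEMA presentation_restored CLONE presentation
BEFORE (STATEMENT => '<query_id_of_the_bad_load>');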

A Snowflake Administrator needs to persist all virtual warehouse configurations for auditing and backups. Given a table already exists with the following schema:

Table Name : VWH_META

Column 1 : SNAPSHOT_TIME TIMESTAMP_NTZ

Column 2 : CONFIG VARIANT

Which commands should be executed to persist the warehouse data at the time of execution in JSON format in the table VWH_META?

A. 1. SHOW WAREHOUSES; 2. INSERT INTO VWH_META SELECT CURRENT_TIMESTAMP(), FROM TABLE (RESULT_SCAN (LAST_QUERY_ID(1)));
B. 1. SHOW WAREHOUSES; 2. INSERT INTO VWH_META SELECT CURRENT_TIMESTAMP(), * FROM TABLE (RESULT_SCAN (LAST_QUERY_ID()));
C. 1. SHOW WAREHOUSES; 2. INSERT INTO VWH_META SELECT CURRENT_TIMESTAMP(), OBJECT_CONSTRUCT(*) FROM TABLE (RESULT_SCAN (LAST_QUERY_ID()));
D. 1. SHOW WAREHOUSES; 2. INSERT INTO VWH_META SELECT CURRENT_TIMESTAMP(), * FROM TABLE (RESULT_SCAN (SELECT LAST_QUERY_ID(-1)));
Suggested answer: C

Explanation:

According to the Using Persisted Query Results documentation, the RESULT_SCAN function allows you to query the result set of a previous command as if it were a table. The LAST_QUERY_ID function returns the query ID of the most recent statement executed in the current session. Therefore, the combination of these two functions can be used to access the output of the SHOW WAREHOUSES command, which returns the configurations of all the virtual warehouses in the account. However, to persist the warehouse data in JSON format in the table VWH_META, the OBJECT_CONSTRUCT function is needed to convert each row of the SHOW WAREHOUSES output into a VARIANT value. OBJECT_CONSTRUCT returns a JSON object built from key-value pairs; when called as OBJECT_CONSTRUCT(*), it builds the object from all the columns of the row. Therefore, the correct commands to execute are:

1. SHOW WAREHOUSES;

2. INSERT INTO VWH_META SELECT CURRENT_TIMESTAMP (), OBJECT_CONSTRUCT (*) FROM TABLE (RESULT_SCAN (LAST_QUERY_ID ()));

The other options are incorrect because:

* A) This option does not use the OBJECT_CONSTRUCT function, so it will not persist the warehouse data in JSON format. Also, it is missing the * symbol in the SELECT clause, so it will not select any columns from the result set of the SHOW WAREHOUSES command.

* B) This option does not use the OBJECT_CONSTRUCT function, so it will not persist the warehouse data in JSON format. It will also try to insert multiple columns into a single VARIANT column, which will cause a type mismatch error.

* D) This option does not use the OBJECT_CONSTRUCT function, so it will not persist the warehouse data in JSON format. It will also try to use the RESULT_SCAN function on a subquery, which is not supported. The RESULT_SCAN function accepts a query ID (for example, the value returned by LAST_QUERY_ID), not a SELECT statement.
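
Once persisted, the VARIANT column can be queried with standard JSON path notation; a small hedged example (the keys shown correspond to SHOW WAREHOUSES output columns such as "name" and "size"):

SELECT snapshot_time,
       config:"name"::string AS warehouse_name,
       config:"size"::string AS warehouse_size
FROM vwh_meta
ORDER BY snapshot_time DESC;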

What are the requirements when creating a new account within an organization in Snowflake? (Select TWO).

A. The account requires at least one ORGADMIN role within one of the organization's accounts.
B. The account name is immutable and cannot be changed.
C. The account name must be specified when the account is created.
D. The account name must be unique among all Snowflake customers.
E. The account name must be unique within the organization.
Suggested answer: C, E

Explanation:

According to the CREATE ACCOUNT documentation, the account name must be specified when the account is created, and it must be unique within an organization, regardless of which Snowflake Region the account is in. The other options are incorrect because:

* The account does not require at least one ORGADMIN role within one of the organization's accounts. The account can be created by an organization administrator (i.e. a user with the ORGADMIN role) through the web interface or using SQL, but the new account does not inherit the ORGADMIN role from the existing account. The new account will have its own set of users, roles, databases, and warehouses.

* The account name is not immutable and can be changed. The account name can be modified by contacting Snowflake Support and requesting a name change. However, changing the account name may affect some features that depend on the account name, such as SSO or SCIM.

* The account name does not need to be unique among all Snowflake customers, only within the organization, because the account URL also includes region and cloud platform information (for example, myaccount.us-east-1.snowflakecomputing.com versus myaccount.eu-west-1.aws.snowflakecomputing.com), so accounts belonging to different organizations can use the same name.
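
As a hedged sketch of creating an account as ORGADMIN (all identifiers and values are placeholders):

USE ROLE ORGADMIN;

CREATE ACCOUNT analytics_prod
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = '<strong_password>'
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE
  REGION = aws_us_east_1;  -- optional; defaults to the region of the account executing the command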

A Snowflake customer is experiencing higher costs than anticipated while migrating their data warehouse workloads from on-premises to Snowflake. The migration workloads have been deployed on a single warehouse and are characterized by a large number of small INSERTs rather than bulk loading of large extracts. That single warehouse has been configured as a single-cluster 2XL because many parallel INSERTs are scheduled during nightly loads.

How can the Administrator reduce the costs, while minimizing the overall load times, for migrating data warehouse history?

A. There should be another 2XL warehouse deployed to handle a portion of the load queries.
B. The 2XL warehouse should be changed to 4XL to increase the number of threads available for parallel load queries.
C. The warehouse should be kept as a SMALL or XSMALL and configured as a multi-cluster warehouse to handle the parallel load queries.
D. The INSERTS should be converted to several tables to avoid contention on large tables that slows down query processing.
Suggested answer: C

Explanation:

According to the Snowflake Warehouse Cost Optimization blog post, one of the strategies to reduce the cost of running a warehouse is to use a multi-cluster warehouse with auto-scaling enabled. This allows the warehouse to automatically adjust the number of clusters based on the concurrency demand and the queue size. A multi-cluster warehouse can also be configured with a minimum and maximum number of clusters, as well as a scaling policy to control the scaling behavior. This way, the warehouse can handle the parallel load queries efficiently without wasting resources or credits. The blog post also suggests using a smaller warehouse size, such as SMALL or XSMALL, for loading data, as it can perform better than a larger warehouse size for small INSERTs. Therefore, the best option to reduce the costs while minimizing the overall load times for migrating data warehouse history is to keep the warehouse as a SMALL or XSMALL and configure it as a multi-cluster warehouse to handle the parallel load queries. The other options are incorrect because:

* A. Deploying another 2XL warehouse to handle a portion of the load queries will not reduce the costs, but increase them. It will also introduce complexity and potential inconsistency in managing the data loading process across multiple warehouses.

* B. Changing the 2XL warehouse to 4XL will not reduce the costs, but increase them. It will also provide more compute resources than needed for small INSERTs, which are not CPU-intensive but I/O-intensive.

* D. Converting the INSERTs to several tables will not reduce the costs, but increase them. It will also create unnecessary data duplication and fragmentation, which will affect the query performance and data quality.
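
A minimal sketch of such a load warehouse (the name and limits are illustrative; multi-cluster warehouses require Enterprise Edition or higher):

CREATE WAREHOUSE load_wh
  WAREHOUSE_SIZE = 'XSMALL'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 6          -- scales out only while the parallel nightly INSERTs queue up
  SCALING_POLICY = 'STANDARD'
  AUTO_SUSPEND = 60              -- seconds of inactivity before suspending
  AUTO_RESUME = TRUE;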

What roles or security privileges will allow a consumer account to request and get data from the Data Exchange? (Select TWO).

A. SYSADMIN
B. SECURITYADMIN
C. ACCOUNTADMIN
D. IMPORT SHARE and CREATE DATABASE
E. IMPORT PRIVILEGES and SHARED DATABASE
Suggested answer: C, D

Explanation:

According to the Accessing a Data Exchange documentation, a consumer account can request and get data from the Data Exchange using either the ACCOUNTADMIN role or a role with the IMPORT SHARE and CREATE DATABASE privileges. The ACCOUNTADMIN role is the top-level role that has all privileges on all objects in the account, including the ability to request and get data from the Data Exchange. A role with the IMPORT SHARE and CREATE DATABASE privileges can also request and get data from the Data Exchange, as these are the minimum privileges required to create a database from a share. The other options are incorrect because:

* A. The SYSADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SYSADMIN role is a pre-defined role that can create and manage warehouses, databases, and other objects in the account, but it does not include the privileges reserved for the ACCOUNTADMIN role, such as managing users, roles, and shares.

* B. The SECURITYADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SECURITYADMIN role is a pre-defined role that manages grants and security-related objects in the account, such as network policies, but it does not own or create data objects, such as databases, schemas, and tables.

* E. IMPORT PRIVILEGES and SHARED DATABASE are not valid privilege names in Snowflake. The correct privileges are IMPORT SHARE and CREATE DATABASE, as explained above.
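
A hedged sketch of granting the minimum consumer privileges and then creating a database from the obtained share (the role, share, and database names are hypothetical):

-- Executed by ACCOUNTADMIN in the consumer account
GRANT IMPORT SHARE ON ACCOUNT TO ROLE data_consumer;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE data_consumer;

-- The delegated role can now create a database from the share obtained through the Data Exchange
USE ROLE data_consumer;
CREATE DATABASE exchange_db FROM SHARE provider_account.share_name;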

An Administrator wants to delegate the administration of a company's data exchange to users who do not have access to the ACCOUNTADMIN role.

How can this requirement be met?

A. Grant imported privileges on data exchange EXCHANGE_NAME to ROLE_NAME;
B. Grant modify on data exchange EXCHANGE_NAME to ROLE_NAME;
C. Grant ownership on data exchange EXCHANGE_NAME to ROLE_NAME;
D. Grant usage on data exchange EXCHANGE_NAME to ROLE_NAME;
Suggested answer: B

Explanation:

According to the GRANT MODIFY documentation, the MODIFY privilege on a data exchange allows a role to perform administrative tasks on the data exchange, such as inviting members, approving profiles, and reviewing listings. This privilege can be granted by the ACCOUNTADMIN role or a role that already has the MODIFY privilege on the data exchange. Therefore, to delegate the administration of a company's data exchange to users who do not have access to the ACCOUNTADMIN role, the best option is to grant the MODIFY privilege on the data exchange to a role that the users can assume. The other options are incorrect because:

* A. IMPORTED PRIVILEGES is a privilege that is granted on a database created from a share so that other roles can query the shared data; it cannot be granted on a data exchange and relates to the consumption of shared data, not to the administration of a data exchange.

* C. OWNERSHIP is a special privilege that denotes the role with full control over an object. It cannot be granted like a regular privilege; it can only be transferred by the current owner to another role using the GRANT OWNERSHIP command, so it is not a practical way to delegate day-to-day administration of a data exchange.

* D. The USAGE privilege on a data exchange allows a role to access the data exchange and view the available data listings. This privilege does not allow a role to perform administrative tasks on the data exchange, such as inviting members, approving profiles, and reviewing listings. Therefore, this option is not sufficient for delegating the administration of a data exchange.
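
A minimal sketch of this delegation, using the syntax shown in option B (the exchange and role names are placeholders):

USE ROLE ACCOUNTADMIN;
GRANT MODIFY ON DATA EXCHANGE my_exchange TO ROLE exchange_admin;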
