Snowflake ARA-C01 Practice Test - Questions Answers, Page 15

What actions are permitted when using the Snowflake SQL REST API? (Select TWO).

A. The use of a GET command
B. The use of a PUT command
C. The use of a ROLLBACK command
D. The use of a CALL command to a stored procedure which returns a table
E. Submitting multiple SQL statements in a single call

Suggested answer: D, E

An Architect has a design where files arrive every 10 minutes and are loaded into a primary database table using Snowpipe. A secondary database is refreshed every hour with the latest data from the primary database.

Based on this scenario, what Time Travel query options are available on the secondary database?

A. A query using Time Travel in the secondary database is available for every hourly table version within the retention window.
B. A query using Time Travel in the secondary database is available for every hourly table version within and outside the retention window.
C. Using Time Travel, secondary database users can query every iterative version within each hour (the individual Snowpipe loads) in the retention window.
D. Using Time Travel, secondary database users can query every iterative version within each hour (the individual Snowpipe loads) and outside the retention window.

Suggested answer: A

Explanation:

Snowflake's Time Travel feature allows users to query historical data within a defined retention period. In the given scenario, since the secondary database is refreshed every hour, Time Travel can be used to query each hourly version of the table as long as it falls within the retention window. This does not include individual Snowpipe loads within each hour unless they coincide with the hourly refresh.
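
As an illustration, a Time Travel query against an hourly version of a table in the secondary database can use the AT clause. The database, table, and timestamp names below are hypothetical:

-- Query the table version as of a specific hourly refresh:
SELECT COUNT(*)
FROM secondary_db.public.route_weather
  AT(TIMESTAMP => '2024-06-01 09:00:00'::TIMESTAMP_LTZ);

-- Or relative to the current time, e.g. the table version from two hours ago:
SELECT COUNT(*)
FROM secondary_db.public.route_weather
  AT(OFFSET => -2*60*60);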

An Architect for a multi-national transportation company has a system that is used to check the weather conditions along vehicle routes. The data is provided to drivers.

The weather information is delivered regularly by a third-party company and is generated as a JSON structure. The data is then loaded into Snowflake into a column with a VARIANT data type. This table is queried directly to deliver the statistics to the drivers with minimal delay.

A single entry includes (but is not limited to):

- Weather condition: cloudy, sunny, rainy, etc.

- Degree

- Longitude and latitude

- Timeframe

- Location address

- Wind

The table holds more than 10 years' worth of data in order to deliver the statistics from different years and locations. The amount of data on the table increases every day.

The drivers report that they are not receiving the weather statistics for their locations in time.

What can the Architect do to deliver the statistics to the drivers faster?

A. Create an additional table in the schema for longitude and latitude. Determine a regular task to fill this information by extracting it from the JSON dataset.
B. Add search optimization service on the variant column for longitude and latitude in order to query the information by using specific metadata.
C. Divide the table into several tables for each year by using the timeframe information from the JSON dataset in order to process the queries in parallel.
D. Divide the table into several tables for each location by using the location address information from the JSON dataset in order to process the queries in parallel.

Suggested answer: B

Explanation:

To improve the performance of queries on semi-structured data, such as JSON stored in a VARIANT column, Snowflake's search optimization service can be utilized. By adding search optimization specifically for the longitude and latitude fields within the VARIANT column, the system can perform point lookups and substring queries more efficiently. This will allow for faster retrieval of weather statistics, which is critical for the drivers to receive timely updates.
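
As a sketch of this approach (the table and column names are hypothetical), search optimization can be enabled for specific paths inside the VARIANT column:

ALTER TABLE weather_data
  ADD SEARCH OPTIMIZATION ON EQUALITY(payload:longitude, payload:latitude);

-- Confirm which search optimization configurations are active on the table:
DESCRIBE SEARCH OPTIMIZATION ON weather_data;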

A new table and streams are created with the following commands:

CREATE OR REPLACE TABLE LETTERS (ID INT, LETTER STRING);

CREATE OR REPLACE STREAM STREAM_1 ON TABLE LETTERS;

CREATE OR REPLACE STREAM STREAM_2 ON TABLE LETTERS APPEND_ONLY = TRUE;

The following operations are processed on the newly created table:

INSERT INTO LETTERS VALUES (1, 'A');

INSERT INTO LETTERS VALUES (2, 'B');

INSERT INTO LETTERS VALUES (3, 'C');

TRUNCATE TABLE LETTERS;

INSERT INTO LETTERS VALUES (4, 'D');

INSERT INTO LETTERS VALUES (5, 'E');

INSERT INTO LETTERS VALUES (6, 'F');

DELETE FROM LETTERS WHERE ID = 6;

What would be the output of the following SQL commands, in order?

SELECT COUNT(*) FROM STREAM_1;

SELECT COUNT(*) FROM STREAM_2;

A. 2 & 6
B. 2 & 3
C. 4 & 3
D. 4 & 6

Suggested answer: A

Explanation:

In Snowflake, a stream records DML changes to its base table since the stream was created or its offset was last advanced, and a standard stream returns only the net change within that window. For STREAM_1, the first three inserts are cancelled out by the TRUNCATE, and the insert of row 6 is cancelled out by the subsequent DELETE, leaving only the inserts of rows 4 and 5, so the count is 2. STREAM_2 is an append-only stream, which records inserts only and ignores the TRUNCATE and the DELETE, so it still shows all six inserted rows, giving a count of 6.
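
To see why the counts differ, the individual change records can be inspected through the stream metadata columns (a minimal sketch using the objects defined above):

-- Standard stream: only the net changes (rows 4 and 5 as inserts) remain.
SELECT ID, LETTER, METADATA$ACTION, METADATA$ISUPDATE FROM STREAM_1;

-- Append-only stream: all six inserted rows, regardless of later deletes.
SELECT ID, LETTER, METADATA$ACTION FROM STREAM_2;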

When loading data into a table that captures the load time in a column with a default value of either CURRENT_TIME() or CURRENT_TIMESTAMP(), what will occur?

A. All rows loaded using a specific COPY statement will have varying timestamps based on when the rows were inserted.
B. Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were read from the source.
C. Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were created in the source.
D. All rows loaded using a specific COPY statement will have the same timestamp value.

Suggested answer: D

Explanation:

When using the COPY command to load data into Snowflake, if a column has a default value set to CURRENT_TIME() or CURRENT_TIMESTAMP(), all rows loaded by that specific COPY command will have the same timestamp. This is because the default value for the timestamp is evaluated at the start of the COPY operation, and that same value is applied to all rows loaded by that operation.
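
A minimal sketch of this behavior, assuming a hypothetical table and stage; the loaded_at column is omitted from the COPY column list, so its default is applied:

CREATE OR REPLACE TABLE orders (
    order_id  INT,
    amount    NUMBER(10,2),
    loaded_at TIMESTAMP_LTZ DEFAULT CURRENT_TIMESTAMP()
);

COPY INTO orders (order_id, amount)
  FROM @orders_stage
  FILE_FORMAT = (TYPE = 'CSV');
-- Every row loaded by this single COPY statement receives the same loaded_at value.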

In a managed access schema, what are characteristics of the roles that can manage object privileges? (Select TWO).

A. Users with the SYSADMIN role can grant object privileges in a managed access schema.
B. Users with the SECURITYADMIN role or higher, can grant object privileges in a managed access schema.
C. Users who are database owners can grant object privileges in a managed access schema.
D. Users who are schema owners can grant object privileges in a managed access schema.
E. Users who are object owners can grant object privileges in a managed access schema.

Suggested answer: B, D

Explanation:

In a managed access schema, the privilege management is centralized with the schema owner, who has the authority to grant object privileges within the schema. Additionally, the SECURITYADMIN role has the capability to manage object grants globally, which includes within managed access schemas. Other roles, such as SYSADMIN or database owners, do not inherently have this privilege unless explicitly granted.
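
For illustration (database, schema, table, and role names are hypothetical), privileges in a managed access schema are granted centrally by the schema owner or a role with the MANAGE GRANTS privilege:

CREATE SCHEMA analytics.reporting WITH MANAGED ACCESS;

-- Run as the schema owner, or as SECURITYADMIN (which holds MANAGE GRANTS):
GRANT SELECT ON TABLE analytics.reporting.sales TO ROLE analyst_role;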

An Architect is designing a data lake with Snowflake. The company has structured, semi-structured, and unstructured data. The company wants to save the data inside the data lake within the Snowflake system. The company is planning on sharing data among its corporate branches using Snowflake data sharing.

What should be considered when sharing the unstructured data within Snowflake?

A. A pre-signed URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with no time limit for the URL.
B. A scoped URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with a 24-hour time limit for the URL.
C. A file URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with a 7-day time limit for the URL.
D. A file URL should be used to save the unstructured data into Snowflake in order to share data over secure views, with the 'expiration_time' argument defined for the URL time limit.

Suggested answer: B

Explanation:

When sharing unstructured data within Snowflake, using a scoped URL is recommended. Scoped URLs provide temporary access to staged files without granting privileges to the stage itself, enhancing security. The URL expires when the persisted query result period ends, which is currently set to 24 hours. This approach is suitable for sharing unstructured data over secure views within Snowflake's data sharing framework.
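
As a sketch (the stage and view names are hypothetical), scoped URLs can be generated inside a secure view so that consumers receive temporary file access without any privileges on the stage itself:

-- Assumes @doc_stage is a stage with a directory table enabled.
CREATE SECURE VIEW weather_documents AS
SELECT
    relative_path,
    BUILD_SCOPED_FILE_URL(@doc_stage, relative_path) AS scoped_url
FROM DIRECTORY(@doc_stage);
-- Each scoped URL expires with the cached query result, currently after 24 hours.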

Which Snowflake architecture recommendation needs multiple Snowflake accounts for implementation?

A. Enable a disaster recovery strategy across multiple cloud providers.
B. Create external stages pointing to cloud providers and regions other than the region hosting the Snowflake account.
C. Enable zero-copy cloning among the development, test, and production environments.
D. Enable separation of the development, test, and production environments.

Suggested answer: D

Explanation:

The Snowflake architecture recommendation that necessitates multiple Snowflake accounts for implementation is the separation of the development, test, and production environments. Giving each environment its own account provides dedicated resources and strict security isolation, following the same principle as the Account per Tenant (APT) pattern, which isolates tenants into separate Snowflake accounts.

* Snowflake's white paper "Design Patterns for Building Multi-Tenant Applications on Snowflake" discusses the APT model and its requirement for a separate Snowflake account for each tenant.

* Snowflake Documentation on Secure Data Sharing, which mentions the possibility of sharing data across multiple accounts.
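
As an illustrative sketch (account name, admin credentials, and edition are placeholders), separate environments are created as separate accounts in the organization using the ORGADMIN role:

USE ROLE ORGADMIN;

CREATE ACCOUNT dev_account
  ADMIN_NAME = 'dev_admin'
  ADMIN_PASSWORD = '<placeholder-password>'
  EMAIL = 'dev-admin@example.com'
  EDITION = ENTERPRISE;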

A company is designing its serving layer for data that is in cloud storage. Multiple terabytes of the data will be used for reporting. Some data does not have a clear use case but could be useful for experimental analysis. This experimentation data changes frequently and is sometimes wiped out and replaced completely in a few days.

The company wants to centralize access control, provide a single point of connection for the end-users, and maintain data governance.

What solution meets these requirements while MINIMIZING costs, administrative effort, and development overhead?

A. Import the data used for reporting into a Snowflake schema with native tables. Then create external tables pointing to the cloud storage folders used for the experimentation data. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.
B. Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create a role that has access to this schema and manage access to the data through that role.
C. Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.
D. Import the data used for reporting into a Snowflake schema with native tables. Then create views that have SELECT commands pointing to the cloud storage files for the experimentation data. Then create two different roles to match the different user personas, and grant these roles to the corresponding users.

Suggested answer: A

Explanation:

The most cost-effective and administratively efficient solution is to use a combination of native and external tables. Native tables for reporting data ensure performance and governance, while external tables allow for flexibility with frequently changing experimental data. Creating roles with specific grants to datasets aligns with the principle of least privilege, centralizing access control and simplifying user management.

* Snowflake Documentation on Optimizing Cost.

* Snowflake Documentation on Controlling Cost.
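
A minimal sketch of option A (database, schema, stage, and role names are hypothetical): native tables for the curated reporting data, external tables over the frequently replaced experimentation files, and one role per persona:

-- Curated reporting data lives in native tables:
CREATE TABLE serving.reporting.trips AS
SELECT * FROM raw.landing.trips_load;

-- Frequently replaced experimentation files stay in cloud storage:
CREATE EXTERNAL TABLE serving.experiments.events
  LOCATION = @serving.experiments.exp_stage/events/
  FILE_FORMAT = (TYPE = 'PARQUET');

-- One role per persona:
GRANT SELECT ON ALL TABLES IN SCHEMA serving.reporting TO ROLE reporting_role;
GRANT SELECT ON ALL EXTERNAL TABLES IN SCHEMA serving.experiments TO ROLE experiment_role;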

What is the MOST efficient way to design an environment where data retention is not considered critical, and customization needs are to be kept to a minimum?

A. Use a transient database.
B. Use a transient schema.
C. Use a transient table.
D. Use a temporary table.

Suggested answer: A
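
For illustration (the database name is hypothetical), a transient database is created once, and objects created inside it are transient by default, with no Fail-safe period and minimal Time Travel retention, which suits an environment where data retention is not critical:

CREATE TRANSIENT DATABASE sandbox_db DATA_RETENTION_TIME_IN_DAYS = 0;

CREATE SCHEMA sandbox_db.experiments;  -- created as a transient schema automatically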