Snowflake DEA-C01 Practice Test - Questions Answers, Page 2
List of questions
Question 11

Which column provides information about when a stream became stale or may become stale if not consumed?
Explanation:
Execute the SHOW STREAMS command.
Column name: STALE_AFTER
Timestamp when the stream became stale or may become stale if not consumed. The value is calculated by adding the retention period for the source table (i.e. the larger of the DATA_RETENTION_TIME_IN_DAYS or MAX_DATA_EXTENSION_TIME_IN_DAYS parameter setting) to the last time the stream was read.
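A minimal sketch of how this can be checked, assuming an existing stream named my_stream (hypothetical name):

SHOW STREAMS LIKE 'my_stream';

-- Inspect the STALE and STALE_AFTER columns of the SHOW output
SELECT "name", "stale", "stale_after"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));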
Question 12

When created, a stream logically takes an initial snapshot of every row in the source object and the contents of a stream change as DML statements execute on the source table.
A Data Engineer, Sophie, created a view that queries the table and returns the CURRENT_USER and CURRENT_TIMESTAMP values for the query transaction. A stream has been created on the view to capture CDC.
Tony, another user, inserted data, e.g.:
insert into <table> values (1),(2),(3);
Emily, another user, also inserted data, e.g.:
insert into <table> values (4),(5),(6);
What will happen when a different user queries the same stream after 1 hour?
Explanation:
When a user queries the stream, the stream returns the username of the querying user. The stream also returns the current timestamp for the query transaction in each row, NOT the timestamp when each row was inserted.
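A minimal sketch of the setup described in this question, using hypothetical object names (t, v, v_stream):

-- Source table
CREATE OR REPLACE TABLE t (id NUMBER);

-- View that adds session context to each row
CREATE OR REPLACE VIEW v AS
  SELECT id, CURRENT_USER() AS username, CURRENT_TIMESTAMP() AS query_ts FROM t;

-- Stream on the view to capture CDC
CREATE OR REPLACE STREAM v_stream ON VIEW v;

-- Tony's and Emily's inserts
INSERT INTO t VALUES (1),(2),(3);
INSERT INTO t VALUES (4),(5),(6);

-- Whoever queries the stream sees their own username and the current
-- query timestamp in every row, not the values at insert time
SELECT * FROM v_stream;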
Question 13

Which function would a Data Engineer use to recursively resume all tasks in a chain of tasks rather than resuming each task individually (using ALTER TASK … RESUME)?
Explanation:
To recursively resume all tasks in a DAG (a Directed Acyclic Graph (DAG) is a series of tasks composed of a single root task and additional tasks, organized by their dependencies), query the SYSTEM$TASK_DEPENDENTS_ENABLE function rather than resuming each task individually (using ALTER TASK … RESUME).
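A minimal sketch, assuming the root task of the DAG is named mydb.myschema.root_task (hypothetical name):

-- Resume the root task and all of its dependent tasks in a single call
SELECT SYSTEM$TASK_DEPENDENTS_ENABLE('mydb.myschema.root_task');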
Question 14

Steven created a task. Which additional privilege on the task allows a role to suspend or resume it?
Explanation:
In addition to the task ownership privilege, a role that has the OPERATE privilege on the task can suspend or resume the task.
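A minimal sketch of granting that privilege, using hypothetical task and role names:

-- Allow a non-owning role to suspend or resume the task
GRANT OPERATE ON TASK mydb.myschema.my_task TO ROLE analyst_role;

-- A role with OPERATE (or ownership) can then run:
ALTER TASK mydb.myschema.my_task SUSPEND;
ALTER TASK mydb.myschema.my_task RESUME;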
Question 15

John, a Data Engineer, has a technical requirement to refresh external table metadata periodically or in auto mode. Which approach can John take to meet this technical specification?
Explanation:
Both Option A & B are correct.
To refresh external table metadata in auto mode, John can use the AUTO_REFRESH parameter of the external table. When an external table is created, the AUTO_REFRESH parameter is set to TRUE by default.
Snowflake recommends accepting this default value for external tables that reference data files in either Amazon S3 or Microsoft Azure stages.
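A minimal sketch of the auto-refresh path, assuming an existing external stage named s3_stage and a hypothetical table name; note that auto-refresh also relies on cloud event notifications (e.g. S3 event notifications) being configured for the stage location:

-- External table with automatic metadata refresh enabled (TRUE is the default)
CREATE OR REPLACE EXTERNAL TABLE snowdb.snowschema.s3_ext_table
  WITH LOCATION = @s3_stage/data/
  AUTO_REFRESH = TRUE
  FILE_FORMAT = (TYPE = PARQUET);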
However, the automatic refresh option is not available currently for external tables that reference Google Cloud Storage stages.
For these external tables, manually refreshing the metadata on a schedule can be useful.
The following example refreshes the metadata for an external table named snowdb.snowschema.snow_ext_table (using ALTER EXTERNAL TABLE … REFRESH) on a schedule.
-- Create a task that executes an ALTER EXTERNAL TABLE ... REFRESH statement every 5 minutes.
CREATE TASK snow_ext_table_refresh_task
  WAREHOUSE = mywh
  SCHEDULE = '5 minutes'
AS
  ALTER EXTERNAL TABLE snowdb.snowschema.snow_ext_table REFRESH;
Question 16

If you need to connect to Snowflake using a BI tool or technology, which of the following BI tools and technologies are known to provide native connectivity to Snowflake?
Explanation:
Sisense is a BI tool known to provide native connectivity to Snowflake; the rest of the options given are security and governance tools supported by Snowflake.
Business intelligence (BI) tools enable analyzing, discovering, and reporting on data to help executives and managers make more informed business decisions. A key component of any BI tool is the ability to deliver data visualization through dashboards, charts, and other graphical output.
For more details on supported BI tools in the Snowflake ecosystem, refer to the link below:
https://docs.snowflake.com/en/user-guide/ecosystem-bi
Question 17

Which of the following security and governance tools/technologies are known to provide native connectivity to Snowflake? [Select 2]
Explanation:
Security and governance tools ensure sensitive data maintained by an organization is protected from inappropriate access and tampering, as well as helping organizations to achieve and maintain regulatory compliance. These tools are often used in conjunction with observability solutions/services to provide organizations with visibility into the status, quality, and integrity of their data, including identifying potential issues.
Together, these tools support a wide range of operations, including risk assessment, intrusion detection/monitoring/notification, data masking, data cataloging, data health/quality checks, issue identification/troubleshooting/resolution, and more.
ALTR and Baffle are the correct options here.
Question 18

Mohan, a Data Engineer at ALTUSO Company, wants to programmatically check the status of a query. He needs the query ID that identifies each query executed by Snowflake, and he is using the Snowflake Connector for Python to execute the query. How can he meet this requirement? Select the best option you would suggest.
Explanation:
Retrieving the Snowflake Query ID
A query ID identifies each query executed by Snowflake. When you use the Snowflake Connector for Python to execute a query, you can access the query ID through the sfqid attribute in the Cursor object:
# Retrieving a Snowflake Query ID
cur = con.cursor()                       # con is an existing snowflake.connector connection
cur.execute("SELECT * FROM testtable")
print(cur.sfqid)                         # query ID assigned by Snowflake to the executed statement
Question 19

Which connector creates the RECORD_CONTENT and RECORD_METADATA columns in an existing Snowflake table when connecting to Snowflake?
Explanation:
Apache Kafka software uses a publish and subscribe model to write and read streams of records, similar to a message queue or enterprise messaging system. Kafka allows processes to read and write messages asynchronously. A subscriber does not need to be connected directly to a publisher; a publisher can queue a message in Kafka for the subscriber to receive later.
An application publishes messages to a topic, and an application subscribes to a topic to receive those messages. Kafka can process, as well as transmit, messages; however, that is outside the scope of this document. Topics can be divided into partitions to increase scalability.
Kafka Connect is a framework for connecting Kafka with external systems, including databases. A Kafka Connect cluster is a separate cluster from the Kafka cluster. The Kafka Connect cluster supports running and scaling out connectors (components that support reading and/or writing between external systems).
The Kafka connector is designed to run in a Kafka Connect cluster to read data from Kafka topics and write the data into Snowflake tables.
Every Snowflake table loaded by the Kafka connector has a schema consisting of two VARIANT columns:
RECORD_CONTENT. This contains the Kafka message.
RECORD_METADATA. This contains metadata about the message, for example, the topic from which the message was read.
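A minimal sketch of the resulting table shape and of querying the metadata; the table name kafka_events and the event_type field in the message payload are hypothetical:

-- Shape of a table loaded by the Kafka connector
CREATE OR REPLACE TABLE kafka_events (
  RECORD_CONTENT  VARIANT,
  RECORD_METADATA VARIANT
);

-- Pull the topic and partition from the metadata and a field from the message body
SELECT
  RECORD_METADATA:topic::STRING     AS topic,
  RECORD_METADATA:partition::NUMBER AS kafka_partition,
  RECORD_CONTENT:event_type::STRING AS event_type
FROM kafka_events;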
Question 20

Ryan, a Data Engineer, accidentally dropped the share named SF_SHARE, which resulted in immediate access revocation for all the consumers (i.e., accounts that have created a database from that SF_SHARE). What action can he take to recover the dropped share?
Explanation:
You can drop a share at any time using the DROP SHARE command.
Dropping a share instantly invalidates all databases created from the share by consumer accounts.
All queries and other operations performed on these databases will no longer work.
After dropping a share, you can recreate it with the same name; however, this does not restore any of the databases created from the share by consumer accounts.
The recreated share is treated as a new share and all consumer accounts must create a new database from the new share.
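A minimal sketch of recreating the share, using hypothetical database, schema, table, and consumer account names:

-- Recreate the share and re-grant the required privileges
CREATE SHARE SF_SHARE;
GRANT USAGE ON DATABASE sales_db TO SHARE SF_SHARE;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE SF_SHARE;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE SF_SHARE;

-- Add the consumer accounts back to the share
ALTER SHARE SF_SHARE ADD ACCOUNTS = consumer_account1;

-- Each consumer must then create a new database from the recreated share, e.g.
-- (run in the consumer account): CREATE DATABASE sf_share_db FROM SHARE <provider_account>.SF_SHARE;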