
Snowflake DEA-C01 Practice Test - Questions Answers, Page 2


Which column provides information about when a stream became stale or may become stale if not consumed?

A. STREAM_STALE_PERIOD
B. STALE_PERIOD_AFTER
C. STALE_STREAM_PERIOD
D. STALE_AFTER
Suggested answer: D

Explanation:

Execute the SHOW STREAMS command.

Column Name: STALE_AFTER

Timestamp when the stream became stale or may become stale if not consumed. The value is calculated by adding the retention period for the source table (i.e. the larger of the DATA_RETENTION_TIME_IN_DAYS or MAX_DATA_EXTENSION_TIME_IN_DAYS parameter setting) to the last time the stream was read.

When created, a stream logically takes an initial snapshot of every row in the source object and the contents of a stream change as DML statements execute on the source table.
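
A minimal sketch of checking this column, assuming a hypothetical schema that already contains one or more streams:

-- List streams and inspect the STALE and STALE_AFTER columns.
SHOW STREAMS IN SCHEMA mydb.myschema;

-- Optionally read just those columns from the SHOW output.
SELECT "name", "stale", "stale_after"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));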

Sophie, a Data Engineer, created a view that queries the table and returns the CURRENT_USER and CURRENT_TIMESTAMP values for the query transaction. A stream has been created on the view to capture CDC.

Tony, another user, inserted data, e.g.:

insert into <table> values (1),(2),(3);

Emily, another user, also inserted data, e.g.:

insert into <table> values (4),(5),(6);

What will happen when a different user queries the same stream after 1 hour?

A. All 6 records would be shown with METADATA$ACTION as 'INSERT', out of which 3 records would be displayed with username 'Tony' and the other 3 records with username 'Emily'.
B. All six records would be displayed with CURRENT_USER and CURRENT_TIMESTAMP when querying the stream.
C. All six records would be displayed with user 'Sophie', who is the owner of the view.
D. The user displayed would be the one who queried during the session, but the recorded timestamp would be from 1 hour in the past, i.e. the actual record insertion time.
Suggested answer: B

Explanation:

When a user queries the stream, the stream returns the username of that user. The stream also returns the current timestamp for the query transaction in each row, NOT the timestamp when each row was inserted.
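
A minimal sketch of the scenario, assuming hypothetical table, view, and stream names (change tracking must be permitted on the underlying table):

CREATE TABLE src_table (id INT);

CREATE VIEW user_ts_view AS
  SELECT id, CURRENT_USER() AS query_user, CURRENT_TIMESTAMP() AS query_ts
  FROM src_table;

CREATE STREAM view_stream ON VIEW user_ts_view;

-- Tony: INSERT INTO src_table VALUES (1),(2),(3);
-- Emily: INSERT INTO src_table VALUES (4),(5),(6);

-- Whoever queries the stream later sees their own CURRENT_USER and the
-- current query timestamp in every returned row, not the insert-time values.
SELECT * FROM view_stream;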

Which function would a Data Engineer use to recursively resume all tasks in a chain of tasks, rather than resuming each task individually (using ALTER TASK … RESUME)?

A. SYSTEM$TASK_DEPENDENTS
B. SYSTEM$TASK_DEPENDENTS_ENABLE
C. SYSTEM$TASK_DEPENDENTS_RESUME
D. SYSTEM$TASK_RECURSIVE_ENABLE
Suggested answer: B

Explanation:

To recursively resume all tasks in a DAG (a Directed Acyclic Graph (DAG) is a series of tasks composed of a single root task and additional tasks, organized by their dependencies), query the SYSTEM$TASK_DEPENDENTS_ENABLE function rather than resuming each task individually (using ALTER TASK … RESUME).
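
A minimal sketch, assuming a hypothetical fully qualified root task name:

-- Recursively resume the root task and all of its dependent tasks.
SELECT SYSTEM$TASK_DEPENDENTS_ENABLE('mydb.myschema.root_task');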

Steven created the task. What additional privileges does Steven require on the task so that he can suspend or resume it?

A. Steven is already the owner of the task; he can execute the task and suspend/resume it without any additional privileges.
B. In addition to the task owner, Steven's role must have the OPERATE privilege on the task so that he can suspend or resume the task.
C. Steven must have the SUSPEND privilege on the task so that he can suspend or resume the task.
D. Steven needs to be granted a global managed RESUME privilege by the TASK administrator.
Suggested answer: B

Explanation:

In addition to the task ownership privilege, a role that has the OPERATE privilege on the task can suspend or resume the task.
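
A minimal sketch, assuming hypothetical task and role names:

-- Grant OPERATE on the task to a role; that role can then suspend or resume it.
GRANT OPERATE ON TASK mydb.myschema.my_task TO ROLE data_engineer_role;

ALTER TASK mydb.myschema.my_task SUSPEND;
ALTER TASK mydb.myschema.my_task RESUME;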

John, a Data Engineer, has a technical requirement to refresh external table metadata periodically or in auto mode. Which approaches can John take to meet this technical specification?

A. John can use the AUTO_REFRESH parameter if the underlying external cloud host supports this for external tables.
B. He can create a task that executes an ALTER EXTERNAL TABLE ... REFRESH statement every 5 minutes.
C. External tables cannot be scheduled via Snowflake tasks; 3rd-party tools/scripts provided by the external cloud storage provider need to be used.
D. Snowflake implicitly takes care of this infrastructure need, as the underlying warehouse layer internally manages the refresh; no action is needed from John.
Suggested answer: A, B

Explanation:

Both options A and B are correct.

For refreshing external table metadata in auto mode, John can use the AUTO_REFRESH parameter of the external table. When an external table is created, the AUTO_REFRESH parameter is set to TRUE by default.

Snowflake recommends that you accept this default value for external tables that reference data files in either Amazon S3 or Microsoft Azure stages.
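
A minimal sketch of option A, assuming a hypothetical external stage (event notifications must be configured on the cloud storage location for auto-refresh to fire):

CREATE EXTERNAL TABLE snowdb.snowschema.snow_ext_table
  LOCATION = @snowdb.snowschema.snow_ext_stage
  FILE_FORMAT = (TYPE = PARQUET)
  AUTO_REFRESH = TRUE;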

However, the automatic refresh option is not available currently for external tables that reference Google Cloud Storage stages.

For these external tables, manually refreshing the metadata on a schedule can be useful.

The following example refreshes the metadata for an external table named snowdb.snowschema.snow_ext_table (using ALTER EXTERNAL TABLE … REFRESH) on a schedule.

-- Create a task that executes an ALTER EXTERNAL TABLE ... REFRESH statement every 5 minutes.

CREATE TASK snow_ext_table_refresh_task
  WAREHOUSE = mywh
  SCHEDULE = '5 minutes'
AS
  ALTER EXTERNAL TABLE snowdb.snowschema.snow_ext_table REFRESH;
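
Note that a task is created in a suspended state, so it must be resumed before the schedule takes effect:

ALTER TASK snow_ext_table_refresh_task RESUME;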

If you need to connect to Snowflake using a BI tool or technology, which of the following BI tools and technologies are known to provide native connectivity to Snowflake?

A. SISENSE
B. SELECT STAR
C. ALATION
D. PROTEGRITY
Suggested answer: A

Explanation:

SISENSE is a BI tool and technology known to provide native connectivity to Snowflake; the rest of the given options are security and governance tools supported by Snowflake.

Business intelligence (BI) tools enable analyzing, discovering, and reporting on data to help executives and managers make more informed business decisions. A key component of any BI tool is the ability to deliver data visualization through dashboards, charts, and other graphical output.

For more details about supported BI tools in the Snowflake ecosystem, refer to the link below:

https://docs.snowflake.com/en/user-guide/ecosystem-bi

Which of the following security and governance tools/technologies are known to provide native connectivity to Snowflake? [Select 2]

A. ALTR
B. Baffle
C. BIG Squid
D. Dataiku
E. Zepl
Suggested answer: A, B

Explanation:

Security and governance tools ensure sensitive data maintained by an organization is protected from inappropriate access and tampering, as well as helping organizations to achieve and maintain regulatory compliance. These tools are often used in conjunction with observability solutions/services to provide organizations with visibility into the status, quality, and integrity of their data, including identifying potential issues.

Together, these tools support a wide range of operations, including risk assessment, intrusion detection/monitoring/notification, data masking, data cataloging, data health/quality checks, issue identification/troubleshooting/resolution, and more.

ALTR & Baffle are correct options here.

Mohan, a Data Engineer who works with the ALTUSO Company, wants to programmatically check the status of a query. He needs the query ID to identify each query executed by Snowflake and is using the Snowflake Connector for Python to execute the query. How will he be able to meet this requirement? Select the best option you would suggest.

A. He can access the query ID through the sfqid attribute in the Cursor object.
B. cur = con.cursor()
C. cur.execute("SELECT * FROM snowtable")
D. print(cur.sfqid)
E. When he uses the Snowflake Connector for Python to execute a query, he can access the query ID through the pyqueryid attribute in the Cursor object.
F. He needs to query the history views to get the query ID as a best practice.
G. Using the Python connector, Snowflake does not support query ID retrieval for either synchronous or asynchronous queries.
Suggested answer: A

Explanation:

Retrieving the Snowflake Query ID

A query ID identifies each query executed by Snowflake. When you use the Snowflake Connector for Python to execute a query, you can access the query ID through the sfqid attribute in the Cursor object:

# Retrieving a Snowflake Query ID
cur = con.cursor()
cur.execute("SELECT * FROM testtable")
print(cur.sfqid)

Which connector creates the RECORD_CONTENT and RECORD_METADATA columns in the existing Snowflake table while connecting to Snowflake?

A. Python Connector
B. Spark Connector
C. Node.js connector
D. Kafka Connector
Suggested answer: D

Explanation:

Apache Kafka software uses a publish and subscribe model to write and read streams of records, similar to a message queue or enterprise messaging system. Kafka allows processes to read and write messages asynchronously. A subscriber does not need to be connected directly to a publisher; a publisher can queue a message in Kafka for the subscriber to receive later.

An application publishes messages to a topic, and an application subscribes to a topic to receive those messages. Kafka can process, as well as transmit, messages; however, that is outside the scope of this document. Topics can be divided into partitions to increase scalability.

Kafka Connect is a framework for connecting Kafka with external systems, including databases. A Kafka Connect cluster is a separate cluster from the Kafka cluster. The Kafka Connect cluster supports running and scaling out connectors (components that support reading and/or writing between external systems).

The Kafka connector is designed to run in a Kafka Connect cluster to read data from Kafka topics and write the data into Snowflake tables.

Every Snowflake table loaded by the Kafka connector has a schema consisting of two VARIANT columns:

RECORD_CONTENT. This contains the Kafka message.

RECORD_METADATA. This contains metadata about the message, for example, the topic from which the message was read.
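
A minimal sketch of that schema, using a hypothetical table name (the connector can also create such a table automatically):

CREATE TABLE kafka_landing_table (
  RECORD_CONTENT  VARIANT,  -- the Kafka message payload
  RECORD_METADATA VARIANT   -- e.g. topic, partition, offset, key, CreateTime
);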

Ryan, a Data Engineer, accidentally dropped the share named SF_SHARE, which results in immediate access revocation for all the consumers (i.e., accounts that have created a database from SF_SHARE). What action can he take to recover the dropped share?

A. By executing the UNDROP command, he could possibly recover the dropped share SF_SHARE and its associated databases for immediate consumer access.
B. He can recreate a share with the same name as the previous share, which does restore the databases created (by any consumers) from the share SF_SHARE.
C. A dropped share cannot be restored. The share must be created again using the CREATE SHARE command and then configured using GRANT <privilege> … TO SHARE and ALTER SHARE.
D. Consumer accounts that have created databases from the share will still be able to query these databases, as the share is a separate securable object and is still accessible using the Time Travel feature.
Suggested answer: C

Explanation:

You can drop a share at any time using the DROP SHARE command.

Dropping a share instantly invalidates all databases created from the share by consumer accounts.

All queries and other operations performed on these databases will no longer work.

After dropping a share, you can recreate it with the same name; however, this does not restore any of the databases created from the share by consumer accounts.

The recreated share is treated as a new share and all consumer accounts must create a new database from the new share.
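
A minimal sketch of recreating and reconfiguring the share, assuming hypothetical database, table, and consumer account names:

CREATE SHARE sf_share;
GRANT USAGE ON DATABASE mydb TO SHARE sf_share;
GRANT USAGE ON SCHEMA mydb.public TO SHARE sf_share;
GRANT SELECT ON TABLE mydb.public.my_table TO SHARE sf_share;
ALTER SHARE sf_share ADD ACCOUNTS = myorg.consumer_account;

-- Each consumer account must then create a new database from the recreated share:
-- CREATE DATABASE shared_db FROM SHARE myorg.provider_account.sf_share;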
