ExamGecko

ARA-C01: SnowPro Advanced: Architect Certification

Vendor: Snowflake
Exam questions: 162
Learners: 2,371
Last updated: April 2025
Language: English
Format: 5 quizzes | PDF
This study guide should help you understand what to expect on the exam and includes a summary of the topics the exam might cover and links to additional resources. The information and materials in this document should help you focus your studies as you prepare for the exam.

Related questions

An Architect uses COPY INTO with the ON_ERROR=SKIP_FILE option to bulk load CSV files into a table called TABLEA, using its table stage. One file named file5.csv fails to load. The Architect fixes the file and re-loads it to the stage with the exact same file name it had previously.

Which commands should the Architect use to load only the file5.csv file from the stage? (Choose two.)
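For context, a targeted re-load of a single file from a table stage can be expressed with the FILES and FORCE options of COPY INTO. This is a hedged sketch, not the graded answer: the table stage is referenced as @%TABLEA, and the exact interaction of FORCE with load metadata should be verified against the Snowflake documentation.

```sql
-- Sketch only: re-load one named file from TABLEA's table stage.
-- FILES restricts the load to the listed file(s); FORCE = TRUE tells
-- COPY to load the file even if load metadata records it was seen before.
COPY INTO TABLEA
  FROM @%TABLEA
  FILES = ('file5.csv')
  FILE_FORMAT = (TYPE = CSV)
  FORCE = TRUE;
```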

Become a Premium Member for full access
  Unlock Premium Member

An Architect needs to improve the performance of reports that pull data from multiple Snowflake tables, join, and then aggregate the data. Users access the reports using several dashboards. There are performance issues on Monday mornings between 9:00am-11:00am when many users check the sales reports.

The size of the group has increased from 4 to 8 users. Waiting times to refresh the dashboards have increased significantly. Currently this workload is served by a virtual warehouse with the following parameters:

AUTO_RESUME = TRUE
AUTO_SUSPEND = 60
SIZE = Medium

What is the MOST cost-effective way to increase the availability of the reports?
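For context, concurrency-driven queueing of this kind is usually addressed by converting the warehouse to multi-cluster (an Enterprise Edition feature) rather than by resizing, since resizing helps single-query performance, not concurrency. A hedged sketch with an illustrative warehouse name:

```sql
-- Sketch only: let additional clusters spin up automatically during the
-- Monday-morning peak and shut down afterwards (auto-scale mode).
ALTER WAREHOUSE reporting_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 2
  SCALING_POLICY = 'STANDARD';
```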

Become a Premium Member for full access
  Unlock Premium Member

An Architect clones a database and all of its objects, including tasks. After the cloning, the tasks stop running.

Why is this occurring?
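As background, tasks in a cloned database or schema are created in a suspended state, so they must be resumed explicitly before they run again. A minimal sketch, with illustrative object names:

```sql
-- Sketch only: resume a task that was suspended by the cloning operation.
ALTER TASK clone_db.my_schema.my_task RESUME;
```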


What is a valid object hierarchy when building a Snowflake environment?


An Architect is designing a solution that will be used to process changed records in an ORDERS table. Newly inserted orders must be loaded into the F_ORDERS fact table, which will aggregate all the orders by multiple dimensions (time, region, channel, etc.). Existing orders can be updated by the sales department within 30 days after the order creation. In case of an order update, the solution must perform two actions:

1. Update the order in the F_ORDERS fact table.

2. Load the changed order data into the special table ORDER_REPAIRS.

This table is used by the Accounting department once a month. If the order has been changed, the Accounting team needs to know the latest details and perform the necessary actions based on the data in the ORDER_REPAIRS table.

What data processing logic design will be the MOST performant?
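One common pattern for change-processing designs like this combines a stream with a scheduled task. The sketch below is illustrative only: the stream, task, and warehouse names are assumptions, column handling is simplified, and the corresponding MERGE into the fact table would be a similar statement or a follow-on task.

```sql
-- Sketch only: capture changes on ORDERS with a stream, then process
-- them on a schedule, but only when the stream actually has data.
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

CREATE OR REPLACE TASK load_order_repairs
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO order_repairs          -- column lists omitted for brevity
  SELECT *
  FROM orders_stream
  WHERE METADATA$ACTION = 'INSERT'
    AND METADATA$ISUPDATE = TRUE;    -- rows that arrived as updates
```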


A Snowflake Architect is designing a multi-tenant application strategy for an organization in the Snowflake Data Cloud and is considering using an Account Per Tenant strategy.

Which requirements will be addressed with this approach? (Choose two.)


A Snowflake Architect is setting up database replication to support a disaster recovery plan. The primary database has external tables.

How should the database be replicated?

A. Create a clone of the primary database, then replicate the database.
B. Move the external tables to a database that is not replicated, then replicate the primary database.
C. Replicate the database, ensuring the replicated database is in the same region as the external tables.
D. Share the primary database with an account in the same region that the database will be replicated to.
Suggested answer: B
Explanation:

Database replication is a feature that allows you to create a copy of a database in another account, region, or cloud platform for disaster recovery or business continuity purposes. However, not all database objects can be replicated. External tables are one of the exceptions, as they reference data files stored in an external stage that is not part of Snowflake. Therefore, to replicate a database that contains external tables, you need to move the external tables to a separate database that is not replicated, and then replicate the primary database that contains the other objects. This way, you can avoid replication errors and ensure consistency between the primary and secondary databases.

The other options are incorrect because they either do not address the issue of external tables, or they use an alternative method that is not supported by Snowflake. You cannot create a clone of the primary database and then replicate it, as replication only works on the original database, not on its clones. You also cannot share the primary database with another account, as sharing is a different feature that does not create a copy of the database, but rather grants access to the shared objects. Finally, you do not need to ensure that the replicated database is in the same region as the external tables, as external tables can access data files stored in any region or cloud platform, as long as the stage URL is valid and accessible.

References:

1. Replication and Failover/Failback
2. Introduction to External Tables
3. Working with External Tables
4. Replication: How to migrate an account from one cloud platform or region to another in Snowflake
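As a point of reference, once the external tables have been moved out, the basic replication flow looks roughly like the following. Account and database names here are illustrative, not from the question.

```sql
-- Sketch only. On the source account: enable replication of the
-- primary database to the disaster-recovery account.
ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.dr_account;

-- On the target account: create the secondary database and refresh it.
CREATE DATABASE sales_db AS REPLICA OF myorg.prod_account.sales_db;
ALTER DATABASE sales_db REFRESH;
```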

asked 23/09/2024
David Hartnett
46 questions

A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.

Which actions can the company take with the inbound share? (Choose two.)

A. Clone a table from a share.
B. Grant modify permissions on the share.
C. Create a table from the shared database.
D. Create additional views inside the shared database.
E. Create a table stream on the shared table.
Suggested answer: A, D
Explanation:

These two actions are possible with an inbound share, according to the Snowflake documentation. An inbound share is a share that is created by another Snowflake account (the provider) and imported into your account (the consumer). An inbound share allows you to access the data shared by the provider, but not to modify or delete it. However, you can perform some actions with the inbound share, such as:

Clone a table from a share. You can create a copy of a table from an inbound share using the CREATE TABLE ... CLONE statement. The clone will contain the same data and metadata as the original table, but it will be independent of the share. You can modify or delete the clone as you wish, but it will not reflect any changes made to the original table by the provider [1].

Create additional views inside the shared database. You can create views on the tables or views from an inbound share using the CREATE VIEW statement. The views will be stored in the shared database, but they will be owned by your account. You can query the views as you would query any other view in your account, but you cannot modify or delete the underlying objects from the share [2].

The other actions listed are not possible with an inbound share, because they would require modifying the share or the shared objects, which are read-only for the consumer. You cannot grant modify permissions on the share, create a table from the shared database, or create a table stream on the shared table [3][4].

1. Cloning Objects from a Share | Snowflake Documentation
2. Creating Views on Shared Data | Snowflake Documentation
3. Importing Data from a Share | Snowflake Documentation
4. Streams on Shared Tables | Snowflake Documentation
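As the explanation describes them, the two actions would be written roughly as below. This is a heavily hedged sketch: database, schema, and table names are illustrative, and whether each statement is permitted against an imported (shared) database should be verified against current Snowflake documentation, since imported databases impose read-only restrictions.

```sql
-- Sketch only: copy a shared table into a local database; the clone is
-- independent of the share and can be modified freely afterwards.
CREATE TABLE analytics.public.orders_copy
  CLONE shared_db.public.orders;

-- Sketch only: a view over a shared table, as described in the
-- explanation above.
CREATE VIEW analytics.public.orders_summary AS
  SELECT region, SUM(amount) AS total_amount
  FROM shared_db.public.orders
  GROUP BY region;
```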

asked 23/09/2024
Musaddiq Shorunke
48 questions

Which system functions does Snowflake provide to monitor clustering information within a table? (Choose two.)

A. SYSTEM$CLUSTERING_INFORMATION
B. SYSTEM$CLUSTERING_USAGE
C. SYSTEM$CLUSTERING_DEPTH
D. SYSTEM$CLUSTERING_KEYS
E. SYSTEM$CLUSTERING_PERCENT
Suggested answer: A, C
Explanation:

According to the Snowflake documentation, these two system functions are provided by Snowflake to monitor clustering information within a table. A system function is a type of function that allows executing actions or returning information about the system. A clustering key is a feature that allows organizing data across micro-partitions based on one or more columns in the table. Clustering can improve query performance by reducing the number of files to scan.

SYSTEM$CLUSTERING_INFORMATION is a system function that returns clustering information, including average clustering depth, for a table based on one or more columns in the table. The function takes a table name and an optional column name or expression as arguments, and returns a JSON string with the clustering information. The clustering information includes the cluster by keys, the total partition count, the total constant partition count, the average overlaps, and the average depth [1].

SYSTEM$CLUSTERING_DEPTH is a system function that returns the clustering depth for a table based on one or more columns in the table. The function takes a table name and an optional column name or expression as arguments, and returns an integer value with the clustering depth. The clustering depth is the maximum number of overlapping micro-partitions for any micro-partition in the table. A lower clustering depth indicates better clustering [2].

1. SYSTEM$CLUSTERING_INFORMATION | Snowflake Documentation

2. SYSTEM$CLUSTERING_DEPTH | Snowflake Documentation
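Both functions are called with a table name and, optionally, a column list. A brief sketch with illustrative table and column names:

```sql
-- Sketch only: inspect clustering for a table on a candidate key.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales.public.orders', '(order_date)');
SELECT SYSTEM$CLUSTERING_DEPTH('sales.public.orders', '(order_date)');
```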

asked 23/09/2024
Alexander Ang
47 questions

An Architect is troubleshooting a query with poor performance using the QUERY_HISTORY function. The Architect observes that the COMPILATION_TIME is greater than the EXECUTION_TIME.

What is the reason for this?
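For reference, the two timings can be compared side by side with the QUERY_HISTORY table function in INFORMATION_SCHEMA; a hedged sketch:

```sql
-- Sketch only: inspect compilation vs. execution time (both reported
-- in milliseconds) for the most recent queries.
SELECT query_id,
       compilation_time,
       execution_time
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
ORDER BY start_time DESC
LIMIT 10;
```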
