ExamGecko

ARA-C01: SnowPro Advanced: Architect Certification

Vendor: Snowflake

SnowPro Advanced: Architect Certification Exam Questions: 162
This study guide should help you understand what to expect on the exam and includes a summary of the topics the exam might cover and links to additional resources. The information and materials in this document should help you focus your studies as you prepare for the exam.

Related questions

An Architect uses COPY INTO with the ON_ERROR=SKIP_FILE option to bulk load CSV files into a table called TABLEA, using its table stage. One file named file5.csv fails to load. The Architect fixes the file and re-loads it to the stage with the exact same file name it had previously.

Which commands should the Architect use to load only file5.csv file from the stage? (Choose two.)

A.
COPY INTO tablea FROM @%tablea RETURN_FAILED_ONLY = TRUE;
B.
COPY INTO tablea FROM @%tablea;
C.
COPY INTO tablea FROM @%tablea FILES = ('file5.csv');
D.
COPY INTO tablea FROM @%tablea FORCE = TRUE;
E.
COPY INTO tablea FROM @%tablea NEW_FILES_ONLY = TRUE;
F.
COPY INTO tablea FROM @%tablea MERGE = TRUE;
Suggested answer: B, C

Explanation:

Option A (RETURN_FAILED_ONLY) does not control which files are loaded; it only determines whether the COPY output reports just the files that failed to load. On its own it does not cause the fixed file5.csv to be reloaded.

Option D (FORCE) loads all files in the stage regardless of whether they have already been loaded, which would reload the files that succeeded the first time and duplicate their data. Only file5.csv needs to be loaded.

Option E (NEW_FILES_ONLY) will only load files that have been added to the stage since the last COPY command. This will not work because file5.csv was already in the stage before it was fixed.

Option F (MERGE) is used to merge data from a stage into an existing table, creating new rows for any data not already present. This is not needed; the goal is simply to load the data from file5.csv.

Therefore, the Architect can use either COPY INTO tablea FROM @%tablea or COPY INTO tablea FROM @%tablea FILES = ('file5.csv') to load only file5.csv from the stage. Both commands load the data from the corrected file without duplicating the data that was already loaded or requiring additional configuration.
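As a minimal sketch using the names from the question, either suggested command can be run once the corrected file has been re-staged:

-- Load only the named file from the table stage
COPY INTO tablea FROM @%tablea FILES = ('file5.csv');

-- Or rely on load metadata to skip the files that already loaded successfully
COPY INTO tablea FROM @%tablea;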


An Architect needs to improve the performance of reports that pull data from multiple Snowflake tables, join, and then aggregate the data. Users access the reports using several dashboards. There are performance issues on Monday mornings between 9:00am-11:00am when many users check the sales reports.

The size of the group has increased from 4 to 8 users. Waiting times to refresh the dashboards have increased significantly. Currently this workload is being served by a virtual warehouse with the following parameters:

AUTO_RESUME = TRUE
AUTO_SUSPEND = 60
SIZE = Medium
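For reference, a hedged sketch of the current configuration expressed in SQL (the warehouse name reporting_wh is assumed, as the question does not give one):

ALTER WAREHOUSE reporting_wh SET
  WAREHOUSE_SIZE = 'MEDIUM'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;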

What is the MOST cost-effective way to increase the availability of the reports?

Become a Premium Member for full access

An Architect clones a database and all of its objects, including tasks. After the cloning, the tasks stop running.

Why is this occurring?

A.
Tasks cannot be cloned.
B.
The objects that the tasks reference are not fully qualified.
C.
Cloned tasks are suspended by default and must be manually resumed.
D.
The Architect has insufficient privileges to alter tasks on the cloned database.
Suggested answer: C

Explanation:

When a database is cloned, all of its objects, including tasks, are also cloned. However, cloned tasks are suspended by default and must be manually resumed using the ALTER TASK command. This prevents the cloned tasks from running unexpectedly or interfering with the original tasks. Therefore, the tasks stop running after the cloning because they are suspended by default (Option C). Options A, B, and D are not correct: tasks can be cloned, the objects that the tasks reference are also cloned and do not need to be fully qualified, and the Architect does not need to alter the tasks on the cloned database, only resume them.

Reference: The answer can be verified from Snowflake's official documentation on cloning and tasks. Here are some relevant links:

Cloning Objects | Snowflake Documentation

Tasks | Snowflake Documentation

ALTER TASK | Snowflake Documentation
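A minimal sketch of the fix described above, assuming the clone is named dev_db and contains a task called load_orders (both names are illustrative, not from the question):

-- Check the state of the cloned tasks; they will show as suspended
SHOW TASKS IN DATABASE dev_db;

-- Resume each task so it runs on its schedule again
ALTER TASK dev_db.public.load_orders RESUME;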


What is a valid object hierarchy when building a Snowflake environment?

A.
Account --> Database --> Schema --> Warehouse
B.
Organization --> Account --> Database --> Schema --> Stage
C.
Account --> Schema --> Table --> Stage
D.
Organization --> Account --> Stage --> Table --> View
Suggested answer: B

Explanation:

This is the valid object hierarchy when building a Snowflake environment, according to the Snowflake documentation and the web search results. Snowflake is a cloud data platform that supports various types of objects, such as databases, schemas, tables, views, stages, warehouses, and more. These objects are organized in a hierarchical structure, as follows:

Organization: An organization is the top-level entity that represents a group of Snowflake accounts that are related by business needs or ownership. An organization can have one or more accounts, and can enable features such as cross-account data sharing, billing and usage reporting, and single sign-on across accounts [1][2].

Account: An account is the primary entity that represents a Snowflake customer. An account can have one or more databases, schemas, stages, warehouses, and other objects. An account can also have one or more users, roles, and security integrations. An account is associated with a specific cloud platform, region, and Snowflake edition [3][4].

Database: A database is a logical grouping of schemas. A database can have one or more schemas, and can store structured, semi-structured, or unstructured data. A database can also have properties such as retention time, encryption, and ownership [5][6].

Schema: A schema is a logical grouping of tables, views, stages, and other objects. A schema can have one or more objects, and defines the namespace and access control for those objects. A schema can also have properties such as ownership, managed access, and data retention time.

Stage: A stage is a named location that references files in external or internal storage. A stage can be used to load data into Snowflake tables using the COPY INTO command, or to unload data from Snowflake tables using the COPY INTO <location> command. A named stage is created at the schema level (Snowflake also provides implicit internal stages for each table and user), and a stage can have properties such as file format, encryption, and credentials.

The other options listed are not valid object hierarchies, because they either omit or misplace objects in the structure. Option A omits the organization level and places the warehouse under the schema level, which is incorrect because warehouses are account-level objects, not children of a schema. Option C omits the organization and database levels and places the stage below the table, which is incorrect because named stages are schema-level objects. Option D omits the database and schema levels and places tables and views under a stage, which is incorrect because tables and views are contained in schemas, not in stages.
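A brief sketch of building the hierarchy in option B below an existing account (object names are placeholders; the organization and account themselves are created by an ORGADMIN, not by these statements):

CREATE DATABASE sales_db;
CREATE SCHEMA sales_db.raw;
-- A named internal stage lives inside a schema, at the bottom of the hierarchy
CREATE STAGE sales_db.raw.orders_stage;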

1: Snowflake Documentation: Organizations

2: Snowflake Blog: Introducing Organizations in Snowflake

3: Snowflake Documentation: Accounts

4: Snowflake Blog: Understanding Snowflake Account Structures

5: Snowflake Documentation: Databases

6: Snowflake Blog: How to Create a Database in Snowflake

7: Snowflake Documentation: Schemas

8: Snowflake Blog: How to Create a Schema in Snowflake

9: Snowflake Documentation: Stages

10: Snowflake Blog: How to Use Stages in Snowflake


An Architect is designing a solution that will be used to process changed records in an orders table. Newly inserted orders must be loaded into the F_ORDERS fact table, which will aggregate all the orders by multiple dimensions (time, region, channel, etc.). Existing orders can be updated by the sales department within 30 days after the order creation. In case of an order update, the solution must perform two actions:

1. Update the order in the F_ORDERS fact table.

2. Load the changed order data into the special table ORDER_REPAIRS.

This table is used by the Accounting department once a month. If the order has been changed, the Accounting team needs to know the latest details and perform the necessary actions based on the data in the ORDER_REPAIRS table.

What data processing logic design will be the MOST performant?

Become a Premium Member for full access

A Snowflake Architect is designing a multi-tenant application strategy for an organization in the Snowflake Data Cloud and is considering using an Account Per Tenant strategy.

Which requirements will be addressed with this approach? (Choose two.)

A.
There needs to be fewer objects per tenant.
B.
Security and Role-Based Access Control (RBAC) policies must be simple to configure.
C.
Compute costs must be optimized.
D.
Tenant data shape may be unique per tenant.
E.
Storage costs must be optimized.
Suggested answer: D, E

Explanation:

An Account Per Tenant strategy means creating a separate Snowflake account for each tenant (customer or business unit) of the multi-tenant application.

This approach has some advantages and disadvantages compared to other strategies, such as Database Per Tenant or Schema Per Tenant.

One advantage is that each tenant can have a unique data shape, meaning they can define their own tables, views, and other objects without affecting other tenants. This allows for more flexibility and customization for each tenant. Therefore, option D is correct.

Another advantage is that storage costs can be optimized, because each account's storage is tracked and billed separately and each tenant can manage its own data retention policies. This also reduces the risk of data spillover or cross-tenant access. Therefore, option E is correct.

However, this approach also has some drawbacks, such as:

It requires more administrative overhead and complexity to manage multiple accounts and their resources.

It may not optimize compute costs, because each tenant has to provision their own warehouses and pay for their own compute credits. This may result in underutilization or overprovisioning of compute resources. Therefore, option C is incorrect.

It may not simplify security and RBAC policies, because each account has to define its own roles, users, and privileges. This may increase the risk of human errors or inconsistencies in security configurations. Therefore, option B is incorrect.

It may not reduce the number of objects per tenant, because each tenant still has to create their own databases, schemas, and other objects within their account. This may affect the performance and scalability of the application. Therefore, option A is incorrect.
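For context, a hedged sketch of how an Account Per Tenant strategy is provisioned: an ORGADMIN creates one account per tenant. The account name, admin credentials, and edition below are illustrative placeholders:

USE ROLE ORGADMIN;

CREATE ACCOUNT tenant_acme
  ADMIN_NAME = acme_admin
  ADMIN_PASSWORD = 'Choose-A-Strong-Placeholder-1'
  EMAIL = 'admin@acme.example'
  EDITION = ENTERPRISE;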


A Snowflake Architect is setting up database replication to support a disaster recovery plan. The primary database has external tables.

How should the database be replicated?

A.
Create a clone of the primary database then replicate the database.
B.
Move the external tables to a database that is not replicated, then replicate the primary database.
C.
Replicate the database ensuring the replicated database is in the same region as the external tables.
D.
Share the primary database with an account in the same region that the database will be replicated to.
Suggested answer: B

Explanation:

Database replication is a feature that allows you to create a copy of a database in another account, region, or cloud platform for disaster recovery or business continuity purposes. However, not all database objects can be replicated. External tables are one of the exceptions, as they reference data files stored in an external stage that is not part of Snowflake. Therefore, to replicate a database that contains external tables, you need to move the external tables to a separate database that is not replicated, and then replicate the primary database that contains the other objects. This way, you can avoid replication errors and ensure consistency between the primary and secondary databases.

The other options are incorrect because they either do not address the issue of external tables, or they use an alternative method that is not supported by Snowflake. You cannot create a clone of the primary database and then replicate it, as replication only works on the original database, not on its clones. You also cannot share the primary database with another account, as sharing is a different feature that does not create a copy of the database, but rather grants access to the shared objects. Finally, you do not need to ensure that the replicated database is in the same region as the external tables, as external tables can access data files stored in any region or cloud platform, as long as the stage URL is valid and accessible.

Reference:

1: Replication and Failover/Failback

2: Introduction to External Tables

3: Working with External Tables

4: Replication: How to migrate an account from One Cloud Platform or Region to another in Snowflake
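A hedged sketch of the replication setup itself, once the external tables have been moved to a separate, non-replicated database (organization, account, and database names are placeholders):

-- On the source account: allow the primary database to be replicated to the DR account
ALTER DATABASE prod_db ENABLE REPLICATION TO ACCOUNTS myorg.dr_account;

-- On the DR account: create the secondary database and refresh it
CREATE DATABASE prod_db AS REPLICA OF myorg.primary_account.prod_db;
ALTER DATABASE prod_db REFRESH;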


A company has an inbound share set up with eight tables and five secure views. The company plans to make the share part of its production data pipelines.

Which actions can the company take with the inbound share? (Choose two.)

A.
Clone a table from a share.
B.
Grant modify permissions on the share.
C.
Create a table from the shared database.
D.
Create additional views inside the shared database.
E.
Create a table stream on the shared table.
Suggested answer: A, D

Explanation:

These two actions are possible with an inbound share, according to the Snowflake documentation and the web search results. An inbound share is a share that is created by another Snowflake account (the provider) and imported into your account (the consumer). An inbound share allows you to access the data shared by the provider, but not to modify or delete it. However, you can perform some actions with the inbound share, such as:

Clone a table from a share. You can create a copy of a table from an inbound share using the CREATE TABLE ... CLONE statement. The clone will contain the same data and metadata as the original table, but it will be independent of the share. You can modify or delete the clone as you wish, but it will not reflect any changes made to the original table by the provider [1].

Create additional views inside the shared database. You can create views on the tables or views from an inbound share using the CREATE VIEW statement. The views will be stored in the shared database, but they will be owned by your account. You can query the views as you would query any other view in your account, but you cannot modify or delete the underlying objects from the share [2].

The other actions listed are not possible with an inbound share, because they would require modifying the share or the shared objects, which are read-only for the consumer. You cannot grant modify permissions on the share, create a table from the shared database, or create a table stream on the shared table [3][4].

1: Cloning Objects from a Share | Snowflake Documentation

2: Creating Views on Shared Data | Snowflake Documentation

3: Importing Data from a Share | Snowflake Documentation

4: Streams on Shared Tables | Snowflake Documentation


Which system functions does Snowflake provide to monitor clustering information within a table? (Choose two.)

A.
SYSTEM$CLUSTERING_INFORMATION
B.
SYSTEM$CLUSTERING_USAGE
C.
SYSTEM$CLUSTERING_DEPTH
D.
SYSTEM$CLUSTERING_KEYS
E.
SYSTEM$CLUSTERING_PERCENT
Suggested answer: A, C

Explanation:

According to the Snowflake documentation, these two system functions are provided by Snowflake to monitor clustering information within a table. A system function is a type of function that executes actions or returns information about the system. A clustering key allows organizing data across micro-partitions based on one or more columns in the table. Clustering can improve query performance by reducing the number of micro-partitions that need to be scanned.

SYSTEM$CLUSTERING_INFORMATION is a system function that returns clustering information, including average clustering depth, for a table based on one or more columns in the table. The function takes a table name and an optional column name or expression as arguments, and returns a JSON string with the clustering information. The clustering information includes the cluster-by keys, the total partition count, the total constant partition count, the average overlaps, and the average depth [1].

SYSTEM$CLUSTERING_DEPTH is a system function that returns the average clustering depth for a table based on one or more columns in the table. The function takes a table name and an optional column name or expression as arguments, and returns a numeric value. The clustering depth measures how many micro-partitions overlap for the specified columns; a lower clustering depth indicates better clustering [2].
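A short sketch of calling the two functions; the table and column names are illustrative:

-- JSON summary of clustering quality for ORDERS on the column O_ORDERDATE
SELECT SYSTEM$CLUSTERING_INFORMATION('orders', '(o_orderdate)');

-- Average clustering depth for the same column
SELECT SYSTEM$CLUSTERING_DEPTH('orders', '(o_orderdate)');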

1: SYSTEM$CLUSTERING_INFORMATION | Snowflake Documentation

2: SYSTEM$CLUSTERING_DEPTH | Snowflake Documentation


An Architect is troubleshooting a query with poor performance using the QUERY_HISTORY function. The Architect observes that the COMPILATION_TIME is greater than the EXECUTION_TIME.

What is the reason for this?

A.
The query is processing a very large dataset.
B.
The query has overly complex logic.
C.
The query is queued for execution.
D.
The query is reading from remote storage.
Suggested answer: B

Explanation:

Compilation time is the time it takes the optimizer to create an optimal query plan for efficient execution of the query. It also involves pruning of partition files, which makes the subsequent execution more efficient [2].

If the compilation time is greater than the execution time, it means that the optimizer spent more time analyzing the query than actually running it. This usually indicates that the query has overly complex logic, such as many joins, subqueries, aggregations, or expressions. The complexity of the query also affects the size and quality of the query plan, which can impact the performance of the query [3].

To reduce the compilation time, the Architect can simplify the query logic, use views or common table expressions (CTEs) to break the query into smaller parts, or materialize intermediate results. The Architect can also use the EXPLAIN command to examine the query plan and identify potential bottlenecks or inefficiencies [4].

Reference:

1: SnowPro Advanced: Architect | Study Guide

2: Snowflake Documentation | Query Profile Overview

3: Understanding Why Compilation Time in Snowflake Can Be Higher than Execution Time

4: Snowflake Documentation | Optimizing Query Performance
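A small sketch of the diagnosis step using the QUERY_HISTORY table function (the RESULT_LIMIT value is arbitrary), comparing compilation and execution times in milliseconds:

SELECT query_id,
       total_elapsed_time,
       compilation_time,   -- milliseconds spent compiling the query plan
       execution_time      -- milliseconds spent executing the plan
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 100))
WHERE compilation_time > execution_time
ORDER BY compilation_time DESC;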
