Salesforce Certified Data Architect Practice Test - Questions & Answers, Page 6
List of questions
Question 51
Get Cloudy Consulting is migrating their legacy system's users and data to Salesforce. They will be creating 15,000 users, 1.5 million Account records, and 15 million Invoice records. The visibility of these records is controlled by 50 owner-based and criteria-based sharing rules.
Get Cloudy Consulting needs to minimize data loading time during this migration to a new organization.
Which two approaches will accomplish this goal? (Choose two.)
Explanation:
Creating the users first, then loading all data, and only then deploying the sharing rules reduces the number of sharing recalculations that occur during the load. Deferring sharing calculations until the data has finished uploading likewise improves performance by postponing sharing rule evaluation. Both are recommended best practices for loading large data sets into Salesforce.
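As a rough illustration of that sequence, the sketch below assumes the Python `simple_salesforce` client and hypothetical CSV extract names; the Defer Sharing Calculations permission is toggled in Setup before the load, and the sharing rules are deployed only afterwards.

```python
# Sketch of the load sequence (file names and credentials are placeholders).
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@getcloudy.example",
                password="...", security_token="...")

def load(object_name, path, batch_size=10000):
    with open(path, newline="") as f:
        records = list(csv.DictReader(f))
    # Bulk API insert; parallel mode is the default concurrency
    results = getattr(sf.bulk, object_name).insert(records, batch_size=batch_size)
    failures = [r for r in results if not r["success"]]
    print(f"{object_name}: {len(records) - len(failures)} loaded, "
          f"{len(failures)} failed")

# 1. Users first, so ownership can be assigned during the data load
load("User", "users.csv")
# 2. Parents before children
load("Account", "accounts.csv")
load("Invoice__c", "invoices.csv")
# 3. Only now deploy the 50 owner- and criteria-based sharing rules and
#    resume sharing calculations, triggering a single recalculation.
```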
Question 52
Cloud Kicks has the following requirements:
- Data needs to be sent from Salesforce to an external system to generate invoices from their Order Management System (OMS).
- A Salesforce administrator must be able to customize which fields will be sent to the external system without changing code.
What are two approaches for fulfilling these requirements? (Choose two.)
Explanation:
An Outbound Message is a native Salesforce feature for sending data to an external system without code, and an administrator can configure which fields of the source object it includes. A Field Set is a collection of fields that can be referenced in Visualforce pages or Apex classes to dynamically determine which fields to send in an HTTP callout. Both approaches meet Cloud Kicks' requirements.
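To make the no-code angle concrete, here is a minimal sketch of a listener the OMS side could expose for the Outbound Message; the URL and field handling are assumptions, but the SOAP envelope shape and the `<Ack>` reply follow the documented outbound messaging contract.

```python
# Hypothetical Flask listener for a Salesforce Outbound Message.
import xml.etree.ElementTree as ET
from flask import Flask, request

app = Flask(__name__)

ACK = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
      <Ack>true</Ack>
    </notificationsResponse>
  </soapenv:Body>
</soapenv:Envelope>"""

@app.route("/oms/invoice", methods=["POST"])
def receive_order():
    root = ET.fromstring(request.data)
    # Each child of an sObject node is a field the admin selected in the
    # Outbound Message definition -- adding fields needs no code change here.
    for sobject in root.iter("{http://soap.sforce.com/2005/09/outbound}sObject"):
        fields = {child.tag.split("}")[-1]: child.text for child in sobject}
        print("Order received for invoicing:", fields)
    return ACK, 200, {"Content-Type": "text/xml"}
```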
Question 53
The architect is planning a large data migration for Universal Containers from their legacy CRM system to Salesforce. What three things should the architect consider to optimize performance of the data migration? Choose 3 answers
Explanation:
Removing custom indexes on the data being loaded prevents unnecessary index maintenance and improves load speed. Deferring sharing calculations in the Salesforce org avoids frequent sharing rule evaluations and reduces load time. Deactivating approval processes and workflow rules prevents automation logic from slowing down or failing the load.
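As a small illustration of the preparation step, the sketch below assumes `simple_salesforce` and uses the Tooling API to inventory the workflow rules to deactivate before the load; the actual deactivation would be done in Setup or via a Metadata API deployment.

```python
# Hedged sketch: list automation to deactivate before the migration.
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@uc.example", password="...",
                security_token="...")

rules = sf.toolingexecute(
    "query/?q=SELECT+Id,Name,TableEnumOrId+FROM+WorkflowRule")
for rule in rules["records"]:
    print(f"Deactivate before load: {rule['Name']} on {rule['TableEnumOrId']}")
```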
Question 54
Universal Containers has a large volume of Contact data going into Salesforce.com. There are 100,000 existing contact records. 200,000 new contacts will be loaded. The Contact object has an external ID field that is unique and must be populated for all existing records. What should the architect recommend to reduce data load processing time?
Explanation:
Loading new records via the Insert operation and existing records via the Update operation allows the external ID field to be used as a unique identifier while avoiding duplication or overwriting of records. This is faster and safer than deleting all existing records or using the Upsert operation, which might cause conflicts or errors.
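A minimal partitioning sketch, assuming `simple_salesforce`, an external ID field named `Legacy_Id__c`, and Email as the matching key for the existing contacts (all hypothetical names):

```python
# Split the incoming file into Insert (new) and Update (existing) loads.
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@uc.example", password="...",
                security_token="...")

# Map the matching key to the Salesforce Ids of the 100,000 existing
# contacts that still need Legacy_Id__c populated
existing = {r["Email"]: r["Id"]
            for r in sf.query_all("SELECT Id, Email FROM Contact")["records"]}

with open("contacts.csv", newline="") as f:
    incoming = list(csv.DictReader(f))

updates, inserts = [], []
for rec in incoming:
    sf_id = existing.get(rec["Email"])
    if sf_id:  # existing record: Update, populating the external ID
        updates.append({"Id": sf_id, "Legacy_Id__c": rec["Legacy_Id__c"]})
    else:      # new record: Insert
        inserts.append(rec)

# Separate Insert and Update avoid the per-row matching that Upsert performs
sf.bulk.Contact.insert(inserts, batch_size=10000)
sf.bulk.Contact.update(updates, batch_size=10000)
```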
Question 55
An architect is planning on having different batches to load one million Opportunities into Salesforce using the Bulk API in parallel mode. What should be considered when loading the Opportunity records?
Explanation:
Ordering batches by an auto-number field ensures the records are processed in sequential order and avoids the locking issues that can occur when loading related records in parallel mode. Creating indexes, grouping batches by AccountId, or sorting batches by Name field values is neither necessary nor beneficial when loading Opportunity records through the Bulk API.
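For example, a pre-processing step could sort the extract by its legacy auto-number value before cutting batches, so batches reach the Bulk API in sequence; the file and field names below are assumptions.

```python
# Sort by auto-number, then chunk into ordered batches for submission.
import csv

with open("opportunities.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Auto-number values like "OPP-0000042" sort correctly on the numeric suffix
rows.sort(key=lambda r: int(r["Legacy_Number__c"].split("-")[-1]))

BATCH_SIZE = 10000
batches = [rows[i:i + BATCH_SIZE] for i in range(0, len(rows), BATCH_SIZE)]
print(f"{len(batches)} ordered batches ready for parallel Bulk API submission")
```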
Question 56
DreamHouse Realty has 15 million records in the Order__c custom object. When running a bulk query, the query times out.
What should be considered to address this issue?
Explanation:
PK chunking is a feature of the Bulk API that splits a large query into smaller batches based on the primary key (record ID) of the object. This improves performance and avoids query timeouts when querying large data sets.
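A minimal sketch of enabling PK chunking on a Bulk API 1.0 query job with raw REST calls (the instance URL, session ID, and chunk size are placeholders):

```python
# Create a bulk query job on Order__c with PK chunking enabled.
import requests

INSTANCE = "https://dreamhouse.my.salesforce.com"
SESSION_ID = "..."  # from OAuth or a SOAP login()

job_xml = """<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
  <operation>query</operation>
  <object>Order__c</object>
  <contentType>CSV</contentType>
</jobInfo>"""

resp = requests.post(
    f"{INSTANCE}/services/async/59.0/job",
    headers={
        "X-SFDC-Session": SESSION_ID,
        "Content-Type": "application/xml; charset=UTF-8",
        # Split the query into 250,000-record ranges on the primary key
        "Sforce-Enable-PKChunking": "chunkSize=250000",
    },
    data=job_xml,
)
print(resp.status_code, resp.text)  # response contains the job Id
```

After the job is created, the SOQL query is added as a batch, and Salesforce automatically splits it into one batch per 250,000-record ID range, each of which completes well within the timeout.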
Question 57
Company S was recently acquired by Company T. As part of the acquisition, all of the data in Company S's Salesforce instance (source) must be migrated into Company T's Salesforce instance (target). Company S has 6 million Case records.
An Architect has been tasked with optimizing the data load time.
What should the Architect consider to achieve this goal?
Explanation:
Pre-processing the data means transforming and cleansing it before loading it into Salesforce, which reduces the errors and conflicts that can occur during the load. Using Data Loader with the SOAP API to upsert with zip compression enabled also improves the performance and efficiency of the load by reducing network bandwidth and avoiding duplicate records.
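For instance, a Data Loader CLI bean along these lines keeps the SOAP API and compression in play (a sketch, not a drop-in config; entity, external ID, and file names are examples). Compression stays enabled unless `sfdc.noCompression` is set to true.

```xml
<!-- Hedged sketch of a process-conf.xml bean for a SOAP upsert. -->
<bean id="caseUpsert" class="com.salesforce.dataloader.process.ProcessRunner"
      singleton="false">
  <property name="name" value="caseUpsert"/>
  <property name="configOverrideMap">
    <map>
      <entry key="sfdc.entity" value="Case"/>
      <entry key="process.operation" value="upsert"/>
      <entry key="sfdc.externalIdField" value="Source_Case_Id__c"/>
      <!-- false keeps Data Loader on the SOAP API rather than Bulk -->
      <entry key="sfdc.useBulkApi" value="false"/>
      <!-- false leaves zip compression enabled (the default) -->
      <entry key="sfdc.noCompression" value="false"/>
      <entry key="sfdc.loadBatchSize" value="200"/>
      <entry key="dataAccess.type" value="csvRead"/>
      <entry key="dataAccess.name" value="cases_preprocessed.csv"/>
    </map>
  </property>
</bean>
```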
Question 58
Universal Containers (UC) has users complaining about reports timing out or simply taking too long to run. What two actions should the data architect recommend to improve the reporting experience? Choose 2 answers
Explanation:
Indexing key fields used in report criteria speeds up query execution and reduces report run time; standard key fields are indexed automatically by Salesforce, and custom indexes can be requested from Salesforce Support. Creating one skinny table per report can also improve reporting performance by storing the report's frequently used fields in a separate table that does not require joins or complex formula evaluation.
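One way to check the first point is the REST query plan tool, which reports whether a filter can be served by an index; below is a hedged sketch (instance URL and SOQL are examples), noting that the documented `explain` parameter also accepts a report ID.

```python
# Inspect the query plan before requesting new custom indexes.
import requests

INSTANCE = "https://uc.my.salesforce.com"
TOKEN = "..."  # OAuth access token

soql = "SELECT Id FROM Opportunity WHERE Region__c = 'EMEA'"  # example filter
resp = requests.get(
    f"{INSTANCE}/services/data/v59.0/query/",
    params={"explain": soql},  # pass a report Id here to explain a report
    headers={"Authorization": f"Bearer {TOKEN}"},
)
for plan in resp.json()["plans"]:
    # leadingOperationType "Index" means the optimizer can use an index;
    # "TableScan" flags the filter field as a custom-index candidate.
    print(plan["leadingOperationType"], plan["relativeCost"], plan["fields"])
```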
Question 59
A company has 12 million records, and a nightly integration queries these records.
Which two areas should a Data Architect investigate during troubleshooting if queries are timing out? (Choose two.)
Explanation:
Making sure the query does not contain NULL in any filter criteria can avoid full table scans and leverage indexes more efficiently. Queries with NULL filters are not selective and can cause performance issues. Creating custom indexes on the fields used in the filter criteria can also enhance the query performance by reducing the number of records to scan.
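To illustrate, the two queries below (object and field names are hypothetical) show the difference: the first compares against null and typically falls back to a full scan of the 12 million rows, while the second filters on indexed values.

```python
# Sketch contrasting a non-selective null filter with a selective rewrite.
# Standard custom indexes exclude nulls, so null comparisons scan the table.
from simple_salesforce import Salesforce

sf = Salesforce(username="integration@uc.example", password="...",
                security_token="...")

# Non-selective: a null comparison cannot use the standard index
slow = "SELECT Id FROM Shipment__c WHERE Processed_Date__c = null"

# Selective rewrite: equality on indexed values scans a narrow range
fast = ("SELECT Id FROM Shipment__c "
        "WHERE Status__c = 'Pending' AND CreatedDate = LAST_N_DAYS:1")

records = sf.query_all(fast)["records"]
print(f"{len(records)} rows returned by the selective query")
```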
Question 60
Universal Containers (UC) is implementing a Salesforce project with large volumes of data and daily transactions. The solution includes both real-time web service integrations and Visualforce mash-ups with back-end systems. The Salesforce Full sandbox used by the project integrates with full-scale back-end testing systems. What two types of performance testing are appropriate for this project?
Choose 2 answers
Explanation:
Pre-go-live automated page-load testing against the Salesforce Full sandbox can help identify and resolve any performance bottlenecks or issues before deploying the solution to production. The Full sandbox is an ideal environment for performance testing as it replicates the production org in terms of data, metadata, and integrations. Stress testing against the web services hosted by the integration middleware can also help evaluate the scalability and reliability of the integration solution under high load conditions.
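A stress-test harness can be as simple as the sketch below, which ramps concurrent callers against a placeholder middleware endpoint and reports latency percentiles and error counts; the URL and payload are assumptions.

```python
# Minimal stress-test sketch for the middleware-hosted web service.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "https://middleware.uc.example/api/quote"  # hypothetical

def call_once(_):
    start = time.perf_counter()
    r = requests.post(ENDPOINT, json={"accountId": "001..."}, timeout=30)
    return time.perf_counter() - start, r.status_code

for workers in (10, 50, 100):  # ramp load in steps
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(call_once, range(workers * 20)))
    latencies = sorted(t for t, _ in results)
    errors = sum(1 for _, code in results if code >= 400)
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"{workers} workers: median={statistics.median(latencies):.2f}s "
          f"p95={p95:.2f}s errors={errors}")
```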