Salesforce Certified Data Architect Practice Test - Questions Answers, Page 23

Universal Containers (UC) is migrating from an on-premise, homegrown customer relationship management (CRM) system. During analysis, UC users highlight a pain point: there are multiple versions of many customers.

What should the data architect do for a successful migration to mitigate the pain point?

A. Hire an intern to manually de-duplicate the records after migrating to Salesforce.
B. Migrate the data as is, and use Salesforce's de-duplication features.
C. Have the users manually clean the data in the old system prior to migration.
D. Store the data in a staging database, and de-duplicate identical records.
Suggested answer: D

Explanation:

Storing the data in a staging database and de-duplicating identical records (option D) is the best solution, as it allows the data architect to identify and merge duplicate customers before they are imported into Salesforce. Hiring an intern to manually de-duplicate the records after migration (option A) would be time-consuming and error-prone, and it does not prevent duplicate records from being created in Salesforce. Migrating the data as is and relying on Salesforce's de-duplication features (option B) may cause data quality issues and conflicts, and it does not address the root cause of the duplication. Having the users manually clean the data in the old system prior to migration (option C) is unrealistic and impractical, and it does not leverage any automated tools or processes.
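The staging-step de-duplication that option D describes can be sketched in a few lines. This is a minimal illustration, not a prescribed matching strategy: the match key (normalized name plus email) and the keep-the-newest survivorship rule are assumptions made for the example.

```python
# Hedged sketch of option D: de-duplicating customer records in a staging
# step before loading into Salesforce. The match key and survivorship rule
# below are illustrative assumptions, not a mandated matching strategy.

def dedupe_staged_customers(records):
    """Collapse records sharing a match key, keeping the newest version."""
    survivors = {}
    for rec in records:
        # Normalize the match key so trivial formatting differences collapse.
        key = (rec["name"].strip().lower(), rec["email"].strip().lower())
        current = survivors.get(key)
        if current is None or rec["updated"] > current["updated"]:
            survivors[key] = rec
    return list(survivors.values())

staged = [
    {"name": "Acme Corp",  "email": "ops@acme.com",    "updated": "2023-01-10"},
    {"name": "ACME CORP ", "email": "OPS@acme.com",    "updated": "2023-05-02"},
    {"name": "Globex",     "email": "info@globex.com", "updated": "2023-03-15"},
]
clean = dedupe_staged_customers(staged)
print(len(clean))  # 2 -- the two Acme versions collapse into one
```

Only the surviving rows would then be loaded into Salesforce, which is why this approach prevents duplicates from ever entering the org.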

A casino is implementing Salesforce and is planning to build a customer 360-degree view for customers who visit its resorts. The casino currently maintains the following systems that record customer activity:

1. Point-of-sale system: All purchases for a customer

2. Salesforce: All customer service and sales activities for a customer

3. Mobile app: All bookings, preferences, and browser activity for a customer

4. Marketing: All email, SMS, and social campaigns for a customer

Customer service agents using Salesforce would like to view the activities from all four systems to provide support to customers. The information has to be current and available in real time.

What strategy should the data architect implement to satisfy this requirement?

A. Explore external data sources in Salesforce to build a 360-degree view of the customer.
B. Use a customer data mart to create the 360-degree view of the customer.
C. Periodically upload summary information in Salesforce to build a 360-degree view.
D. Migrate customer activities from all four systems into Salesforce.
Suggested answer: A

Explanation:

Exploring external data sources in Salesforce to build a 360-degree view of the customer (option A) is the best strategy, as it allows customer service agents to view the activities from all four systems in real time without storing or replicating the data in Salesforce. Using a customer data mart to create the 360-degree view (option B) may introduce additional complexity and cost, and it does not leverage native Salesforce features. Periodically uploading summary information to Salesforce (option C) may cause data latency and inconsistency, and it does not provide real-time information. Migrating customer activities from all four systems into Salesforce (option D) may cause data redundancy and conflicts, and it does not scale well with large volumes of data.

Universal Containers has been a customer of Salesforce for 10 years. Currently they have 2 million accounts in the system. Due to an erroneous integration built 3 years ago, it is estimated there are 500,000 duplicates in the system.

Which solution should a data architect recommend to remediate the duplication issue?

A. Develop an ETL process that utilizes the merge API to merge the duplicate records.
B. Utilize a data warehouse as the system of truth.
C. Extract the data using Data Loader and use Excel to merge the duplicate records.
D. Implement duplicate rules.
Suggested answer: D

Explanation:

Implementing duplicate rules (option D) is the best solution, as it allows the data architect to identify duplicate accounts in Salesforce using native features, surface them for merging, and prevent new duplicates from being created. Developing an ETL process that utilizes the merge API (option A) requires more coding and testing effort, and it does not prevent duplicates from being created in Salesforce going forward. Utilizing a data warehouse as the system of truth (option B) introduces additional complexity and cost, and it does not address the duplication issue inside Salesforce. Extracting the data with Data Loader and merging the duplicates in Excel (option C) is time-consuming and error-prone, and it likewise does not prevent new duplicates.
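For context on the remediation options: Salesforce's merge operation accepts one master record and at most two duplicate records per call, so any bulk remediation of a duplicate cluster has to be chunked accordingly. A minimal sketch of that chunking, with placeholder record IDs:

```python
# Sketch of the chunking an ETL remediation (option A) would need:
# Salesforce merge takes one master plus at most two duplicates per call,
# so a cluster with N duplicates needs ceil(N / 2) merge calls.
# Record IDs below are placeholders, not real Salesforce IDs.

MAX_DUPES_PER_MERGE = 2  # Salesforce limit: master + up to 2 duplicates

def plan_merges(master_id, duplicate_ids):
    """Split one duplicate cluster into (master, [duplicates]) merge calls."""
    return [
        (master_id, duplicate_ids[i:i + MAX_DUPES_PER_MERGE])
        for i in range(0, len(duplicate_ids), MAX_DUPES_PER_MERGE)
    ]

# A cluster of 5 records: 1 master + 4 duplicates -> 2 merge calls.
calls = plan_merges("001MASTER", ["001DUP1", "001DUP2", "001DUP3", "001DUP4"])
print(calls)
# [('001MASTER', ['001DUP1', '001DUP2']), ('001MASTER', ['001DUP3', '001DUP4'])]
```

At the scale in the question (roughly 500,000 duplicates), this per-call limit is a major driver of the effort estimate for the ETL approach.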

Universal Containers (UC) has implemented a master data management strategy, which uses a central system of truth, to ensure the entire company has the same customer information in all systems. UC customer data changes need to be accurate at all times in all of the systems. Salesforce is the identified system of record for this information.

What is the correct solution for ensuring all systems using customer data are kept up to date?

A. Send customer data nightly to the system of truth in a scheduled batch job.
B. Send customer record changes from Salesforce to each system in a nightly batch job.
C. Send customer record changes from Salesforce to the system of truth in real time.
D. Have each system pull the record changes from Salesforce using change data capture.
Suggested answer: D

Explanation:

Having each system pull the record changes from Salesforce using Change Data Capture (option D) is the correct solution for ensuring all systems using customer data are kept up to date, as it allows the systems to subscribe to real-time events from Salesforce and receive notifications when customer records are created, updated, deleted, or undeleted. Sending customer data nightly to the system of truth in a scheduled batch job (option A) or sending customer record changes from Salesforce to each system in a nightly batch job (option B) are not good solutions, as they may cause data latency and inconsistency, and they do not provide real-time updates. Sending customer record changes from Salesforce to the system of truth in real time (option C) is also not a good solution, as it does not address how the other systems will receive the updates from the system of truth.
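To make the Change Data Capture flow concrete, here is a minimal sketch of how a subscribing system might apply change events to its local copy of customer data. The event shape follows Salesforce's ChangeEventHeader convention (changeType, recordIds); the transport layer (Pub/Sub API or CometD) and the local store are illustrative stand-ins.

```python
# Hedged sketch: applying Salesforce Change Data Capture events in a
# downstream subscriber. The event dicts mimic the ChangeEventHeader
# convention; the dict-based local_store stands in for a downstream DB.

local_store = {}  # recordId -> field values

def apply_change_event(event):
    header = event["ChangeEventHeader"]
    change_type = header["changeType"]  # CREATE, UPDATE, DELETE, UNDELETE
    for record_id in header["recordIds"]:
        if change_type == "DELETE":
            local_store.pop(record_id, None)
        else:
            # CREATE/UPDATE/UNDELETE events carry the changed field values.
            fields = {k: v for k, v in event.items() if k != "ChangeEventHeader"}
            local_store.setdefault(record_id, {}).update(fields)

# Example events, in the order a subscriber might receive them:
apply_change_event({
    "ChangeEventHeader": {"changeType": "CREATE", "recordIds": ["001xx0001"]},
    "Name": "Acme", "Phone": "555-0100",
})
apply_change_event({
    "ChangeEventHeader": {"changeType": "UPDATE", "recordIds": ["001xx0001"]},
    "Phone": "555-0199",
})
print(local_store["001xx0001"])  # {'Name': 'Acme', 'Phone': '555-0199'}
```

Because each downstream system applies the same event stream, they converge on the same customer state without Salesforce pushing to each one individually.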

Universal Containers (UC) plans to implement consent management for its customers to be compliant with General Data Protection Regulation (GDPR). UC has the following requirements:

UC uses Person Accounts and Contacts in Salesforce for its customers.

Data Protection and Privacy is enabled in Salesforce.

Consent should be maintained in both these objects.

UC plans to verify the consent provided by customers before contacting them through email or phone.

Which option should the data architect recommend to implement these requirements?

A. Configure custom fields on Person Account and Contact to store consent provided by customers, and validate consent against the fields.
B. Build a custom object to store consent information for Person Account and Contact, and validate against this object before contacting customers.
C. Use the Consent Management feature to validate the consent provided by the customer under the Person Account and Contact.
D. Delete contact information from customers who have declined consent to be contacted.
Suggested answer: C

Explanation:

Using the Consent Management feature to validate the consent provided by the customer under the Person Account and Contact (option C) is the best option, as it allows UC to store and manage consent preferences for Person Accounts and Contacts using native Salesforce features. Configuring custom fields (option A) or building a custom object (option B) to store consent requires more customization and maintenance effort, and neither leverages the existing Data Protection and Privacy feature. Deleting contact information from customers who have declined consent (option D) may cause data loss or compliance issues, and it does not allow UC to track or update consent preferences.

A large telecommunication provider that provides internet services to both residences and businesses has the following attributes:

A customer who purchases its services for their home will be created as an Account in Salesforce.

Individuals within the same house address will be created as Contacts in Salesforce.

Businesses are created as Accounts in Salesforce.

Some of the customers have both services at their home and business.

What should a data architect recommend for a single view of these customers without creating multiple customer records?

A. Customers are created as Contacts and related to Business and Residential Accounts using Account Contact Relationships.
B. Customers are created as Person Accounts and related to Business and Residential Accounts using the Account Contact relationship.
C. Customers are created as Individual objects and related to Accounts for both Business and Residential accounts.
D. Customers are created as Accounts for the residential service and use Parent Account to relate the Business Account.
Suggested answer: A

Explanation:

Creating customers as Contacts and relating them to Business and Residential Accounts using Account Contact Relationships (option A) is the best option for a single view of these customers without creating multiple customer records, as it models complex relationships between customers and accounts using native Salesforce features. Creating customers as Person Accounts (option B) may create data redundancy and inconsistency, and it does not leverage the existing Contact object. Creating customers as Individual objects (option C) requires more customization and maintenance effort, and it does not leverage the existing Account and Contact objects. Creating customers as Accounts for the residential service and using Parent Account to relate the Business Account (option D) adds confusion and complexity to the account hierarchy, and it likewise does not leverage the existing Contact object.

Universal Containers' system administrators have been complaining that they are not able to make changes to user records, including moving users to new territories, without getting "unable to lock row" errors. This is causing the system admins to spend hours updating user records every day.

What should the data architect do to prevent the error?

A. Reduce the number of users updated concurrently.
B. Enable granular locking.
C. Analyze Splunk queries to spot offending records.
D. Increase CPU for the Salesforce org.
Suggested answer: B

Explanation:

Enabling granular locking (option B) is the best option to prevent the error, as it allows finer control over how records are locked during automated or manual processes and reduces the chances of lock contention or deadlock. Reducing the number of users updated concurrently (option A) limits the productivity and efficiency of the system admins, and it does not address the root cause of the error. Analyzing Splunk queries to spot offending records (option C) requires more time and effort, and it does not provide a permanent solution. Increasing CPU for the Salesforce org (option D) is not something customers can configure in a multi-tenant environment, and it would not solve the root cause of the error either.

Northern Trail Outfitters (NTO) wants to start a loyalty program to reward repeat customers. The program will track every item a customer has bought and grants them points for discounts. The following conditions will exist upon implementation:

Data will be used to drive marketing and product development initiatives.

NTO estimates that the program will generate 100 million rows of data monthly.

NTO will use Salesforce's Einstein Analytics and Discovery to leverage their data and make business and marketing decisions.

What should the Data Architect do to store, collect, and use the reward program data?

A. Create a custom big object in Salesforce which will be used to capture the Reward Program data for consumption by Einstein.
B. Have Einstein connect to the point-of-sale system to capture the Reward Program data.
C. Create a big object in Einstein Analytics to capture the Loyalty Program data.
D. Create a custom object in Salesforce that will be used to capture the Reward Program data.
Suggested answer: A

Explanation:

According to the official Salesforce documentation, big objects are designed to store and manage massive data volumes within Salesforce without affecting performance. They can be queried using Async SOQL or standard SOQL, and they can be accessed from Apex, Visualforce, Lightning components, or APIs. Big objects are ideal for storing data that is used for analytics or reporting purposes, such as the reward program data. Option A is the correct answer because it allows NTO to create a custom big object in Salesforce that can store the reward program data and make it available for consumption by Einstein Analytics and Discovery. Option B is incorrect because Einstein cannot connect directly to the point-of-sale system to capture the reward program data. Option C is incorrect because Einstein Analytics does not support creating big objects. Option D is incorrect because custom objects are not suitable for storing such large volumes of data.
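The scale that rules out a standard custom object is easy to quantify from the figures in the scenario. The ~2 KB-per-record figure used below is Salesforce's usual storage sizing approximation and should be treated as a rough estimate:

```python
# Rough arithmetic behind the big-object recommendation: at 100 million rows
# per month (from the scenario), a standard custom object is impractical.
# The 2 KB-per-record figure is the common Salesforce storage sizing
# approximation; treat it as an assumption, not an exact number.

rows_per_month = 100_000_000
kb_per_record = 2

monthly_gb = rows_per_month * kb_per_record / 1_000_000  # KB -> GB (decimal)
yearly_rows = rows_per_month * 12

print(f"~{monthly_gb:,.0f} GB of data storage per month")  # ~200 GB
print(f"{yearly_rows:,} rows per year")                    # 1,200,000,000
```

Growth on that order overwhelms standard data storage allocations within months, which is why big objects (billed and architected for massive volumes) are the fit here.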

Universal Containers (UC) has lead assignment rules to assign leads to owners. Leads not routed by assignment rules are assigned to a dummy user. Sales reps are complaining of high load times and issues with accessing leads assigned to the dummy user.

What should a data architect recommend to solve these performance issues?

A. Assign the dummy user to the last role in the role hierarchy.
B. Create multiple dummy users and assign leads to them.
C. Assign the dummy user to the highest role in the role hierarchy.
D. Periodically delete leads to reduce the number of leads.
Suggested answer: B

Explanation:

According to the official Salesforce documentation, assigning leads to a single dummy user can cause performance issues and data skew, especially if the dummy user owns more than 10,000 records. Data skew occurs when a single user or a small number of users own a disproportionately large number of records, which can affect query performance and sharing calculations. Option B is the correct answer because creating multiple dummy users and assigning leads to them distributes the load and reduces data skew. Option A is incorrect because assigning the dummy user to the last role in the role hierarchy does not affect the performance or data skew issues. Option C is incorrect because assigning the dummy user to the highest role in the role hierarchy can worsen those issues, as it grants access to more users and records. Option D is incorrect because periodically deleting leads causes data loss and does not address the root cause of the problem.
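The idea behind option B can be sketched as a simple round-robin distribution, so that no single owner accumulates enough records to hit the ownership-skew threshold (roughly 10,000 records). Owner and lead IDs below are illustrative placeholders:

```python
# Hedged sketch of the load-distribution idea behind option B: spread
# unrouted leads across several dummy owners round-robin so ownership
# skew never concentrates on one user. IDs are placeholders.
from collections import Counter
from itertools import cycle

def distribute_leads(lead_ids, dummy_owner_ids):
    """Assign each unrouted lead to a dummy owner in round-robin order."""
    owners = cycle(dummy_owner_ids)
    return {lead_id: next(owners) for lead_id in lead_ids}

# Example: 10 unrouted leads spread across 3 dummy owners.
assignments = distribute_leads(
    [f"00Qxx{i:04d}" for i in range(10)],
    ["005DUMMY1", "005DUMMY2", "005DUMMY3"],
)
print(Counter(assignments.values()))  # each owner ends up with 3-4 leads
```

In practice the same balancing could be built into the final lead assignment rule itself; the point is simply that ownership is spread rather than concentrated.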

Northern Trail Outfitters is concerned because some of its data is sensitive and needs to be identified for access.

What should be used to provide ways to filter and identify the sensitive data?

A. Define data grouping metadata.
B. Implement field-level security.
C. Create a custom checkbox denoting sensitive data.
D. Define data classification metadata.
Suggested answer: D

Explanation:

According to the official Salesforce documentation, data classification metadata allows administrators to classify data fields based on their sensitivity level, such as confidential, restricted, or general. Data classification metadata can be used to filter and identify sensitive data fields and apply appropriate security measures, such as encryption, masking, or auditing. Option D is the correct answer because it uses data classification metadata to provide ways to filter and identify sensitive data. Option A is incorrect because data grouping metadata is not a feature in Salesforce. Option B is incorrect because field-level security controls the visibility and editability of fields based on user profiles or permission sets, but it does not provide ways to filter and identify sensitive data. Option C is incorrect because a custom checkbox denoting sensitive data is not scalable or reliable, as it requires manual maintenance and does not enforce any security measures.

Total 260 questions