Salesforce Certified Data Cloud Consultant Practice Test - Questions Answers, Page 2
List of questions
Question 11
What does the Source Sequence reconciliation rule do in identity resolution?
Explanation:
The Source Sequence reconciliation rule sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name. This rule allows you to define which data source should be used as the primary source of truth for each attribute, and which data sources should be used as fallbacks in case the primary source is missing or invalid. For example, you can set the Source Sequence rule to use data from Salesforce CRM as the first priority, data from Marketing Cloud as the second priority, and data from Google Analytics as the third priority for the first name attribute. This way, the unified profile uses the first name value from Salesforce CRM if it exists; otherwise it uses the value from Marketing Cloud, and so on. This rule helps ensure the accuracy and consistency of unified profile attributes across different data sources.
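To make the fallback behavior concrete, here is a minimal Python sketch of priority-based reconciliation. It is illustrative only, not Salesforce code; the source names and record shapes are assumptions.

```python
# Illustrative sketch of Source Sequence-style reconciliation: for each
# attribute, walk the configured source order and keep the first non-empty
# value. Source names and records are hypothetical.

SOURCE_PRIORITY = ["Salesforce CRM", "Marketing Cloud", "Google Analytics"]

def reconcile_attribute(attribute, records_by_source):
    """Return the attribute value from the highest-priority source that has one."""
    for source in SOURCE_PRIORITY:
        value = records_by_source.get(source, {}).get(attribute)
        if value:  # empty/missing values fall through to the next source
            return value
    return None

records = {
    "Salesforce CRM": {"first_name": ""},      # missing in the primary source
    "Marketing Cloud": {"first_name": "Ana"},  # used as the fallback
    "Google Analytics": {"first_name": "A."},
}
print(reconcile_attribute("first_name", records))  # -> "Ana"
```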
Question 12
Which two dependencies prevent a data stream from being deleted?
Choose 2 answers
Explanation:
To delete a data stream in Data Cloud, the underlying data lake object (DLO) must not have any dependencies or references to other objects or processes. The following two dependencies prevent a data stream from being deleted1 (a toy dependency check is sketched after the references below):
Data transform: This is a process that transforms the ingested data into a standardized format and structure for the data model. A data transform can use one or more DLOs as input or output. If a DLO is used in a data transform, it cannot be deleted until the data transform is removed or modified2.
Data model object: This is an object that represents a type of entity or relationship in the data model. A data model object can be mapped to one or more DLOs to define its attributes and values. If a DLO is mapped to a data model object, it cannot be deleted until the mapping is removed or changed3.
1: Delete a Data Stream article on Salesforce Help
2: [Data Transforms in Data Cloud] unit on Trailhead
3: [Data Model in Data Cloud] unit on Trailhead
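Conceptually, the delete check behaves like the toy Python sketch below; the object names and structures are hypothetical and do not reflect the Data Cloud metadata API.

```python
# Toy dependency check: a DLO can only be deleted when no data transform
# and no data model object mapping references it. All names are made up.

data_transforms = {
    "Normalize Contacts": {"inputs": ["Contact_DLO"], "outputs": ["Phone_DLO"]},
}
dmo_mappings = {"Individual": ["Contact_DLO"]}

def blocking_dependencies(dlo_name):
    """List the transforms and DMO mappings that reference a DLO."""
    blockers = []
    for name, transform in data_transforms.items():
        if dlo_name in transform["inputs"] or dlo_name in transform["outputs"]:
            blockers.append(f"data transform: {name}")
    for dmo, sources in dmo_mappings.items():
        if dlo_name in sources:
            blockers.append(f"data model object mapping: {dmo}")
    return blockers

print(blocking_dependencies("Contact_DLO"))
# -> ['data transform: Normalize Contacts', 'data model object mapping: Individual']
```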
Question 13
What should a user do to pause a segment activation with the intent of using that segment again?
Explanation:
The correct answer is A. Deactivate the segment. If a segment is no longer needed, it can be deactivated through Data Cloud, and the deactivation applies to all chosen targets. A deactivated segment no longer publishes, but it can be reactivated at any time1. This option allows the user to pause a segment activation with the intent of using that segment again.
The other options are incorrect for the following reasons:
B. Delete the segment. This option permanently removes the segment from Data Cloud and cannot be undone2. It does not allow the user to use the segment again.
C. Skip the activation. This option skips the current activation cycle for the segment, but does not affect future activation cycles3. It does not pause the segment activation indefinitely.
D. Stop the publish schedule. This option stops the segment from publishing to the chosen targets, but does not deactivate the segment4. It does not pause the segment activation completely.
1: Deactivated Segment article on Salesforce Help
2: Delete a Segment article on Salesforce Help
3: Skip an Activation article on Salesforce Help
4: Stop a Publish Schedule article on Salesforce Help
Question 14
When creating a segment on an individual, what is the result of using two separate containers linked by an AND as shown below?
GoodsProduct | Count | At Least | 1
Color | Is Equal To | red
AND
GoodsProduct | Count | At Least | 1
PrimaryProductCategory | Is Equal To | shoes
Explanation:
When creating a segment on an individual, using two separate containers linked by an AND means the individual must satisfy the conditions in both containers independently. In this case, the individual must have purchased at least one product with the color attribute equal to 'red' and at least one product with the primary product category attribute equal to 'shoes'. These do not have to be the same product or purchased in the same transaction. Therefore, the correct answer is A.
The other options are incorrect because they imply different logical operators or conditions. Option B implies that the individual must have purchased a single product that has both the color attribute equal to 'red' and the primary product category attribute equal to 'shoes'. Option C implies that the individual must have purchased only one product that has both attributes and no other products. Option D implies that the individual must have purchased either one product with the color attribute equal to 'red' or one product with the primary product category attribute equal to 'shoes', or both, which is equivalent to using an OR operator instead of an AND operator.
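The container semantics can be pictured with a short, illustrative Python sketch: each container is evaluated independently over the individual's related GoodsProduct rows, and the AND joins the container results rather than the row-level conditions.

```python
# Two containers joined by AND: each is checked against all of the
# individual's products independently. Product rows are made up.

products = [
    {"Color": "red", "PrimaryProductCategory": "hats"},
    {"Color": "blue", "PrimaryProductCategory": "shoes"},
]

container_1 = sum(1 for p in products if p["Color"] == "red") >= 1
container_2 = sum(1 for p in products if p["PrimaryProductCategory"] == "shoes") >= 1
print(container_1 and container_2)  # -> True: the two matches are different rows

# A single-container, row-level AND would require one product to match both:
single_row = any(
    p["Color"] == "red" and p["PrimaryProductCategory"] == "shoes" for p in products
)
print(single_row)  # -> False
```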
Create a Container for Segmentation
Create a Segment in Data Cloud
Navigate Data Cloud Segmentation
Question 15
What should an organization use to stream inventory levels from an inventory management system into Data Cloud in a fast and scalable, near-real-time way?
Explanation:
The Ingestion API is a RESTful API that allows you to stream data from any source into Data Cloud in a fast and scalable way. You can use the Ingestion API to send data from your inventory management system into Data Cloud as JSON objects, and then use Data Cloud to create data models, segments, and insights based on your inventory data. The Ingestion API supports both batch and streaming modes, and can handle up to 100,000 records per second. The Ingestion API also provides features such as data validation, encryption, compression, and retry mechanisms to ensure data quality and security.
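As a rough sketch, a client could stream an inventory record with an HTTP POST like the Python below. The host, source/object path, payload shape, and token handling are placeholders; the real values come from the org's Ingestion API connector configuration and OAuth setup.

```python
import requests

# Placeholder tenant endpoint and connector/object names -- adjust to match
# the Ingestion API source configured in your org.
TENANT = "https://your-tenant.c360a.salesforce.com"
ENDPOINT = f"{TENANT}/api/v1/ingest/sources/inventory_connector/inventory_levels"

payload = {
    "data": [
        {
            "sku": "TENT-001",
            "warehouse": "DFW",
            "on_hand": 42,
            "updated_at": "2024-01-15T10:30:00Z",
        }
    ]
}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": "Bearer <access_token>"},  # placeholder token
    timeout=10,
)
resp.raise_for_status()  # surface ingestion errors instead of failing silently
```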
Question 16
Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new line of business. The new business specializes in gourmet camping food. For business reasons as well as security reasons, it's important to NTO to keep all Data Cloud data separated by brand.
Which capability best supports NTO's desire to separate its data by brand?
Explanation:
Data spaces are logical containers that allow you to separate and organize your data by different criteria, such as brand, region, product, or business unit1. Data spaces can help you manage data access, security, and governance, as well as enable cross-cloud data integration and activation2. For NTO, data spaces can support the desire to separate data by brand, so that the outdoor lifestyle clothing and gourmet camping food businesses can have different data models, rules, and insights. Data spaces can also help NTO comply with any data privacy and security regulations that may apply to its different brands3.

The other options do not provide the same level of data separation and organization as data spaces. Data streams are used to ingest data from different sources into Data Cloud, but they do not separate the data by brand4. Data model objects are used to define the structure and attributes of the data, but they do not isolate the data by brand5. Data sources are used to identify the origin and type of the data, but they do not partition the data by brand.
Question 17
Cumulus Financial created a segment called High Investment Balance Customers. This is a foundational segment that includes several segmentation criteria the marketing team should consistently use.
Which feature should the consultant suggest the marketing team use to ensure this consistency when creating future, more refined segments?
Explanation:
Nested segments are segments that include or exclude one or more existing segments. They allow the marketing team to reuse filters and maintain consistency in their data by using an existing segment to build a new one. For example, the marketing team can create a nested segment that includes High Investment Balance Customers and excludes customers who have opted out of email marketing. This way, they can leverage the foundational segment and apply additional criteria without duplicating the rules (see the sketch after this list). The other options are not the best features to ensure consistency because:
B. A calculated insight is a data object that performs calculations on data lake objects or CRM data and returns a result. It is not a segment and cannot be used for activation or personalization.
C. A data kit is a bundle of packageable metadata that can be exported and imported across Data Cloud orgs. It is not a feature for creating segments, but rather for sharing components.
D. Cloning a segment creates a copy of the segment with the same rules and filters. It does not allow the marketing team to add or remove criteria from the original segment, and it may create confusion and redundancy.
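The reuse that nesting provides can be pictured with a toy Python sketch; the balance threshold and field names are made up for illustration.

```python
# Toy predicates standing in for segment rules. The foundational rules are
# defined once and reused, rather than re-typed, by the refined segment.

def high_investment_balance(customer):
    """Foundational segment criteria (threshold is hypothetical)."""
    return customer["investment_balance"] >= 250_000

def refined_segment(customer):
    """Refined segment that nests (includes) the foundational one."""
    return high_investment_balance(customer) and not customer["email_opt_out"]

customers = [
    {"investment_balance": 300_000, "email_opt_out": False},
    {"investment_balance": 300_000, "email_opt_out": True},
]
print([refined_segment(c) for c in customers])  # -> [True, False]
```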
Question 18
Cumulus Financial uses Service Cloud as its CRM and stores mobile phone, home phone, and work phone as three separate fields for its customers on the Contact record. The company plans to use Data Cloud and ingest the Contact object via the CRM Connector.
What is the most efficient approach that a consultant should take when ingesting this data to ensure all the different phone numbers are properly mapped and available for use in activation?
Explanation:
The most efficient approach is B: ingest the Contact object and use streaming transforms to normalize the phone numbers from the Contact data stream into a separate Phone data lake object (DLO) containing three rows per contact (one per phone type), and then map this new DLO to the Contact Point Phone data map object.

This approach uses the streaming transforms feature of Data Cloud, which enables data manipulation and transformation at the time of ingestion, without requiring any additional processing or storage. Streaming transforms can normalize the phone numbers from the Contact data stream, for example by removing spaces, dashes, or parentheses and adding country codes where needed. The normalized numbers are stored in the Phone DLO, with one row for each phone number type (work, home, mobile). The Phone DLO is then mapped to the Contact Point Phone data map object, the standard object that represents a phone number associated with a contact point. This ensures all the phone numbers are available for activation, such as sending SMS messages or making calls to customers.
The other options are not as efficient as option B. Option A is incorrect because it does not normalize the phone numbers, which may cause issues with activation or identity resolution. Option C is incorrect because creating a calculated insight is an additional step that consumes more resources and time than streaming transforms. Option D is incorrect because creating formula fields in the Contact data stream may not be supported by the CRM Connector and may conflict with existing fields on the Contact object.
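The transform described above is essentially an unpivot plus normalization, as in this illustrative Python sketch; the normalization rules are assumptions, and this is plain Python rather than the actual streaming transform syntax.

```python
import re

def normalize(raw):
    """Strip formatting characters, keeping digits and '+' (an assumed rule)."""
    return re.sub(r"[^\d+]", "", raw or "")

def contact_to_phone_rows(contact):
    """Unpivot the three Contact phone fields into one row per phone type."""
    rows = []
    for field, phone_type in [("MobilePhone", "Mobile"),
                              ("HomePhone", "Home"),
                              ("Phone", "Work")]:
        number = normalize(contact.get(field))
        if number:
            rows.append({"ContactId": contact["Id"],
                         "PhoneType": phone_type,
                         "PhoneNumber": number})
    return rows

contact = {"Id": "003XX0001", "MobilePhone": "(415) 555-0101",
           "HomePhone": "415-555-0102", "Phone": "+1 415 555 0103"}
for row in contact_to_phone_rows(contact):
    print(row)  # three rows: Mobile, Home, Work
```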
Question 19
A customer has a Master Customer table from their CRM to ingest into Data Cloud. The table contains a name and primary email address, along with other personally identifiable information (PII).
How should the fields be mapped to support identity resolution?
Explanation:
To support identity resolution in Data Cloud, the fields from the Master Customer table should be mapped to the standard data model objects that are designed for this purpose. The Individual object is used to store the name and other personally identifiable information (PII) of a customer, while the Contact Phone Email object is used to store the primary email address and other contact information of a customer. These objects are linked by a relationship field that indicates the contact information belongs to the individual. By mapping the fields to these objects, Data Cloud can use the identity resolution rules to match and reconcile profiles from different sources based on the name and email address fields.

The other options are not recommended because they either create a new custom object that is not part of the standard data model, map all fields to the Customer object, which is not intended for identity resolution, or map all fields to the Individual object, which does not have a standard email address field.
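The mapping can be pictured as a simple routing table, as in this illustrative Python sketch; the column, object, and field names are representative rather than an exhaustive schema.

```python
# Hypothetical source column -> (data model object, field) routing.
FIELD_MAPPING = {
    "Name":          ("Individual", "First Name / Last Name"),
    "Date_of_Birth": ("Individual", "Birth Date"),
    "Primary_Email": ("Contact Phone Email", "Email Address"),
}

def map_row(row):
    """Route each Master Customer column to its target object and field."""
    return [
        (dmo, field, row[column])
        for column, (dmo, field) in FIELD_MAPPING.items()
        if column in row
    ]

print(map_row({"Name": "Ana Lopez", "Primary_Email": "ana@example.com"}))
```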
Question 20
Cloud Kicks received a Request to be Forgotten by a customer.
In which two ways should a consultant use Data Cloud to honor this request?
Choose 2 answers
Explanation:
To honor a Request to be Forgotten by a customer, a consultant should use Data Cloud in two ways:
Add the Individual ID to a headerless file and use the delete from file functionality. This option allows the consultant to delete multiple Individuals from Data Cloud by uploading a CSV file with their IDs1. The deletion process is asynchronous and can take up to 24 hours to complete1. (A sketch of building this file follows the list below.)
Use the Consent API to suppress processing and delete the Individual and related records from source data streams. This option allows the consultant to submit a Data Deletion request for an Individual profile in Data Cloud using the Consent API2. A Data Deletion request deletes the specified Individual entity and any entities where a relationship has been defined between that entity's identifying attribute and the Individual ID attribute2. The deletion request is reprocessed at 30, 60, and 90 days to ensure a full deletion2.
The other options are not correct because:
Deleting the data from the incoming data stream and performing a full refresh will not delete the existing data in Data Cloud, only the new data from the source system3.
Using Data Explorer to locate and manually remove the Individual will not delete the related records from the source data streams, only the Individual entity in Data Cloud.
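Building the headerless file is straightforward, as in this illustrative Python sketch; the Individual IDs are made up.

```python
import csv

# One Individual ID per row, with no header line, ready for the
# delete-from-file upload. IDs below are placeholders.
individual_ids = ["0A1xx0000001", "0A1xx0000002"]

with open("individuals_to_delete.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for individual_id in individual_ids:
        writer.writerow([individual_id])  # single column, no header
```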
1: Delete Individuals from Data Cloud
2: Requesting Data Deletion or Right to Be Forgotten
3: Data Refresh for Data Cloud
[Data Explorer]