A retailer wants to unify profiles using a Loyalty ID, which is different from the unique ID of their customers.
Which object should the consultant use in identity resolution to perform exact match rules on the
Loyalty ID?
The Party Identification object is the correct object to use in identity resolution to perform exact match rules on the Loyalty ID. The Party Identification object is a child object of the Individual object that stores different types of identifiers for an individual, such as email, phone, loyalty ID, social media handle, etc. Each identifier has a type, a value, and a source. The consultant can use the Party Identification object to create a match rule that compares the Loyalty ID type and value across different sources and links the corresponding individuals.
The other options are not correct objects to use in identity resolution to perform exact match rules on the Loyalty ID. The Loyalty Identification object does not exist in Data Cloud. The Individual object is the parent object that represents a unified profile of an individual, but it does not store the Loyalty ID directly. The Contact Identification object is a child object of the Contact object that stores identifiers for a contact, such as email, phone, etc., but it does not store the Loyalty ID.
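To make the exact-match behavior concrete, here is a minimal Python sketch of how an exact match rule on Loyalty ID could link source profiles. The record layout and field names (party, id_type, id_value, source) are simplified illustrations of the type/value/source pattern described above, not the actual Party Identification DMO schema.

```python
from collections import defaultdict

# Illustrative Party Identification rows: each carries a type, a value,
# and a source, mirroring the pattern described above (field names are
# hypothetical, not the real DMO schema).
party_identifications = [
    {"party": "ecom-001", "id_type": "LoyaltyId", "id_value": "LY-48213", "source": "Ecommerce"},
    {"party": "pos-917",  "id_type": "LoyaltyId", "id_value": "LY-48213", "source": "POS"},
    {"party": "ecom-002", "id_type": "Email",     "id_value": "ana@example.com", "source": "Ecommerce"},
]

def exact_match_on_loyalty_id(records):
    """Group source profiles whose Loyalty ID values match exactly."""
    buckets = defaultdict(list)
    for rec in records:
        if rec["id_type"] == "LoyaltyId":  # match only on the Loyalty ID type
            buckets[rec["id_value"]].append(rec["party"])
    # Buckets holding two or more parties would be linked into one unified profile.
    return {val: parties for val, parties in buckets.items() if len(parties) > 1}

print(exact_match_on_loyalty_id(party_identifications))
# {'LY-48213': ['ecom-001', 'pos-917']}
```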
Data Modeling Requirements for Identity Resolution
Identity Resolution in a Data Space
A consultant is integrating an Amazon S3 activated campaign with the customer's destination system.
In order for the destination system to find the metadata about the segment, which file on S3 will contain this information for processing?
Data Cloud receives a nightly file of all ecommerce transactions from the previous day.
Several segments and activations depend upon calculated insights from the updated data in order to
maintain accuracy in the customer's scheduled campaign messages.
What should the consultant do to ensure the ecommerce data is ready for use for each of the
scheduled activations?
The best option is A: use Flow to trigger a change data event on the ecommerce data to refresh calculated insights and segments before the activations are scheduled to run. Flow enables automation and orchestration of data processing tasks based on events or schedules. A change data event indicates that the data has been updated or changed; it can trigger the refresh of the calculated insights and segments that depend on the ecommerce data so they reflect the latest transactions. Because the refresh completes before the activations are scheduled to run, the customer's scheduled campaign messages remain accurate and relevant.
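As a rough sketch of the sequence the flow enforces, the Python below models the orchestration. None of these function names are a real Data Cloud or Flow API; they are placeholders that only show the order of operations: change data event, insight refresh, segment refresh, activation readiness.

```python
import datetime

# Placeholder steps; the real refresh logic lives in Data Cloud, not here.
def refresh_calculated_insights(dlo_name: str) -> None:
    print(f"Refreshing calculated insights built on {dlo_name}")

def refresh_dependent_segments(dlo_name: str) -> None:
    print(f"Refreshing segments that depend on insights from {dlo_name}")

def on_change_data_event(dlo_name: str) -> None:
    """Hypothetical handler fired when the nightly file updates the DLO."""
    refresh_calculated_insights(dlo_name)   # insights first, so segments see fresh values
    refresh_dependent_segments(dlo_name)    # then segments, before activations run
    now = datetime.datetime.now()
    print(f"{dlo_name} is ready for the scheduled activations ({now:%Y-%m-%d %H:%M})")

on_change_data_event("Ecommerce_Transactions")
```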
A global fashion retailer operates online sales platforms across AMER, EMEA, and APAC. The data formats for customer, order, and product information vary by region, and compliance regulations require data to remain unchanged in the original data sources. They also require a unified view of customer profiles for real-time personalization and analytics.
Given these requirements, which transformation approach should the company implement to standardize and cleanse incoming data streams?
Given the requirements to standardize and cleanse incoming data streams while keeping the original data unchanged in compliance with regional regulations, the best approach is to implement batch data transformations. Here's why:
Understanding the Requirements
The global fashion retailer operates across multiple regions (AMER, EMEA, APAC), each with varying data formats for customer, order, and product information.
Compliance regulations require the original data to remain unchanged in the source systems.
The company needs a unified view of customer profiles for real-time personalization and analytics.
Why Batch Data Transformations?
Batch Transformations for Standardization:
Batch data transformations allow you to process large volumes of data at scheduled intervals.
They can standardize and cleanse data (e.g., converting different date formats, normalizing product names) without altering the original data in the source systems.
Compliance with Regulations:
Since the original data remains unchanged in the source systems, batch transformations comply with regional regulations.
The transformed data is stored in a separate layer (e.g., a new Data Lake Object or Unified Profile) for downstream use.
Unified Customer Profiles:
After transformation, the cleansed and standardized data can be used to create a unified view of customer profiles in Salesforce Data Cloud.
This enables real-time personalization and analytics across regions.
Steps to Implement This Solution
Step 1: Identify Transformation Needs
Analyze the differences in data formats across regions (e.g., date formats, currency, product IDs).
Define the rules for standardization and cleansing (e.g., convert all dates to ISO format, normalize product names); a code sketch of such rules follows these steps.
Step 2: Create Batch Transformations
Use Data Cloud's Batch Transform feature to apply the defined rules to incoming data streams.
Schedule the transformations to run at regular intervals (e.g., daily or hourly).
Step 3: Store Transformed Data Separately
Store the transformed data in a new Data Lake Object (DLO) or Unified Profile.
Ensure the original data remains untouched in the source systems.
Step 4: Enable Unified Profiles
Use the transformed data to create a unified view of customer profiles in Salesforce Data Cloud.
Leverage this unified view for real-time personalization and analytics.
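As a sketch of what the Step 1 rules could look like in code, the example below standardizes dates to ISO format and normalizes product names, writing results to a separate output file so the source data stays untouched. The CSV layout, column names, and regional date formats are assumptions made for illustration.

```python
import csv
from datetime import datetime

# Assumed regional date formats; extend as new regions are analyzed.
DATE_FORMATS = ["%d/%m/%Y", "%m-%d-%Y", "%Y.%m.%d"]

def to_iso_date(raw: str) -> str:
    """Normalize any known regional date format to ISO 8601 (YYYY-MM-DD)."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

def normalize_product_name(name: str) -> str:
    """Collapse whitespace and title-case so names match across regions."""
    return " ".join(name.split()).title()

def batch_transform(source_path: str, target_path: str) -> None:
    """Read the raw regional feed and write a cleansed copy; the source is never modified."""
    with open(source_path, newline="") as src, open(target_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["order_id", "order_date", "product_name"])
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "order_id": row["order_id"],
                "order_date": to_iso_date(row["order_date"]),
                "product_name": normalize_product_name(row["product_name"]),
            })
```

This mirrors the scheduled, set-based processing of a batch transform: the cleansed output lands in a separate layer (here, a second file; in Data Cloud, a new DLO), which is what keeps the original sources compliant and unchanged.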
Why Not Other Options?
A. Implement streaming data transformations: Streaming transformations are designed for real-time, row-by-row processing and are not well suited to large-scale standardization and cleansing tasks like this one.
C. Transform data before ingesting into Data Cloud: Transforming data before ingestion would mean modifying the data in the source systems, violating the compliance regulations.
D. Use Apex to transform and cleanse data: Apex is overly complex and resource-intensive for this use case; batch transformations are a more efficient and scalable solution.
Conclusion
By implementing batch data transformations, the global fashion retailer can standardize and cleanse its data while complying with regional regulations and enabling a unified view of customer profiles for real-time personalization and analytics.
A consultant is setting up Data Cloud for a multi-brand organization and is using data spaces to segregate its data for various brands.
While starting the mapping of a data stream, the consultant notices that they cannot map the object for one of the brands.
What should the consultant do to make the object available for a new data space?
When setting up Data Cloud for a multi-brand organization, if a consultant cannot map an object for one of the brands during data stream setup, they should navigate to the Data Space tab and select the object to include it in the new data space. Here's why:
Understanding the Issue
The consultant is using data spaces to segregate data for different brands.
While mapping a data stream, they notice that an object is unavailable for one of the brands.
This indicates that the object has not been associated with the new data space.
Why Navigate to the Data Space Tab?
Data Spaces and Object Availability :
Objects must be explicitly added to a data space before they can be used in mappings or transformations within that space.
If an object is missing, it means it has not been included in the data space configuration.
Solution Approach :
By navigating to the Data Space tab, the consultant can add the required object to the new data space.
This ensures the object becomes available for mapping and use in the data stream.
Steps to Resolve the Issue
Step 1: Navigate to the Data Space Tab
Go to Data Cloud > Data Spaces and locate the new data space for the brand.
Step 2: Add the Missing Object
Select the data space and click Edit.
Add the required object (e.g., a Data Model Object or Data Lake Object) to the data space.
Step 3: Save and Verify
Save the changes and return to the data stream setup.
Verify that the object is now available for mapping.
Step 4: Complete the Mapping
Proceed with mapping the object in the data stream.
Why Not Other Options?
A. Create a new data stream and map the second data stream to the data space: Creating a new data stream is unnecessary if the issue is simply object availability in the data space.
B. Copy data from the default data space to a new DMO using the Data Copy feature and link this DMO to the new data space: This is overly complex and not required if the object can simply be added to the data space.
C. Create a batch transform to split data between different data spaces: Batch transforms are used for data processing, not for resolving object availability issues.
Conclusion
The correct solution is to navigate to the Data Space tab and select the object to include it in the new data space. This ensures the object is available for mapping and resolves the issue efficiently.