Issues Encountered in Deployment from Lower to Higher Sandboxes
Issue: Dashboard errored out because the recipe needed to be run (invalid Dataset ID).
Solution: Run the recipe.

Issue: Recipe can't run because fields were missing.
Solution: Verify the fields exist in SFDC using Object Manager, then go into the proper perm set > Object Settings to see that 'Read' permission is checked, then assign the perm set to the user, the Integration User, and the Security User. After this step, verify the fields are ready to be synced via Analytics Studio > Connections > Edit Objects, and check the fields to be included. Verify the fields are in the ingestion nodes of the recipe, then run the recipe. After it finishes, the dashboard should open.

Issue: Lightning web page 'X' was missing in the tab list.
Solution: Go to Lightning App Builder, find the Lightning web page there, then Save > Activate. After the above, add the menu item in the page desired.

Make sure to put all components needed in the 'Studio Hub Components' tab. This includes perm sets, image widgets, dataflows and recipes, dashboards, and Lightning web pages.
Use Case: An international media conglomerate client of mine needs to show certain dashboards to certain people. Due to business rules, groups and roles can't be used--only specific people. So the Executive Dashboard should only be seen by John and Jill BUT not Sam, even though all 3 are part of the same group, role, or profile. Another dashboard applies to another set of people, etc.
Solution: To achieve this, custom permission sets are used. These permission sets are then contained inside the 'main' perm set. Assign the custom permission set 'x' to the users allowed to see the dashboard using 'Manage Assignments' on the Settings > Users page. Afterwards, drag the dashboard widget onto a Lightning web page, then use filters for component visibility.
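One hedged way to wire up that visibility filter: include a custom permission in each of those permission sets, then filter the dashboard component's visibility on that permission in Lightning App Builder. The token below is a sketch--Exec_Dash_Access is an illustrative custom permission name, and the exact expression shape can vary by API version:

{!$Permission.CustomPermission.Exec_Dash_Access}

Set the filter so the component shows only when this evaluates to true for the running user; John and Jill get the custom permission through their perm set, Sam does not.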
Use Cases for Identity Resolution Utilizing SFDC Data Cloud

A publicly traded firm of mine needed some help in unifying their clients' Contact profiles.

Use Case #1: They needed to 'roll' their contacts that have the same Provider ID into a single profile.
Example:
John Q Public / Provider ID = 12345
Jonathan Q. Public / Provider ID = 12345
UNIFIED into 1
Jonathan Q. Public / Provider ID = 12345

Use Case #2: Surface a unified profile of 2 or more of the same person using the 'better' account name and not the generic account that sometimes gets assigned to them. Generic accounts such as 'Customer Support' or 'Case Account' are less preferred than actual account names like 'XYZ Hospital' or 'ABC Radiology Partners, LLC'.
Example:
Jane Jones / email: jjones@gmail.com / Account = 'Customer Support'
Janet Jones / email: jjones@gmail.com / Account = 'XYZ Radiology Group'
UNIFIED into 1
Janet Jones / email: jjones@gmail.com / Account = 'XYZ Radiology Group'

Step 1: Ingest the SFDC Contacts object.

Step 2: Create a data transform to separate the 'primary' contacts (i.e., the ones with the better accounts) from the 'secondary' contacts (i.e., the ones with the less preferred accounts--Customer Support, etc.). This is done using a number of filter transformations. (Additional filters can be added using 'or' logic to filter out other bad records, like emails having the word 'test' in them, etc.) Important: the primary and secondary filters must be mutually exclusive (i.e., 1 contact can only belong to one group).
Primary Filter: NOT(Account == 'Customer Support' or email in 'Test')
Secondary Filter: Account == 'Customer Support' or email in 'Test'
When the data transform is done, inspect the results using Data Explorer to verify proper binning. For instance, out of let's say 100,000 Contacts, verify mutual exclusivity--i.e., 30,000 go to the 'Primary' Data Lake Object (DLO) and 70,000 go to the 'Secondary' DLO. Create 2 formula fields for the Party Labels to be used in ID resolution (node 'addPartyLabel' x2). Example: Party Identification Type = 'ProvType' and Party Identification Name = 'PROVIDER ID'.

Step 3: Map both the Primary and Secondary DLOs to the 'Individual' DMO. IMPORTANT: Make sure to properly map any party identifiers to assist in ID resolution. Examples of party identifiers are AAA memberships, loyalty club numbers, frequent flyer IDs, driver's licenses, Provider IDs, etc.

Step 4: Create match rules and other ID resolution parameters, and once done, verify the success of the Unified Profile using Data Explorer.
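The 'addPartyLabel' nodes from Step 2 boil down to a pair of constant formula fields per DLO, which Step 3 then maps to the Party Identification fields alongside the identifier itself. A hedged recipe-formula sketch, mirroring the document's example values (the Provider_ID__c source field name is illustrative):

-- constant labels telling ID resolution what kind of identifier this is
'ProvType' as Party_Identification_Type
'PROVIDER ID' as Party_Identification_Name
-- the actual identifier value carried alongside the labels
Provider_ID__c as Identification_Number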
Simple case statement to render checks and x's to signify true or false. You can buy a vector image ($5-$20) or use Microsoft Word and snip the image.

case when ('primParentPrtner.Training_Compliance__c' == "false") then "✅" else "❌" end as 'icon'

Got a use case from a cohort looking for help in creating a heat map that shows an employee's allocation in hours for a project. In the mockup csv file I created for this POC, we can see that Dan is engaged in a project from Jan 2023 to July 2023 for 10 hours... engaged in Feb 2024 to March 2024 for 20 hours, and so on. The challenge is that the data only provides the start date and end date, which is not a problem for a 2-month engagement. However, Dan's hours need to be on the heat map for the missing months! In the above example he needs a plot for Feb 2023, Mar 2023, April 2023... up to July 2023. How? By using the fill() function and some string and data manipulation. The big secret is creating a string called 'concat' that embeds the needed info, which will be propagated into the newly generated months by the fill() function. One concatenates the start date, end date, allocation and name... then 'decodes' them in the subsequent generate statements, calculates the epoch seconds, etc. A case statement then goes row by row to mark the 'busy' field as true or false depending on whether the row is inside the project span.

Code:
q = load "DanCSV";
-- encode the concat string to contain the important info
q = foreach q generate q.'Emp' as 'emp', q.'Allocation', q.'Start_Date_Month', q.'Start_Date_Year', q.'Start_Date_Day', q.'End_Date', 'Start_Date'+'End_Date'+"!"+'Emp'+"!"+number_to_string('Allocation',"0") as 'concat';
-- use the fill function to generate the missing dates
q = fill q by (dateCols=('Start_Date_Year','Start_Date_Month','Start_Date_Day',"Y-M-D"), startDate="2023-01-01", endDate="2024-12-31", partition='concat');
-- start to decode the concat by getting the NameIndx and AllocIndx indices for later use
q = foreach q generate 'concat','emp','Start_Date_Year'+"-"+'Start_Date_Month'+"-"+'Start_Date_Day' as 'TopDay', substr('concat',1,10) as 'ProjStart', substr('concat',11,10) as 'ProjEnd', index_of('concat',"!",1,1) as 'NameIndx', index_of('concat',"!",1,2) as 'AllocIndx';
-- this 'unpacks' the concat string back into its original components
q = foreach q generate 'concat','emp','TopDay','ProjStart','ProjEnd','NameIndx','AllocIndx', 'AllocIndx'-'NameIndx'-1 as 'NameLength';
-- retrieve the name and allocation using the indexes
q = foreach q generate 'concat','emp','TopDay','ProjStart','ProjEnd','NameIndx','AllocIndx','NameLength', substr('concat','NameIndx'+1,'NameLength') as 'NewEmp', substr('concat','AllocIndx'+1) as 'NewAlloc';
-- surface the epoch seconds for all the dates
q = foreach q generate 'NewEmp','TopDay','ProjStart','ProjEnd','NewAlloc', date_to_epoch(toDate('TopDay',"yyyy-MM-dd")) as 'TopDaySec', date_to_epoch(toDate('ProjStart',"yyyy-MM-dd")) as 'ProjStartSec', date_to_epoch(toDate('ProjEnd',"yyyy-MM-dd")) as 'ProjEndSec', month_first_day(toDate('TopDay',"yyyy-MM-dd")) as 'monthFD';
-- compare TopDay to the start/end dates and flag rows that are within the project span
q = foreach q generate 'NewEmp','TopDay','ProjStart','ProjEnd','NewAlloc','TopDaySec','ProjStartSec','ProjEndSec','monthFD', case when ('TopDaySec' >= 'ProjStartSec' and 'TopDaySec' <= 'ProjEndSec') then "true" else "false" end as 'busy';
-- show only busy rows
q2 = filter q by 'busy' == "true";
q2 = foreach q2 generate 'NewEmp','TopDay','ProjStart','ProjEnd','NewAlloc','monthFD','busy';
q2 = group q2 by ('NewEmp','monthFD');
q2 = foreach q2 generate 'NewEmp','monthFD', string_to_number(min('NewAlloc')) as 'alloc';

The grouped result can then be charted as a heat map: 'NewEmp' on one axis, 'monthFD' on the other, with 'alloc' as the cell measure.

User Story: As a user, I need to tally any tasks (call, email, meeting, etc.) that were done in a certain period. I also want to know whether the source of the email, call, etc. was the Opportunity, Account, Contact or Lead object.
Solution: Ingest the Task and/or Event objects and use the WhoId and WhatId fields to join to the above 4 objects. These 2 fields on Task and Event are 'polymorphic'--which means their purpose changes according to which object they are connected to. (In object-oriented programming, a command 'animal.move' is said to be polymorphic since it means crawl for a snake or fly for a bird... but I digress.) To this point, WhatId can augment to either an Account or an Opportunity; WhoId can augment to either a Lead or a Contact. More documentation here from SalesforceBen: https://www.salesforceben.com/what-is-the-difference-between-whoid-and-whatid/

After joining the 4 objects into the Task (left grain), we can then use a case statement in the recipe to label each row in Task--whether the task is from an Oppty, Acct, Lead or Contact. Here is the SAQL:

case when WhoId is null and "opty.Id" is not null then 'Opty'
     when WhoId is null and "opty.Id" is null then 'Acct'
     when WhoId is not null and "Lead.Id" is not null then 'Lead'
     when WhoId is not null and "Lead.Id" is null then 'Contact'
end

(A SAQL sketch for tallying these labeled tasks follows the next paragraph.)

One of the core value propositions of Data Cloud is data harmonization--a fancy term for consolidating multiple profiles from disparate data sources into a single 'Unified Profile'. Why is this important? Clean data is foundational to effective and accurate AI endeavors. Otherwise, the training data for machine learning will be riddled with inaccurate samples, duplicate rows, etc. Simply put... bad data begets bad AI.
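Picking the task tally back up, here is a minimal SAQL sketch of the dashboard-side query. It assumes the joined dataset is registered as "TasksJoined", the case statement's output landed in a 'TaskSource' column, and ActivityDate was exploded into _Year/_Month/_Day fields--all of these names are illustrative:

q = load "TasksJoined";
-- restrict to the reporting period using SAQL relative dates
q = filter q by date('ActivityDate_Year','ActivityDate_Month','ActivityDate_Day') in ["current quarter".."current quarter"];
-- tally the tasks by their labeled source object
q = group q by 'TaskSource';
q = foreach q generate 'TaskSource', count() as 'Tasks';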
User Story: Company XYZ has an SFDC instance with 3 customers in Contacts -- William Hall (email: W_Hall@gmail.com), William Hall (email: WilliamHall3@aol.com) and Will Hall (email: W_Hall@gmail.com). Through proper configuration of Match and Reconciliation rules, Data Cloud (formerly known as Customer Data Platform or CDP) will be able to consolidate the 3 Mr. Halls into the cleaner version consisting of 2 William Halls -- the William Hall with the gmail address and the William Hall with the aol.com address. The match rule that will trigger the above harmonization is 'exact last name + fuzzy first name + exact email'. Additional rules can be layered on top of this, such as 'exact frequent flyer number' or 'exact driver's license number', etc., to further 'unify' profiles -- more rules added means more consolidation (i.e., an increased consolidation rate). The snips below illustrate this: the 1st set of snips has 4 match rules -- Driver License (DL) OR car club OR exact last name + email OR frequent flyer. This set unified 200 profiles into 196. The 2nd set added a 5th match rule, 'Motor Club', which means that if a person has the exact same motor club ID in 1 or more objects involved in the harmonization, they are to be unified into 1 profile.

Security Regime for a CRM Analytics Implementation (January 19, 2024)
User Story: Imagine you are tasked with implementing a robust security strategy for a Salesforce CRM environment. The organization deals with sensitive customer data and requires a secure analytics solution that balances data accessibility with stringent security measures. Assignment Task - share how you would approach the following:
• Data Access Control
• Audit Trail and Monitoring
• External Data Source Security

Proposed Solution:

Data Access Control - can be achieved by utilizing varying methods or layers in the CRMA environment. It would start with app-level security, where CRMA asset access can be restricted to a certain group of users. As an example, if business unit "A" should only have access to 3 dashboards, the 3 dashboards (and their underlying datasets) should be saved in an app which is then shared with that group of users.

The next aspect of CRMA security is object- or field-level security. This would involve modifying the object and field access of the Integration User and the Security User, since any attempt to access objects and fields not given permission would result in the dataflows failing. This layer can be implemented in place of, or in combination with, the app access layer should the requirements call for object access with restrictions on certain fields for certain groups.

The next layer of CRMA security is utilizing SFDC's sharing rules in combination with security predicates. The use of one or both would depend on the size of the enterprise and the types of objects to be secured (due to the limitations of the sharing rules). In addition, performance might need to be considered, since there are overhead costs to using the sharing rules. In terms of security predicates, flatten transformations would be used to ingest the manager hierarchy, the role hierarchy, opportunity teams and other sharing hierarchies (ingested through CSVs) to enforce row-level security requirements. A sharing-hierarchy dataset can also be curated. This dataset -- which can be refreshed weekly -- contains the aforementioned hierarchies and is ingested into new recipes: an efficient way to streamline the process and replace 4+ nodes with a single node.

In addition to the layers above, a superuser category can also be created -- achieved by 1) adding a 'Dataset Access' field on the user profile, set to, let's say, "True", and 2) adding to the recipe transformations a constant 'ViewAllFlag' field set to "True". This would enable a certain category of users to bypass the security restrictions and have complete access to the CRMA datasets/assets. Examples of these personas would be external ad-hoc CRMA developers, or senior people that manage business units not adequately expressed by the existing role hierarchies.

Here is an example of a security predicate which makes rows available only to opportunity leaders, opportunity owners, account owners, managers of the owners, users in the owner's role hierarchy, and superusers:

'OpportunityId.Opportunity_Leader__c' == "$User.Id" || 'OwnerId.Id' == "$User.Id" || 'Account.OwnerId' == "$User.Id" || 'OwnerId.ManagerMulti' == "$User.Id" || 'OwnerId.UserRoleId.Roles' == "$User.UserRoleId" || 'ViewAllFlag' == "$User.Dataset_Access__c"

Once all these layers get implemented, the CRMA admin will need to test them by assuming different identities and checking that the asset-access, object/field and row-level controls work as defined.
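A compact sketch of the superuser bypass wired into that last predicate clause (the field names are the ones used above; the formula is recipe-style):

-- recipe formula node: stamp a constant flag on every row of the dataset
"True" as ViewAllFlag
-- User object: custom field Dataset_Access__c holds "True" only for superusers
-- predicate clause: every row passes for superusers, since "True" == "True"
'ViewAllFlag' == "$User.Dataset_Access__c"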
Audit Trail and Monitoring - Monitoring the security regime discussed would involve subscribing to external tools to track changes in the security predicates. This is because security predicate changes are not logged in the SFDC audit trail. Third-party change data capture (CDC) tools are available which capture changes to SFDC data, such as the ones discussed in this thread:
https://trailhead.salesforce.com/trailblazer-community/feed/0D54S00000HDtugSAD

In addition to monitoring changes to the predicates, the sharing-hierarchy dataset has to be refreshed periodically to track users that have been activated/deactivated in the User object.

External Data Source Security - can be implemented by either pre-filtering the data rows before syncing them into CRMA (using the connectors) or post-filtering them after they get synced. Performance and generalizability-of-use factors need to be considered in deciding which method to use.
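To make the pre- vs. post-filter choice concrete, a hedged sketch (object and field names are illustrative): a pre-filter is a row-level filter set on the connected object's data sync, written SOQL-WHERE style, while a post-filter is an ordinary filter applied after the rows have already landed:

-- pre-filter: data sync filter on the connected object (excluded rows never reach CRMA)
CreatedDate = LAST_N_YEARS:2 AND Status__c != 'Archived'

-- post-filter: SAQL-style filter after the sync (all rows land, then get trimmed)
q = load "External_Orders";
q = filter q by 'Status__c' != "Archived";

The pre-filter saves sync time and storage but is connector-specific; the post-filter is more general but pays the cost of syncing rows you will drop.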