Let's say you are able to plot an object through time using a dataflow loop. You would have a set of data, e.g. a snapshot of open opportunities captured every day at 5 AM. In addition to plotting this time series, you want to plot the changes from snapshot Date 1 to snapshot Date 2, grouped by a certain field such as zip code, region, or business unit. Below is the SAQL behind the waterfall chart.

q = load "zds_IoTForecast5";
startVal = filter q by {{column(qDateTest_1.selection,["cTimeStampText"]).asEquality('cTimeStampText')}};
endVal = filter q by {{column(q_SnapDate2_1.selection,["cTimeStampText"]).asEquality('cTimeStampText')}};
q_A = filter q by {{column(qDateTest_1.selection,["cTimeStampText"]).asEquality('cTimeStampText')}};
q_B = filter q by {{column(q_SnapDate2_1.selection,["cTimeStampText"]).asEquality('cTimeStampText')}};
startVal = group startVal by all;
startVal = foreach startVal generate "Start Value" as '{{column(static_6.selection,["Valu"]).asObject()}}', number_to_string(sum(startVal.'TotValueUSD'),"$#,###") as 'Forecast Delta';
endVal = group endVal by all;
result = group q_A by '{{column(static_6.selection,["Valu"]).asObject()}}' full, q_B by '{{column(static_6.selection,["Valu"]).asObject()}}';
result = foreach result generate coalesce(q_A.'{{column(static_6.selection,["Valu"]).asObject()}}', q_B.'{{column(static_6.selection,["Valu"]).asObject()}}') as '{{column(static_6.selection,["Valu"]).asObject()}}', coalesce(sum(q_A.'TotValueUSD'),0) as 'A', coalesce(sum(q_B.'TotValueUSD'),0) as 'B';
result = foreach result generate '{{column(static_6.selection,["Valu"]).asObject()}}', number_to_string(B-A,"$#,###") as 'Forecast Delta';
endVal = foreach endVal generate "Ending Value" as '{{column(static_6.selection,["Valu"]).asObject()}}', number_to_string(sum(endVal.'TotValueUSD'),"$#,###") as 'Forecast Delta';
result = order result by '{{column(static_6.selection,["Valu"]).asObject()}}' asc;
final = union startVal, result, endVal;
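For context, the "dataflow loop" that builds such a snapshot dataset typically appends today's rows, stamped with a text date, onto the existing dataset and re-registers it under the same alias. Below is a minimal sketch of that pattern, assuming the dataset alias zds_IoTForecast5 and the cTimeStampText field from the query above; the node names, the Opportunity object, the field list, and the MM/dd/yyyy date format are illustrative assumptions only, and your own source fields (such as TotValueUSD) would replace them.

{
  "extractOppty": {
    "action": "sfdcDigest",
    "parameters": {
      "object": "Opportunity",
      "fields": [
        { "name": "Id" },
        { "name": "StageName" },
        { "name": "Amount" }
      ]
    }
  },
  "stampToday": {
    "action": "computeExpression",
    "parameters": {
      "source": "extractOppty",
      "mergeWithSource": true,
      "computedFields": [
        {
          "name": "cTimeStampText",
          "type": "Text",
          "label": "Snapshot Date",
          "saqlExpression": "date_to_string(now(), \"MM/dd/yyyy\")"
        }
      ]
    }
  },
  "loadHistory": {
    "action": "edgemart",
    "parameters": { "alias": "zds_IoTForecast5" }
  },
  "appendSnapshots": {
    "action": "append",
    "parameters": { "sources": [ "stampToday", "loadHistory" ] }
  },
  "registerSnapshots": {
    "action": "sfdcRegister",
    "parameters": {
      "alias": "zds_IoTForecast5",
      "name": "zds_IoTForecast5",
      "source": "appendSnapshots"
    }
  }
}

Each run adds one day's worth of rows tagged with that day's date, which is what makes the Date 1 vs Date 2 comparison above possible.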
Here is how to set your y-axis minimum dynamically so it is not pinned to zero. This lets users view the data at a better angle, since the variance is more pronounced when the axis does not always start at zero. Go into the dashboard JSON, find the chart widget, and set its measureAxis1 as follows.
."measureAxis1": { "sqrtScale": true, "showTitle": true, "showAxis": true, "title": "", "customDomain": { "low": "{{column(q_AxisDomain_1.result,[\"lowest\"]).asObject()}}", "showDomain": true "query": "q = load \"zds_IoTForecast5\";\nstartVal = filter q by {{column(qDateTest_1.selection,[\"cTimeStampText\"]).asEquality('cTimeStampText')}};\nendVal = filter q by {{column(q_SnapDate2_1.selection,[\"cTimeStampText\"]).asEquality('cTimeStampText')}};\nresult = group startVal by all full, endVal by all;\nresult = foreach result generate ( (sum(endVal.'TotValueUSD') + sum(startVal.'TotValueUSD'))/2) *.5 as 'Avg',(case when sum(endVal.'TotValueUSD') < sum(startVal.'TotValueUSD') then (sum(endVal.'TotValueUSD')) * .95 else (sum(startVal.'TotValueUSD'))*.95 end) as 'lowest';", replace the generate statement with number_to_string(sum (startVal.'TotValueUSD'),\"$#,###\") as 'Forecast Delta'
This is a use-case requested by a non-profit org ("NPO"). They use variants of the LYBUNT ("Last Year But Unfortunately Not This year") and SYBUNT ("Some Years But Unfortunately Not This year") reports to track donations. There are several issues to consider for this particular ask. The first is that the NPO wants the reports to track fiscal-year donations, which means the org should have fiscal years enabled. Assuming it is enabled, the complexity lies in picking out accounts that had donations last FY and did not make any donation this FY, or any other combination thereafter. It is somewhat complex on the CRM side. (This would be a breeze using the Einstein / Tableau CRM platform, but we will reserve that discussion for another day.) Illustratively, given the following donation records for accounts A, B, and C, with last FY running June 1, 2019 to May 31, 2020, the report should pick out only account B if you want donations last FY and donations this FY. In addition, you can use the logical operators <, >, and == to create other variations (e.g. no donations last FY but donations this FY, etc.).
Account A donated $100 with CloseDate = June 2, 2017
Account B donated $50 with CloseDate = June 2, 2019
Account B donated $10 with CloseDate = Jan 20, 2021
Account C donated $75 with CloseDate = June 5, 2019

Here are the steps: 1) Create a custom field "lastFY" as a rollup summary from the Opportunity object (aka Donations in the NPSP app); it resolves to a number datatype. 2) Create a report filtered on that field; by varying the logical operators, you can create the different report variations described above.

At first glance, this next use-case seems trivial. First, contacts that were sent emails in 2018 and 2019 need to be counted. Then, all of those contacts are considered 'lapsed' if their emails bounced in 2020 or were not opened in 2020. I had to extract the year because the fields holding the email-opened and email-bounced dates were strings, and the dates can appear as xx/xx/2020, x/x/2020, or x/xx/2020, so I used the len() function: substr('Date_Email_Opened', len('Date_Email_Opened') - 3, 4). Two derived fields were created this way, openedYr and bouncedYr, alongside the raw data. In addition, two compute relative transformations were added to create the 'bouncedIn2020' and 'openedIn2020' fields. A sketch of the year-extraction node is shown below.
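As a rough illustration only, a computeExpression node that derives the year fields with the substr() formula above could look like the following in the dataflow JSON; the node name computeYears, the source node name extractContacts, and the bounced-date field name Date_Email_Bounced are assumptions.

"computeYears": {
  "action": "computeExpression",
  "parameters": {
    "source": "extractContacts",
    "mergeWithSource": true,
    "computedFields": [
      {
        "name": "openedYr",
        "type": "Text",
        "label": "Opened Year",
        "saqlExpression": "substr('Date_Email_Opened', len('Date_Email_Opened') - 3, 4)"
      },
      {
        "name": "bouncedYr",
        "type": "Text",
        "label": "Bounced Year",
        "saqlExpression": "substr('Date_Email_Bounced', len('Date_Email_Bounced') - 3, 4)"
      }
    ]
  }
}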
q = load "zds_processedYears"; q_B = filter q by 'OpenedYear' in ["2018", "2019"]; q_C = filter q by 'OpenedYear' in ["2018", "2019"] && ('openedIn2020' == "false" or 'bouncedIn2020' =="true"); result = group q by all full, q_B by all full, q_C by all; result = foreach result generate count(q) as 'RawData', unique(q_B.'Contact_Id') as 'EmailSent2yrs', unique(q_C.'Contact_Id') as 'SentButNo2020OrBounced2020', unique(q_C.'Contact_Id') / unique(q_B.'Contact_Id') as 'lapsedPercentage'; To calculate the elapsed # of days between a date field in an Einstein dataset and a string date field, use the daysBetween and toDate functions. The number multipliers have to do with epoch seconds.. Thanks for Pedro Gagliardi for his blog "Mastering Dates on Einstein Analytics using epochs". Here are the calculations.
1 hour = 3,600 seconds
1 day = 86,400 seconds
1 week = 604,800 seconds
1 month (30.44 days) = 2,629,743 seconds
1 year (365.24 days) = 31,556,926 seconds

Here is a computeExpression transformation in a dataflow, with a computed field of type Numeric:

daysBetween( toDate('hdr2OLI.user2Oppty.CreatedDate_sec_epoch'), toDate( ((string_to_number(substr("09/01/2020",1,2)) - 1) * 2629742) + ((string_to_number(substr("09/01/2020",4,2)) - 1) * 86400) + ((string_to_number(substr("09/01/2020",7,4)) - 1970) * 31556926) ) )

If you want to apply formatting to an Einstein dataset, go to the dataset > Edit Dataset, locate the 'Extended Metadata File' at the top right, and download it. Paste it into an online JSON editor to make your life easier, then add the formatting there. After saving it, go back to Edit Dataset and replace the JSON with the one you just edited. Going forward, any dashboard or lens that uses this dataset will default to the settings you specified in the XMD. The code below takes the skeleton XMD and adds formatting to two fields, GP$ and TotValue. The initial XMD had an empty measures list; the added measure entries turn those fields from something like 43,203.11 into $43,203 in any future dashboards.
{"dataset":{},"dates":[],"derivedDimensions":[],"derivedMeasures":[],"dimensions":[],"measures":[ { "field": "TotValueUSD", "format": { "customFormat": "[\"$#,###,###\",1]" }, "label": "Tot Value (USD)", "showInExplorer": true },{ "field": "GPDollarsUSD", "format": { "customFormat": "[\"$#,###,###\",1]" }, "label": "GP$ (USD)", "showInExplorer": true}],"organizations":[],"showDetailsDefaultFields":[]} Data Science 101
Great video by Salesforce about the different model metrics in Einstein Discovery. Precision, recall, accuracy, and the versatile F1 score (the harmonic mean of precision and recall) are discussed. A must-watch for aspiring data scientists. https://salesforce.vidyard.com/watch/5UpTbk6D24GdBzyZof2dWf?video_id=8801025

Here are the steps for creating a Salesforce Einstein external connector, which lets you connect to an external instance of Salesforce. It is a great way to achieve roll-up reporting, or to develop in a sandbox while still having fresh data for Discovery development.
1) In Analytics Studio, go to Data Manager > Connect.
2) Create a connection and find the appropriate connector (Heroku, Azure, etc.); for this use-case, pick the Salesforce external connector.
3) Go to the destination instance and generate a security token by clicking your user profile > Settings > 'Reset My Security Token'.
4) Wait for the email and copy the security token (e.g. 8ItzDv0matM8stuyuhT5uXXXs).
5) In the 'Set up your connection' dialog box, enter a connection name, developer name, and description, plus the username and password you use to log in to the destination instance.
6) Append the security token to your password. So if your password is 'mypassword', that box should contain 'mypassword8ItzDv0matM8stuyuhT5uXXXs'.
7) Click 'Save and Test' and hope for a 'connection successful' message.
8) Once the connector is up and running, add the objects that you need and their fields. Run the connector or schedule it to run periodically.
9) In a dataflow, create a digest node and pick the connector name and object from the list (see the digest node sketch at the end of this section).

In fact, we found that people embrace AI's recommendations as long as AI works in partnership with humans. For instance, in one experiment, we framed AI as augmented intelligence that enhances and supports human recommenders rather than replacing them. The AI-human hybrid recommender fared as well as the human-only recommender even when experiential and sensory considerations were important. These findings are important because they represent the first empirical test of augmented intelligence that focuses on AI's assistive role in advancing human capabilities, rather than as an alternative to humans, which is how it is typically perceived.
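Referring back to step 9, a digest node that pulls a connected object into a dataflow could look roughly like the following; the node name, the connection name ExternalSFDC, and the field list are placeholders for whatever you configured in your connector.

"digestExternalOppty": {
  "action": "digest",
  "parameters": {
    "connectionName": "ExternalSFDC",
    "object": "Opportunity",
    "fields": [
      { "name": "Id" },
      { "name": "AccountId" },
      { "name": "Amount" },
      { "name": "CloseDate" }
    ]
  }
}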