In this informative and engaging video, Daniel Peter, Salesforce Practice Lead at Robots and Pencils, offers practical, actionable tips on data chunking for massive organizations. Peter first identifies the challenge of querying large amounts of data, which typically falls into one of two areas: returning a large number of records, specifically when Salesforce limits query results, and finding a small subset of relevant data within a large repository. Peter identifies the user pain points in both of these cases.
Peter then breaks down various methods for holding large volumes of data in preparation for querying and analysis. He surveys container and batch toolkit options that users should weigh before proceeding with data chunking and analysis.
In the main portion of the talk, Peter describes data chunking. He offers a step-by-step demonstration of how data chunking, and specifically PK chunking, works in Salesforce. He then offers tips developers can use to decide which method of PK chunking is most appropriate for their current project and dataset, and wraps up by further clarifying how PK chunking applies in the Salesforce context.
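To make the core idea concrete (this is an illustration, not Peter's implementation), PK chunking splits a table's primary-key range into fixed-size sub-ranges so that each query scans only a bounded slice of the data. A minimal sketch in Python, assuming numeric IDs for simplicity (real Salesforce IDs are base-62 strings):

```python
def pk_chunk_ranges(min_id, max_id, chunk_size):
    """Yield half-open (lo, hi) ID ranges covering [min_id, max_id]."""
    lo = min_id
    while lo <= max_id:
        hi = min(lo + chunk_size, max_id + 1)
        yield (lo, hi)
        lo = hi

# Each range becomes one small, independent query, e.g. in SOQL terms:
#   SELECT Id, Name FROM Account WHERE Id >= :lo AND Id < :hi
queries = [
    f"SELECT Id FROM Account WHERE Id >= {lo} AND Id < {hi}"
    for lo, hi in pk_chunk_ranges(1, 1_000_000, 250_000)
]
```

Because each chunk is bounded by the primary key, the queries can run in parallel and none of them risks hitting a result-size limit on its own.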
This talk will interest anyone who regularly queries large amounts of data or needs to find relevant results buried in a sizeable amount of irrelevant data. Peter gives Salesforce users the tools they need to choose a pathway for analysis, which may involve the AJAX Toolkit with Visualforce, Batch Apex, or other approaches built on a query locator or, alternatively, on a base primary key. Peter leads users to the questions they should ask before committing to a method, such as whether their data shows high or low levels of fragmentation.
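The fragmentation question matters because deleted records leave gaps in the ID sequence: highly fragmented data makes fixed-size PK ranges return very uneven record counts. A hypothetical sketch of how one might estimate that fragmentation (the function name and approach are illustrative assumptions, not from the talk):

```python
def fragmentation_ratio(ids):
    """Fraction of the spanned ID range that is empty (gaps from deletions)."""
    ids = sorted(ids)  # illustrative: assumes numeric, non-empty IDs
    span = ids[-1] - ids[0] + 1
    return 1 - len(ids) / span

dense = fragmentation_ratio(range(1, 101))    # contiguous IDs: no gaps
sparse = fragmentation_ratio([1, 50, 1000])   # mostly gaps
```

A low ratio suggests fixed-size PK chunks will be evenly filled; a high ratio suggests many chunks will come back nearly empty, which may favor a different chunking strategy.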