Support log analysis that is at least equivalent to that available in VSCode

Issue #2113 new
Phil W created an issue

VSCode includes log analysis functionality that closely resembles the Analysis perspective for debug logs in the Developer Console. The advantage of the VSCode feature is that it can be applied to any log file, not just one captured while the Developer Console is open.

Without this feature some of our developers are forced to use VSCode at times. Ideally we would be able to do everything we want in Illuminated Cloud instead.


Comments (8)

  1. Scott Wells repo owner

    Hi, Phil. What specific aspects of these other log analysis tools are your developers needing? Overall I prefer to understand the problem that needs to be solved rather than trying to replicate an existing solution. Please include one or more use cases, and of course feel free to refer to Dev Console, VS Code, or whatever other tool is applicable as a concrete example of how each use case can be addressed. I've just found that in the past when you "clone" an existing implementation, you also risk cloning any shortcomings it might have vs. taking a fresh look at the actual problem that needs to be solved.

  2. Phil W reporter

    The aspects most useful to me would be:

    1. Analysis against some arbitrary log file (with linking into source within the IDE when the source is available).
    2. Timeline visualization, allowing identification of long-running steps (e.g. the amount of time spent in a DB query or some Apex method/processing sequence) and navigation into the source.
    3. Execution tree with performance (CPU usage) rollup/breakdown.

    I’d like the timeline to take me to the point in the execution tree where the given operation was initiated. This may not be exact: the timeline may show a whole sequence of Apex code flow as a single processing block, so it may land me at an entry point rather than the specific line of code partway through the processing.
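    To make the "arbitrary log file" use case concrete, here is a minimal sketch of the first parsing step such a feature would need. It assumes only the standard debug log entry shape (`HH:MM:SS.mmm (elapsed-ns)|EVENT|...`) and the paired CODE_UNIT_STARTED/CODE_UNIT_FINISHED events; the function name and tuple layout are illustrative, not any actual IC or VSCode implementation.

```python
import re

# Debug log entries look like:
#   "12:00:00.001 (2000)|CODE_UNIT_STARTED|[EXTERNAL]|MyTrigger on Account"
# where the parenthesized value is elapsed nanoseconds since the start of the request.
ENTRY = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d+ \((\d+)\)\|(CODE_UNIT_STARTED|CODE_UNIT_FINISHED)\|(.*)$")

def parse_timeline(log_text):
    """Return (name, start_ns, end_ns, depth) spans from paired CODE_UNIT events."""
    spans, stack = [], []
    for line in log_text.splitlines():
        m = ENTRY.match(line)
        if not m:
            continue  # this sketch ignores SOQL/DML/limit lines
        elapsed_ns, event, detail = int(m.group(1)), m.group(2), m.group(3)
        if event == "CODE_UNIT_STARTED":
            # The last |-separated field is the human-readable unit name.
            stack.append((detail.split("|")[-1], elapsed_ns))
        elif stack:
            name, start_ns = stack.pop()
            spans.append((name, start_ns, elapsed_ns, len(stack)))
    return spans

log = "\n".join([
    "12:00:00.000 (1000)|CODE_UNIT_STARTED|[EXTERNAL]|MyTrigger on Account",
    "12:00:00.001 (2000)|CODE_UNIT_STARTED|[EXTERNAL]|Workflow:Account",
    "12:00:00.002 (5000)|CODE_UNIT_FINISHED|Workflow:Account",
    "12:00:00.003 (9000)|CODE_UNIT_FINISHED|MyTrigger on Account",
])
print(parse_timeline(log))
# → [('Workflow:Account', 2000, 5000, 1), ('MyTrigger on Account', 1000, 9000, 0)]
```

    The resulting spans carry everything a timeline or execution tree needs: who ran, when, for how long, and at what nesting depth.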

  3. Eric Kintzer

    Here’s my take: although the existing tools are useful, they are difficult to use when the problem space is large, and they could benefit from some alternate views to aid in diagnosing performance and limits-breaching issues. To this point, I’d like to see the tools extended to allow focus on low-hanging-fruit actionable items:

    • Easy identification of where/which objects are undergoing multiple Save procedures (in the sense of Triggers and Order of Execution). Often this is the source of too many SOQL queries or too much CPU, and it can be eliminated by converting to before-save flows or smarter recursion prevention in Apex. This gets at the timeline point raised by Phil: I’d like to see this as Trigger on Account → Workflow A → Update Account → Flow F → Trigger on 3 Contacts → etc. Let’s see the forest, not the trees from which we have to infer the forest.
    • Leaderboard of each trigger, workflow, process builder, and flow in terms of SOQL and CPU consumed. This would show both the total from the entity’s start to its end and the resources consumed excluding the contribution of its components.
    • Candidates for moving to async processing (e.g. Flows that call Apex that doesn't return any values, Flows that call subflows that don't return any values)
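    The leaderboard bullet distinguishes an entity's total consumption from its consumption net of its components. Over nested spans of the form (name, start_ns, end_ns, depth), that split could be computed roughly as follows; the input shape and function name are assumptions for illustration, and elapsed time stands in for CPU/SOQL counts.

```python
from collections import defaultdict

def leaderboard(spans):
    """Rank code units by total elapsed time; 'self' excludes directly nested children.

    spans: (name, start_ns, end_ns, depth) tuples, one per executed code unit.
    """
    totals, selfs = defaultdict(int), defaultdict(int)
    for name, start, end, depth in spans:
        duration = end - start
        # Elapsed time of units nested exactly one level deeper inside this one.
        child_time = sum(e - s for _, s, e, d in spans
                         if d == depth + 1 and start <= s and e <= end)
        totals[name] += duration
        selfs[name] += duration - child_time
    return sorted(((totals[n], selfs[n], n) for n in totals), reverse=True)

spans = [("MyTrigger on Account", 1000, 9000, 0), ("Workflow:Account", 2000, 5000, 1)]
print(leaderboard(spans))
# → [(8000, 5000, 'MyTrigger on Account'), (3000, 3000, 'Workflow:Account')]
```

    Here the trigger's total is 8000 ns but its self time is only 5000 ns, because 3000 ns belong to the workflow it caused, which is exactly the "without contribution of components" figure.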

    Put another way, the goal of an analyzer should be to present the analysis in an abstraction closest to the top-level entities that the SFDC dev/admin works with, and where the problems lend themselves to the common solutions:

    • Deferring work to async transactions
    • Reducing the number of times something goes through a Save operation
    • A method that is exceptionally CPU expensive

    I’m less interested in finding SOQL or DML in for loops because I never let that happen as a developer/PR reviewer.

  4. Scott Wells repo owner

    Thanks for the additional info, Eric. It's very helpful. I think that quite a few of these are already in place, though this discussion does raise the question of how discoverable/understandable they are. For example, here's a leaderboard showing the top resource consumers for DML statements, SOQL queries, and Apex methods/triggers:

    Issue_2113_Leaderboard.png

    It's admittedly missing some of the other code unit types such as workflows, flows, PBs, etc., so that certainly goes on my TODO list. You can then easily review any of these up or down the call stack to understand how that DML/query is being executed or why that method/trigger is consuming so much of a certain resource (transitively):

    Issue_2113_Show_Callers_Callees.png

    which produces the following, which can be further reviewed for any of the (supported) consumed resources:

    Issue_2113_Merged_Callees.png

    I agree that a timeline view is particularly useful for a bird's eye inspection of what happened in a process, of course showing all executed code unit types and supporting easy navigation to the associated log entries (in raw or tree view) and therefore source code.

    So I think based on this information, I'm going to focus initially on:

    1. Ensuring that all code unit types are properly represented in the log analyzer.
    2. Adding a third log analysis tab, Timeline, alongside Raw and Tree that works as described above.

    Not 100% sure when this will happen, but I should be able to slot it into my work queue in the relatively near term.
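    As a rough illustration of what a Timeline tab might compute (not the actual implementation), nested code-unit spans like those discussed above can be rendered as offset, duration-scaled bars; the span shape is an assumption carried over from the earlier discussion.

```python
def ascii_timeline(spans, width=40):
    """Draw one bar per code unit: offset by start time, scaled to duration.

    spans: (name, start_ns, end_ns, depth) tuples; children sort after parents.
    """
    t0 = min(s for _, s, _, _ in spans)
    total = (max(e for _, _, e, _ in spans) - t0) or 1
    rows = []
    for name, s, e, depth in sorted(spans, key=lambda sp: (sp[1], sp[3])):
        offset = (s - t0) * width // total
        bar = max(1, (e - s) * width // total)  # never drop a unit entirely
        rows.append(" " * offset + "#" * bar + " " + name)
    return "\n".join(rows)

spans = [("MyTrigger on Account", 1000, 9000, 0), ("Workflow:Account", 2000, 5000, 1)]
print(ascii_timeline(spans, width=8))
# ######## MyTrigger on Account
#  ### Workflow:Account
```

    A real timeline view would of course be graphical and clickable, but the underlying mapping from spans to bars is essentially this.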

  5. Eric Kintzer

    Scott - thanks for this detail. You hit the nail on the head with “bird's-eye view”: in my experience, one needs to look at things at a high level to realize where unnecessary (or perhaps surprising) work is occurring so that optimization can happen. This would look like some sort of graph where, starting from some initial stimulus (a Save on a record page), all sorts of stuff fans out and cascades - a mix of Apex + configurable automation.

    Another way to think about it: if one were documenting a business operation, one might use a series of blocks and arrows (e.g. Lucidchart) or a UML sequence diagram. These visualizations enable one to understand the conceptual logic flow; they are your initial point of entry to understanding. If one could put CPU time/SOQL/DML annotations on top of such visualizations, it would become easier to see where to focus one’s optimization time. From there, the drill-down into call stacks etc. becomes more meaningful. And such metrics would show both values directly attributable to the element on its own and values attributable to everything resulting from that element’s side effects (cascading, descendant operations).

    Your challenge is to reverse engineer a graphical visualization out of a log file and abstract it in a way that makes it useful.

    • Hide elements that contribute little to nothing in terms of resources. For example, a DML operation might cause 15 workflows to execute, but if only one of them does a field update that triggers DML that in turn triggers something else (a Flow or Apex trigger), that might be the only workflow worth surfacing in the visualization (e.g. show as WFR XXX (CPU = x) + 14 others “negligible”). Apex code that is called 1000 times but contributes 1% to overall CPU is likely irrelevant.
    • Don’t let the top-level visualization get bogged down in class methods calling methods calling other methods, etc. Let that be available as a drill-down with an easy way to come back to the top-level visualization.
    • Less is more - too many numbers/stats/etc. at the top level (or too much scrolling/clicking) means the trees are obscuring the forest.
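    The "hide negligible contributors" idea amounts to a threshold filter over a node's children, rolling everything below the cutoff into a single summary entry. A minimal sketch, where the (name, cpu) pairs and the 1% cutoff are purely illustrative:

```python
def collapse_negligible(children, threshold=0.01):
    """Keep children at/above threshold of the group's total; roll the rest into one node.

    children: (name, cpu) pairs for the units directly under one parent node.
    """
    total = sum(cpu for _, cpu in children) or 1
    kept = [(name, cpu) for name, cpu in children if cpu / total >= threshold]
    hidden = [cpu for _, cpu in children if cpu / total < threshold]
    if hidden:
        kept.append((f"{len(hidden)} others (negligible)", sum(hidden)))
    return kept

workflows = [("WFR A", 900), ("WFR B", 4), ("WFR C", 3), ("WFR D", 93)]
print(collapse_negligible(workflows))
# → [('WFR A', 900), ('WFR D', 93), ('2 others (negligible)', 7)]
```

    The same filter could apply at every level of the drill-down, so each view only ever shows the handful of nodes worth investigating.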

    Anyway - all this is easier for me to say than to put into actual practice, so I appreciate whatever you come up with.
