@TestSetup methods getting counted by code coverage results

Issue #826 resolved
johndesantiago
created an issue

I notice on occasion that some of my test classes are getting counted as part of the code coverage metrics. I also noticed that the number of lines being counted matched the lines of code in my @TestSetup annotated method. I commented out that method and the code coverage ignored the class as expected. I re-enabled that method and once again the class was included in the code coverage metrics.

I have other test classes that use @TestSetup and are not getting counted in code coverage. So I renamed my class and ran the tests, and that resolved the issue. I then renamed the class back to what it was, and the issue remained resolved.

I have since managed to figure out how to reproduce this scenario. My assumption is that because the class was part of a test run where it had coverage results, the class name was cached. For some reason, @TestSetup methods are not ignored the way @IsTest methods are, so the class ends up getting counted as part of the code coverage metrics.

Steps to reproduce:

  1. Create a class with any method you want. It just needs some executable code so the class is counted in the code coverage results.

  2. Run all unit tests and make sure that the class from Step 1 shows up in code coverage results.

  3. Modify the class from Step 1 to turn it into a test class, and add a @TestSetup method with any number of lines of code along with a test method to execute (see attached files).

  4. Run tests and you will see that the class shows up in the code coverage results. Remove the @TestSetup method and you will see the issue resolves itself.

  5. If you add the @TestSetup method back in, the class will start to be counted again. The only way to resolve it is to rename the class using the refactoring tools (which, to my understanding, is basically a create-new-class-and-delete-old-class mechanism).

  6. Run the unit tests and the class will once again show up in the code coverage results.
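The steps above can be sketched as a minimal Apex test class; the class, method, and data names here are illustrative, not taken from the attached files:

```apex
@IsTest
private class CoverageReproTest {

    // A @TestSetup method with at least one executable statement is
    // what appears to trigger the reported behavior: the test class
    // itself shows up in the coverage results.
    @TestSetup
    static void setupData() {
        Account acct = new Account(Name = 'Coverage Repro');
        insert acct;
    }

    @IsTest
    static void verifySetupData() {
        Account acct = [SELECT Id, Name FROM Account LIMIT 1];
        System.assertEquals('Coverage Repro', acct.Name);
    }
}
```

Per Steps 4 and 5, commenting out `setupData()` and re-running the tests should make the class disappear from the coverage results, and restoring it should make the class reappear.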

Comments (9)

  1. Scott Wells repo owner

    John, I'm curious as to whether you see the same behavior when running tests and measuring coverage from outside of IC. Just wondering if this is a Salesforce issue or something in the way that IC is calling the APIs and/or interpreting the results. Have you tried reproducing the issue in isolation of any tooling?

  2. johndesantiago reporter

    I was thinking the same thing when I was researching. I hadn't had a chance to reproduce in another environment but will do that and let you know what I find out.

  3. Scott Wells repo owner

    Sounds good. Let me know what you find. IC is just calling the standard APIs for running tests and collecting coverage metrics, just integrated into the corresponding IDE features. It shouldn't be interpreting or changing the resulting values..."shouldn't be" being the key part of that statement, though! If you do find differences between the two either IC is doing something wrong or the other system, e.g., Developer Console, is doing its own post-processing. I'll be curious to know if either of those is happening.

  4. Doug Ayers

    This is a known issue with Salesforce:

    The issue is with Salesforce returning code coverage metrics for methods annotated with @TestSetup. I tested in both IC2 and Developer Console with the same results in both (as expected, per Scott's note that IC2 simply displays whatever the Salesforce API returns).

    If there are no statements in a @TestSetup method, then we get the expected result: the test class is NOT flagged as 0% coverage.

    If there is at least 1 statement in a @TestSetup method, then Salesforce sends back 0% code coverage for the test class.
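For illustration, the distinction Doug describes comes down to whether the @TestSetup body contains any executable statements. The two variants below are hypothetical sketches (Apex allows only one @TestSetup method per test class, so these would be alternative versions of the same method, not siblings):

```apex
// Variant A: no statements in the body.
// Coverage behaves as expected; the test class is not flagged.
@TestSetup
static void setupData() {
    // intentionally empty
}

// Variant B: a single statement in the body.
// Per this thread, this is enough for Salesforce to report
// the test class at 0% coverage.
@TestSetup
static void setupData() {
    insert new Account(Name = 'Triggers the known issue');
}
```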

    Now, this doesn't affect the actual code coverage metrics required to deploy code to production; it's just "inconvenient" and "noisy".

    Developer Console when the @TestSetup method has statements

    1.png

    Developer Console when the @TestSetup method does not have statements

    2.png

  5. Scott Wells repo owner

    Thanks for providing the details, Doug. I thought I'd read that was the case, but it's good to see some concrete references on the topic.

    It certainly wouldn't be hard to perform some level of filtering in IC to remove these from the rollups. I'm just not sure if that's the right thing to do since IC's reported values would be different from other tools' (including Salesforce's) reported values. What are you guys' thoughts?

  6. Doug Ayers

    Continue to rally folks to vote “me too” on that Known Issue so Salesforce fixes the root problem. More important and cooler features to work on ;)

  7. Scott Wells repo owner

    Yep, that's my thought as well. In fact, I have some really cool stuff already teed up for the next IC2 build! I'd definitely prefer to keep my focus on that stuff.

  8. johndesantiago reporter

    Thanks Doug for your feedback on this one. I hadn't had a chance to replicate using other tools. I agree that this is something we should leave to Salesforce to address.

    Thanks Scott for looking into this as well.
