Fine-grained differentiation of coverage context #1427

Description

@tysonclugg

Is your feature request related to a problem? Please describe.
It's easy to increase coverage with some testing strategies (e.g. SnapshotTest), but the overall stats then mask other strategies (e.g. BDD) whose coverage is often far lower. I'd like coverage to report not just the high-level figure, which could well be 100% across all test strategies combined, but finer-grained coverage broken down by execution context.

The problem I'm having is that I can't filter the test results down to a subset of execution contexts.
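
For reference, recent Coverage.py releases can already record dynamic contexts such as one per test function, which gets partway there; the gap is that those labels identify individual tests rather than testing strategies. A minimal sketch of that existing mechanism, using the real set_option() API:

import coverage

cov = coverage.Coverage()
# Built-in dynamic contexts: one label per test function.
cov.set_option("run:dynamic_context", "test_function")
cov.start()
# ... run the test suite here ...
cov.stop()
cov.save()
# The data now carries per-test contexts, but nothing groups them by
# strategy (SnapshotTest vs. BDD), which is the point of this request.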

Describe the solution you'd like
To be able to provide additional context from within the code under test so that coverage can be differentiated in a more meaningful way. Perhaps something like this:

import coverage
import django.test
import snapshottest


class APITestCase(snapshottest.TestCase, django.test.TestCase):
    def test_api_me(self):
        """Testing the API for /me"""
        # extra_context() as used here is the proposed API; nothing like it exists yet
        with coverage.extra_context("SnapshotTest"):
            my_api_response = self.client.get('/me')
            self.assertMatchSnapshot(my_api_response)
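
For completeness, the proposed behaviour can be roughly approximated today with the existing Coverage.current() and switch_context() APIs. A minimal sketch; note that switch_context() replaces the active dynamic context rather than layering an extra label on top, and (as far as I can tell) there is no public getter with which to restore the previous context:

import contextlib

import coverage


@contextlib.contextmanager
def extra_context(label, reset_to=""):
    # Approximates the proposed coverage.extra_context() with the real
    # switch_context() API. switch_context() replaces the dynamic
    # context outright, so on exit we reset to `reset_to` rather than
    # restoring whatever context was active before.
    cov = coverage.Coverage.current()  # the running instance, if any
    if cov is not None:
        cov.switch_context(label)
    try:
        yield
    finally:
        if cov is not None:
            cov.switch_context(reset_to)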

Describe alternatives you've considered

  • Running different subsets of the entire test suite with different contexts results in expensive setup/teardown tasks being run multiple times.
  • Perhaps hooking into unittest.TestCase.subTest could yield results without depending on Coverage.py-specific APIs; a rough sketch follows this list.
  • There is also the opportunity to integrate with the OpenTelemetry Python API to provide a much richer set of execution contexts, again without Coverage.py-specific APIs.
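
As a rough sketch of the subTest idea (ContextSubTestMixin is a hypothetical name; it routes subTest labels into dynamic contexts via the real switch_context() API, and doesn't restore the previous context when the subtest exits):

import unittest

import coverage


class ContextSubTestMixin:
    # Hypothetical mixin: label coverage data with the subTest message.
    def subTest(self, msg=None, **params):
        cov = coverage.Coverage.current()
        if cov is not None and msg is not None:
            cov.switch_context(str(msg))
        if msg is None:
            return super().subTest(**params)
        return super().subTest(msg, **params)


class APITestCase(ContextSubTestMixin, unittest.TestCase):
    def test_api_me(self):
        with self.subTest("SnapshotTest"):
            ...  # assertions here are recorded under that context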

Additional context
I'd like to ensure certain modules (e.g. **/viewsets.py) have very high coverage within particular execution contexts.
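
Once contexts are recorded, the existing reporting API can already slice by them; a sketch using the real contexts and include parameters (the context label and file pattern come from the examples above):

import coverage

cov = coverage.Coverage()
cov.load()
# `contexts` takes regexes matched against recorded context names;
# `include` limits the report to matching source files.
cov.report(contexts=["SnapshotTest"], include=["**/viewsets.py"])

The same filtering is available on the command line via coverage report --contexts=SnapshotTest.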
