Description
Here's what I came up with while trying to replicate mypy's own test cases, which interleave the expected errors in the source code like so:
https://github.com/ikonst/pynamodb-mypy/blob/9453404f576c4ead04749d61eb041e10bec13229/tests/test_plugin.py
The implementation is here:
https://github.com/ikonst/pynamodb-mypy/blob/9453404f576c4ead04749d61eb041e10bec13229/tests/mypy_helpers.py
When a test fails, the assertion output looks like this:
```
assert actual == expected
E   assert 'from pynamod...ibute "lower"' == 'from pynamodb..._attr.lower()'
E     from pynamodb.attributes import UnicodeAttribute
E     from pynamodb.models import Model
E
E     class MyModel(Model):
E         my_attr = UnicodeAttribute(null=True)
E
E     reveal_type(MyModel.my_attr)  # E: Revealed type is 'pynamodb.attributes._NullableAttribute[pynamodb.attributes.UnicodeAttribute, builtins.str]'
E     reveal_type(MyModel().my_attr)  # E: Revealed type is 'Union[builtins.str*, None]'
E   - MyModel().my_attr.lower()  # E: Item "None" of "Optional[str]" has no attribute "lower"
E   + MyModel().my_attr.lower()
```
Of course, this relies on pytest's assertion rewriting, which a pytest plugin would normally register on the helper module:
https://github.com/ikonst/pynamodb-mypy/blob/9453404f576c4ead04749d61eb041e10bec13229/tests/__init__.py#L3
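For reference, registering a helper module for assertion rewriting is a one-liner with pytest's public API (the module name `tests.mypy_helpers` here just mirrors the linked layout):

```python
# tests/__init__.py
# Ask pytest to apply assertion rewriting to the helper module, so a plain
# `assert actual == expected` inside it produces the rich line-by-line diff
# shown above instead of a truncated repr comparison.
import pytest

pytest.register_assert_rewrite("tests.mypy_helpers")
```

This must run before `tests.mypy_helpers` is first imported, which is why it lives in the package `__init__.py`.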
There are two advantages to interleaving the errors in the test code:
- changing the test code does not require updating line numbers
- updating the test code is easier: you diff the actual program against the "expected program" (i.e. data-driven testing)
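To illustrate the idea, here is a minimal sketch of such a helper, independent of the linked implementation (all names, the `# E:` comment convention aside, are my own): strip the `# E:` comments to get the bare program, run mypy on it, re-attach mypy's reported errors as comments, and compare the result against the annotated original.

```python
import re
from typing import Dict, List

# Trailing "# E: <message>" comment marking an expected mypy error.
_EXPECTED_RE = re.compile(r"\s*# E: .+$")
# A "file:line: error: message" line from mypy's stdout.
_MYPY_LINE_RE = re.compile(r"[^:]+:(\d+): error: (.*)$")

def strip_expected_errors(annotated: str) -> str:
    """Remove the '# E:' comments, leaving the program mypy should check."""
    return "\n".join(_EXPECTED_RE.sub("", line) for line in annotated.splitlines())

def parse_mypy_stdout(stdout: str) -> Dict[int, List[str]]:
    """Group mypy's error messages by source line number."""
    errors: Dict[int, List[str]] = {}
    for line in stdout.splitlines():
        m = _MYPY_LINE_RE.match(line)
        if m:
            errors.setdefault(int(m.group(1)), []).append(m.group(2))
    return errors

def interleave_errors(program: str, errors: Dict[int, List[str]]) -> str:
    """Re-attach errors as '# E:' comments on the lines that produced them."""
    out = []
    for lineno, line in enumerate(program.splitlines(), start=1):
        for msg in errors.get(lineno, []):
            line += f"  # E: {msg}"
        out.append(line)
    return "\n".join(out)
```

A test would then run mypy (e.g. via `mypy.api.run`) on the stripped program and do `assert interleave_errors(program, parse_mypy_stdout(out)) == annotated`, letting pytest's assertion rewriting render the mismatch as a program-against-program diff like the one above.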
Have you considered allowing a similar approach?