Change all xfail decorated tests currently xpassing so that they all fail #4516

Closed
@matteo-pallini

Description of the problem

Some tests are expected to fail when certain conditions are met. Since the failure is expected, these tests are marked with the @pytest.mark.xfail decorator.
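For reference, a minimal sketch of the decorator in use (the test body and reason below are made up for illustration):

```python
import pytest

@pytest.mark.xfail(reason="known issue, see #4516")
def test_expected_failure():
    # Reported as XFAIL when it fails. Without strict=True an
    # unexpected pass is only reported as XPASS and does not
    # break the test run.
    assert 0.1 + 0.2 == 0.3  # fails under float arithmetic
```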

However, some xfail tests do not always run under the same conditions: the conditions can change from one run to the next, and from time to time they produce unexpected passes (XPASS) rather than failures.

The check_logcdf check for the normal distribution is an example affected by this behavior (it is not xfailed in the current version of the codebase, but it will be). The non-deterministic component here is introduced by sampling the points to be tested from the distribution's value domain (n_samples is set to 100).
https://github.com/pymc-devs/pymc3/blob/d248a0e94d67f12c67342c54f90f9b72752e1a1b/pymc3/tests/test_distributions.py#L892-L906
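A simplified, hypothetical sketch of the mechanism (sample_test_points is a made-up name, not the actual pymc3 helper): the check draws a random subset of the domain, so a run may simply miss the points that trigger the failure.

```python
import itertools
import random

def sample_test_points(domains, n_samples=100):
    # Hypothetical stand-in for the point selection inside check_logcdf:
    # the full cartesian product of the value/parameter domains is
    # randomly subsampled down to n_samples points. A run that happens
    # to miss the failing points turns an xfail into an unexpected pass.
    points = list(itertools.product(*domains))
    if 0 <= n_samples < len(points):
        points = random.sample(points, n_samples)
    return points
```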

Potential Solution

Change all xfail decorated tests so that they always fail, no matter what conditions are in place. In addition, set all @pytest.mark.xfail tests to @pytest.mark.xfail(strict=True), so that any xfail test that unexpectedly passes turns into an error in the test run.
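A minimal example of the strict marker (test name and reason are illustrative):

```python
import pytest

@pytest.mark.xfail(strict=True, reason="must fail for all sampled points")
def test_always_fails():
    # Reported as XFAIL when it fails, as expected. With strict=True an
    # unexpected pass is reported as FAILED instead of a silent XPASS.
    assert False
```

pytest also offers the xfail_strict ini option to make strict=True the default project-wide, which may be simpler than editing every marker.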

A set of tests that should probably be checked is all the ones that use any sampling, i.e. any test running check_logcdf, check_logp, check_selfconsistency_discrete_logcdf, or check_int_to_1 (and any other test using these functions). Whether these tests can ever pass can be verified by setting n_samples=-1, which runs the check for all possible cases in the domain, as sketched below.
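Continuing the hypothetical sample_test_points sketch above, n_samples=-1 bypasses the subsampling entirely:

```python
# n_samples=-1 skips the random subsample, so every combination in the
# domain is tested and an xfail test that can still pass will be
# caught on every run, not just occasionally.
points = sample_test_points([(-1.0, 0.0, 1.0), (0.5, 1.0)], n_samples=-1)
assert len(points) == 6  # the full 3 x 2 cartesian product
```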
