
Avoid replicated tests for lists of algorithms #136

Open
@wdevazelhes

Description


As suggested by @bellet here #117 (review) (-3), we should avoid as much as possible having different lists of metric learners at different places in the code to parametrize tests. This problem was in the scope of PR #117, but more generally, any test that should be run on a list of estimators should take its list from a common place.
This would make it easier to add new algorithms: they would only need to be added to one master list.
We could also create this list by automatically discovering estimators, inspecting modules, etc.
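For instance, here is a minimal sketch of such automatic discovery (the helper name is hypothetical, and it assumes every learner inherits from `BaseMetricLearner`):

```python
import inspect

import metric_learn
from metric_learn.base_metric import BaseMetricLearner


def discover_metric_learners():
    """Collect every metric learner class exposed at the top level of the package."""
    return [cls for _, cls in inspect.getmembers(metric_learn, inspect.isclass)
            if issubclass(cls, BaseMetricLearner)]
```

With something like this, adding a new algorithm to the package would automatically add it to every test that parametrizes over the list.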
Currently, there is some code outside the scope of PR #117 that could be factorized and would benefit from using a common list:

  • everything that is inside test_fit_transform
  • the beginning of test_sklearn_compat where all estimators are listed in their deterministic form
  • all tests in test_transformer_metric_conversion

Currently, the list of metric learners to use is at the beginning of test_utils.
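Once the list lives in one place, any test file could parametrize over it instead of redefining it. A rough sketch (the name `METRIC_LEARNERS` is hypothetical, and it assumes every estimator in the list fits on plain `(X, y)` data):

```python
import pytest
from sklearn.datasets import load_iris

# Hypothetical master list defined once at the top of test_utils.
from test_utils import METRIC_LEARNERS


@pytest.mark.parametrize('estimator', METRIC_LEARNERS)
def test_fit_transform_returns_one_row_per_sample(estimator):
    X, y = load_iris(return_X_y=True)
    # Any estimator added to the master list is automatically covered here.
    assert estimator.fit_transform(X, y).shape[0] == X.shape[0]
```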

Another point (related to factorizing tests, but a bit separate from the above): we could also share test datasets. For instance, in metric_learn_test, test_fit_transform and test_transformer_metric_conversion each build an iris dataset in a setup class. They could instead use a shared dataset from some common place (for now, test_utils).
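A minimal sketch of what such a shared helper in test_utils could look like (the name `build_iris_dataset` and the fixed shuffling seed are illustrative choices, not the current contents of test_utils):

```python
import numpy as np
from sklearn.datasets import load_iris


def build_iris_dataset(seed=42):
    """Return iris data shuffled with a fixed seed, so every test file
    importing this helper works on exactly the same dataset."""
    X, y = load_iris(return_X_y=True)
    rng = np.random.RandomState(seed)
    perm = rng.permutation(len(X))
    return X[perm], y[perm]
```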
