
Parametrize based on another parameter #4050

Open
@Sup3rGeo

Description

Hi,

This feature request is about a concept that I think pytest is missing. Let's say I have this very simple test:

@pytest.mark.parametrize("param", [1,2,3])
def test(param):
    mysetupcode(param)
    myassertfunc(param)

Now, I want to refactor the setup as a fixture, but still have the same parametrization for both the test and the fixture. In that case, I end up having to create a fixture:

@pytest.fixture(params=[1, 2, 3])
def param(request):
    # This is not a real fixture because it does not run any setup code;
    # it only passes the parameter through
    return request.param

@pytest.fixture
def mysetupcode(param):
    # ... setup code using param
    pass

def test(param, mysetupcode):
    myassertfunc(param)

So the concept pytest is missing is this param fixture: it is not a real fixture, since it runs no code of its own, but a parameter "broadcast" entity, which is different from parametrizing each fixture or test individually.

Another example where this would be useful, with some API suggestions:

datasetA = [data1_a, data2_a, data3_a]
datasetB = [data1_b]
datasetC = [data1_c, data2_c]


# HERE is the missing concept. The following line would be equivalent to:
# @pytest.fixture(params=[datasetA, datasetB, datasetC])
# def dataset(request):
#     return request.param
# You can also think of it as a pytest.mark.parametrize not tied to any function
pytest.parametrize('dataset', [datasetA, datasetB, datasetC])

# Fixtures can request it just like any other fixture
# (effectively, this becomes another way to parametrize them)
@pytest.fixture
def fixture1(dataset):
    setup(dataset)

# Tests can use this concept as well, just as we would with fixtures
def test_one(dataset):
    ...

# The following line is almost as if we are parametrizing the parametrize,
# so test_data will be doubly parametrized:
# test_data[datasetA-data1_a]
# test_data[datasetA-data2_a]
# test_data[datasetA-data3_a]
# test_data[datasetB-data1_b]
# test_data[datasetC-data1_c]
# test_data[datasetC-data2_c]
@pytest.mark.parametrize('data', pytest.parameter('dataset'))
def test_data(data):
    ...  # do the test
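For what it's worth, something close to this can already be approximated at collection time with the `pytest_generate_tests` hook in a `conftest.py`, which can broadcast the same parametrization to every test or fixture that requests a given name. A sketch under that assumption (the dataset names and string values are made up):

```python
# conftest.py sketch: approximating pytest.parametrize('dataset', ...)
# with today's pytest_generate_tests collection hook.

DATASETS = {
    "datasetA": ["data1_a", "data2_a", "data3_a"],
    "datasetB": ["data1_b"],
    "datasetC": ["data1_c", "data2_c"],
}

def pytest_generate_tests(metafunc):
    # Broadcast whole datasets to anything that requests `dataset`
    if "dataset" in metafunc.fixturenames:
        metafunc.parametrize("dataset", list(DATASETS.values()),
                             ids=list(DATASETS.keys()))
    # Flatten datasets for tests that request a single `data` item,
    # producing ids like test_data[datasetA-data1_a]
    if "data" in metafunc.fixturenames:
        pairs = [(name, item)
                 for name, items in DATASETS.items()
                 for item in items]
        metafunc.parametrize("data", [item for _, item in pairs],
                             ids=[f"{name}-{item}" for name, item in pairs])
```

This covers the "doubly parametrized" case because the flattening happens inside the hook, before any test runs; it does not, however, give a single declaration shared across files the way the proposed `pytest.parametrize` would.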

This feature was discussed in TvoroG/pytest-lazy-fixture#25 and #349 (comment).

I decided to open a new feature request because I believe it is a different use case/implementation from #349, although they might be related. The difference is that we don't need to worry about the issue @nicoddemus mentioned there:

Under the current design, pytest needs to generate all items during the collection stage. Your examples generate new items during the session testing stage (during fixture instantiation), which is not currently possible.

In this proposal, no code needs to run for all the parameters to be in place at collection time.

Labels: topic: parametrize (related to @pytest.mark.parametrize), type: proposal (proposal for a new feature, often to gather opinions or design the API around the new feature)
