
[Tracker] modularize inferencing during and after training in the example scripts #6545

Closed
@sayakpaul

Description


We provide support for running validation inference during and after training in our officially maintained training examples. This is very helpful for keeping track of training progress.

We could modularize some bits in the example scripts to reduce the LoC.

The `train_lcm_distill_lora_sdxl.py` script already does this:

```python
def log_validation(vae, args, accelerator, weight_dtype, step, unet=None, is_final_validation=False):
```
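
For context, here is a minimal sketch of what such a `log_validation` helper typically does, assuming an SDXL LoRA setup. This is not the actual implementation from the script: the `args.validation_prompts` argument, the four-step inference, and the tracker keys are all illustrative assumptions.

```python
# Minimal sketch of a modularized log_validation helper; names marked as
# "assumed" below are illustrative, not taken from the actual script.
import torch
from diffusers import StableDiffusionXLPipeline


def log_validation(vae, args, accelerator, weight_dtype, step, unet=None, is_final_validation=False):
    if not is_final_validation:
        # During training: reuse the in-memory UNet being trained.
        unet = accelerator.unwrap_model(unet)
        pipeline = StableDiffusionXLPipeline.from_pretrained(
            args.pretrained_model_name_or_path,
            vae=vae,
            unet=unet,
            torch_dtype=weight_dtype,
        )
    else:
        # After training: load the saved LoRA weights from the output directory.
        pipeline = StableDiffusionXLPipeline.from_pretrained(
            args.pretrained_model_name_or_path, vae=vae, torch_dtype=weight_dtype
        )
        pipeline.load_lora_weights(args.output_dir)

    pipeline = pipeline.to(accelerator.device)
    pipeline.set_progress_bar_config(disable=True)

    generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None
    images = [
        # Few-step inference as is typical for LCM; the step count is an assumption.
        pipeline(prompt, num_inference_steps=4, generator=generator).images[0]
        for prompt in args.validation_prompts  # assumed CLI argument
    ]

    # Log intermediate and final results under different keys so they stay separable.
    tracker_key = "test" if is_final_validation else "validation"
    for tracker in accelerator.trackers:
        if tracker.name == "wandb":
            import wandb

            tracker.log({tracker_key: [wandb.Image(img) for img in images]}, step=step)
    return images
```

Keying the intermediate and final runs differently (e.g., `validation` vs. `test`) keeps the two sets of results separable in the experiment tracker, which is the main reason for the `is_final_validation` flag.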

It would be nice to follow something similar for the rest of the scripts too. Here's a handy list of the scripts where we'd like to incorporate this change:

Feel free to comment here if you're interested.

Also, when opening PRs, please target one example at a time and tag me in them.
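
For reference, a hedged sketch of how such a helper might be wired into a training script; `global_step` and `args.validation_steps` are illustrative names, not necessarily the ones used in the scripts:

```python
# Inside the training loop: run validation periodically (assumed argument names).
if accelerator.is_main_process and global_step % args.validation_steps == 0:
    log_validation(vae, args, accelerator, weight_dtype, step=global_step, unet=unet)

# After training completes: run a final pass with the saved weights.
if accelerator.is_main_process:
    log_validation(vae, args, accelerator, weight_dtype, step=global_step, is_final_validation=True)
```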
