
Running LoRA fine-tuned versions of LLaMA 3.2 using transformers.js #8

Open
@younestouati

Description


Hello,

I am wondering if it’s possible to run a LoRA fine-tuned version of LLaMA 3.2 in the browser using transformers.js. Ideally, I would like to load the base model once and then dynamically load and swap between different LoRA adapters at runtime based on the current task, without reloading the base model each time.

Is this supported in transformers.js? If so, are there any tutorials or examples illustrating how to set this up in a browser environment?

Any guidance or documentation on this would be greatly appreciated. Thank you!
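To make the question concrete, here is roughly the usage pattern I have in mind. The `pipeline` call mirrors the existing transformers.js API; the adapter calls (`loadAdapter`, `setAdapter`) are purely hypothetical names for the functionality I'm asking about, not real methods in the library:

```js
// Sketch of the desired workflow — NOT working code.
// `pipeline` exists in transformers.js; `loadAdapter`/`setAdapter` are
// hypothetical placeholders for runtime LoRA adapter swapping.
import { pipeline } from "@huggingface/transformers";

// Load the base model once (an ONNX export of the base weights).
const generator = await pipeline(
  "text-generation",
  "onnx-community/Llama-3.2-1B-Instruct"
);

// Hypothetical: fetch task-specific LoRA adapters without reloading the base.
const summarizeAdapter = await generator.loadAdapter("my-org/lora-summarize");
const codeAdapter = await generator.loadAdapter("my-org/lora-codegen");

// Hypothetical: swap the active adapter per task at runtime.
generator.setAdapter(summarizeAdapter);
const summary = await generator("Summarize: ...", { max_new_tokens: 128 });

generator.setAdapter(codeAdapter);
const code = await generator("Write a function that ...", { max_new_tokens: 256 });
```

If something along these lines is possible today (for example by merging adapter weights ahead of time, or any other supported mechanism), pointers would be very welcome.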
