Hello,
I am wondering whether it is possible to run a LoRA fine-tuned version of LLaMA 3.2 in the browser using transformers.js. Ideally, I would like to load the base model once and then dynamically load and swap between different LoRA adapters at runtime, depending on the current task, without reloading the base model each time.
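To make the request concrete, here is a rough sketch of the usage pattern I have in mind. Only the `pipeline` import reflects the real transformers.js API; `loadAdapter` and `setActiveAdapter` are hypothetical names for the functionality I am asking about, not existing library calls:

```js
import { pipeline } from '@huggingface/transformers';

// Load the base model once (real transformers.js API).
const generator = await pipeline('text-generation', 'base-llama-3.2-model-id');

// Hypothetical: fetch LoRA adapter weights without reloading the base model.
const summarizeAdapter = await generator.loadAdapter('my-org/lora-summarize');
const translateAdapter = await generator.loadAdapter('my-org/lora-translate');

// Hypothetical: swap the active adapter at runtime based on the task.
generator.setActiveAdapter(summarizeAdapter);
const summary = await generator('Summarize: ...');

generator.setActiveAdapter(translateAdapter);
const translation = await generator('Translate to French: ...');
```

If something along these lines (or merging adapter weights into the base model at load time) is already achievable, pointers to the relevant API would be very helpful.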
Is this supported in transformers.js? If so, are there any tutorials or examples illustrating how to set this up in a browser environment?
Any guidance or documentation on this would be greatly appreciated. Thank you!