
[LoRA] feat: save_lora_adapter() #9862


Merged: 23 commits merged into main on Nov 19, 2024.

Conversation

@sayakpaul (Member) commented Nov 4, 2024:

What does this PR do?

Complementing #9712, this PR adds a save_lora_adapter() method for models that support LoRA loading. It also adds tests to ensure things don't break.

Additionally, this PR:

  • Deprecates the load_attn_procs() method when it tries to load a LoRA state dict (and adds tests for it).
  • Replaces the load_attn_procs() method with load_lora_adapter() in src/diffusers/loaders/lora_pipeline.py. I have run the integration tests for the SD and SDXL LoRAs (as those are impacted by this change), and the tests passed.
  • Some minor code improvements on the loading side (see the inline review comments below).

Just to be sure, I have also run the integration tests under tests/lora to confirm they pass.
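
A rough usage sketch of the pair of methods (the checkpoint path, LoRA path, and adapter name below are hypothetical placeholders, and the keyword arguments are assumed to mirror the loading side added in #9712):

```python
from diffusers import UNet2DConditionModel

# Hypothetical local paths, for illustration only.
unet = UNet2DConditionModel.from_pretrained("path/to/checkpoint", subfolder="unet")

# load_lora_adapter() (added in #9712) loads a LoRA state dict into the model.
unet.load_lora_adapter("path/to/lora", adapter_name="my_adapter")

# New in this PR: write the adapter's weights back to disk.
unet.save_lora_adapter("lora-out", adapter_name="my_adapter")
```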

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@@ -877,8 +892,6 @@ def _set_gradient_checkpointing_new(self, module, value=False):
        model = model_class_copy(**init_dict)
        model.enable_gradient_checkpointing()

-       print(f"{set(modules_with_gc_enabled.keys())=}, {expected_set=}")
sayakpaul (Member, Author):

Unrelated but my hands were itching.

@BenjaminBossan (Member) left a comment:

Thanks for adding a method to save the LoRA adapter. Overall, this looks good, I have a few comments but no blockers.

Comment on lines +199 to +201:

    if prefix is not None:
        keys = list(state_dict.keys())
        model_keys = [k for k in keys if k.startswith(f"{prefix}.")]
sayakpaul (Member, Author):

Just a better and more robust way to filter the state dict.
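
To illustrate why matching on f"{prefix}." is more robust (the keys below are invented for the example): a bare startswith(prefix) would also match sibling entries whose names merely share the spelling of the prefix.

```python
state_dict = {
    "unet.lora_A.weight": 0,        # belongs to the "unet" prefix
    "unet_extra.lora_A.weight": 1,  # hypothetical sibling with a shared spelling
}
prefix = "unet"

loose = [k for k in state_dict if k.startswith(prefix)]         # matches both keys
strict = [k for k in state_dict if k.startswith(f"{prefix}.")]  # matches only the first
assert strict == ["unet.lora_A.weight"]
```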

Comment on lines +205 to +209:

    if len(state_dict) > 0:
        if adapter_name in getattr(self, "peft_config", {}):
            raise ValueError(
                f"Adapter name {adapter_name} already in use in the model - please select a new adapter name."
            )
sayakpaul (Member, Author):

Catching this error earlier than before.
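
The practical effect, sketched with the hypothetical placeholders from the earlier snippet: reusing an adapter name now fails before any weights are loaded.

```python
unet.load_lora_adapter("path/to/lora", adapter_name="style")  # registers "style"
unet.load_lora_adapter("path/to/lora", adapter_name="style")  # now raises ValueError up front
```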

        rank = {}
        for key, val in state_dict.items():
            if "lora_B" in key:
                rank[key] = val.shape[1]

        if network_alphas is not None and len(network_alphas) >= 1:
-           alpha_keys = [k for k in network_alphas.keys() if k.startswith(prefix) and k.split(".")[0] == prefix]
+           alpha_keys = [k for k in network_alphas.keys() if k.startswith(f"{prefix}.")]
sayakpaul (Member, Author):

Removing redundant conditions.
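
On the rank extraction in the same hunk, a minimal sketch assuming the standard peft weight layout, where lora_A has shape (rank, in_features) and lora_B has shape (out_features, rank):

```python
import torch

out_features, in_features, rank = 64, 32, 4
lora_A = torch.zeros(rank, in_features)   # down-projection
lora_B = torch.zeros(out_features, rank)  # up-projection

# The second dimension of lora_B is the adapter rank,
# which is what the loop above records per key.
assert lora_B.shape[1] == rank
```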

Comment on lines +227 to +234:

    if lora_config_kwargs["use_dora"]:
        if is_peft_version("<", "0.9.0"):
            raise ValueError(
                "You need `peft` 0.9.0 at least to use DoRA-enabled LoRAs. Please upgrade your installation of `peft`."
            )
    else:
-       lora_config_kwargs.pop("use_dora")
+       if is_peft_version("<", "0.9.0"):
+           lora_config_kwargs.pop("use_dora")
sayakpaul (Member, Author):

Breaking up the conditionals to make them more explicit.
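
A standalone sketch of the resulting gate, assuming a plain tuple comparison in place of diffusers' is_peft_version() helper:

```python
def gate_use_dora(lora_config_kwargs, peft_version):
    """Illustrative mirror of the gating logic above, not the library code."""
    old_peft = peft_version < (0, 9, 0)
    if lora_config_kwargs.get("use_dora"):
        if old_peft:
            raise ValueError("DoRA-enabled LoRAs need peft>=0.9.0.")
    elif old_peft:
        # Older LoraConfig does not accept `use_dora`, so drop the key.
        lora_config_kwargs.pop("use_dora", None)
    return lora_config_kwargs

gate_use_dora({"use_dora": False}, (0, 8, 2))  # key dropped for old peft
gate_use_dora({"use_dora": True}, (0, 9, 0))   # kept; DoRA is supported
```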

@@ -1078,30 +1078,7 @@ def test_load_sharded_checkpoint_device_map_from_hub_local_subfolder(self):
        assert new_output.sample.shape == (4, 4, 16, 16)

-   @require_peft_backend
-   def test_lora(self):
sayakpaul (Member, Author):

This no longer needs to be tested here.

@BenjaminBossan (Member) left a comment:

Thanks for this PR, LGTM. I assume the docstring will be completed before merging.

sayakpaul requested a review from DN6 on November 5, 2024.
@DN6 (Collaborator) left a comment:

LGTM 👍🏽

sayakpaul requested a review from yiyixuxu on November 9, 2024.

sayakpaul (Member, Author):
@yiyixuxu could you give this a review?

@bghira (Contributor) commented Nov 16, 2024:

why isn't this merged yet?

sayakpaul (Member, Author):

Pending review from Yiyi. It will take a bit, as she is off for a few days.

yiyixuxu merged commit 7d0b9c4 into main on Nov 19, 2024. 17 of 18 checks passed.
yiyixuxu deleted the save-lora-adapters branch on November 19, 2024.
@yiyixuxu (Collaborator):

thanks!

sayakpaul added a commit that referenced this pull request Dec 23, 2024
* feat: save_lora_adapter.