
fix: missing AutoencoderKL lora adapter #9807


Merged: 3 commits merged into huggingface:main from fix-vae-lora on Dec 3, 2024

Conversation

@beniz (Contributor) commented Oct 30, 2024

What does this PR do?

This PR fixes the missing lora adapter with VAE (AutoencoderKL class).
Discussion is here: #9771
Related reports:
GaParmar/img2img-turbo#64
radames/Real-Time-Latent-Consistency-Model#38
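
For context, a minimal sketch of the usage this fix enables: attaching a LoRA adapter to the VAE through diffusers' PEFT integration. The rank, alpha, and target modules below are illustrative choices, not taken from the PR or the linked reports, and it assumes the fix exposes the standard add_adapter API on AutoencoderKL:

# Sketch: attach a LoRA adapter to an AutoencoderKL instance via the PEFT integration.
from peft import LoraConfig
from diffusers import AutoencoderKL

vae = AutoencoderKL()  # small default config; a pretrained VAE works the same way
lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
vae.add_adapter(lora_config)

# LoRA layers should now appear among the VAE's parameters.
assert any("lora" in name for name, _ in vae.named_parameters())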

Who can review?

cc @sayakpaul

@sayakpaul (Member) left a comment

Thank you! Just a single comment.

@beniz force-pushed the fix-vae-lora branch 4 times, most recently from 09cd44d to 9fb4880, on November 2, 2024 at 11:42
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@yiyixuxu (Collaborator) commented Nov 6, 2024

can you run make style?

@yiyixuxu (Collaborator) commented Nov 8, 2024

the failing tests are related, can we look into them?

I think we may need to add a @require_peft_backend similar to

@sayakpaul (Member) left a comment

Sorry, I had actually forgotten to submit my reviews.

@@ -49,7 +49,7 @@
from diffusers.utils.torch_utils import randn_tensor

from ..test_modeling_common import ModelTesterMixin, UNetTesterMixin

from peft import LoraConfig
Member review comment:

This needs to be guarded like this:
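
The exact snippet from the review is not preserved in this page; a plausible sketch of such a guard, following the optional-backend pattern used elsewhere in the diffusers test suite:

# Sketch: only import peft when it is actually available, so the test module
# still imports on environments without the PEFT backend.
from diffusers.utils import is_peft_available

if is_peft_available():
    from peft import LoraConfig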

@@ -299,7 +299,38 @@ def test_output_pretrained(self):

self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-2))

def test_lora_adapter(self):
Member review comment:

This needs to be decorated with:
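
The decorator itself is not shown in this page; presumably the require_peft_backend test decorator mentioned above, roughly:

# Sketch: skip the LoRA test entirely when the PEFT backend is unavailable.
from diffusers.utils.testing_utils import require_peft_backend

class AutoencoderKLLoraTests:  # illustrative class name; the real test sits in the existing test class
    @require_peft_backend
    def test_lora_adapter(self):
        ...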

Member review comment:

@beniz seems like this was not resolved?

@beniz (Contributor, Author) replied:

@sayakpaul ah, apologies, I may have forgotten to push to the repo. Done. Thanks for your vigilance.

@beniz (Contributor, Author) commented Nov 11, 2024

the failing tests are related, can we look into them?

I think we may need to add a @require_peft_backend similar to

All done @yiyixuxu I believe, let me know if anything else remains.

@sayakpaul (Member) commented:

@beniz thanks! Possible to fix the quality issues by running make style && make quality?

@beniz (Contributor, Author) commented Nov 29, 2024

@beniz thanks! Possible to fix the quality issues by running make style && make quality?

I've fixed a missing dependency. FYI, running make style && make quality fails early on other unrelated files, so I've been looking at the underlying ruff calls and applying them to the relevant files only.

@sayakpaul (Member) commented:

@beniz I pushed the quality fixes directly to your branch, which I hope is okay. If not, please let me know and I will revert immediately.

@beniz (Contributor, Author) commented Nov 29, 2024

@beniz I pushed the quality fixes directly to your branch, which I hope is okay. If not, please let me know and I will revert immediately.

Much appreciated, thank you.

@sayakpaul requested a review from yiyixuxu on November 29, 2024 at 11:48
@yiyixuxu merged commit 963ffca into huggingface:main on Dec 3, 2024
15 checks passed
sayakpaul added a commit that referenced this pull request Dec 3, 2024
* fix: missing AutoencoderKL lora adapter

* fix

---------

Co-authored-by: Sayak Paul <[email protected]>
lawrence-cj pushed a commit to lawrence-cj/diffusers that referenced this pull request Dec 4, 2024
* fix: missing AutoencoderKL lora adapter

* fix

---------

Co-authored-by: Sayak Paul <[email protected]>
sayakpaul added a commit that referenced this pull request Dec 4, 2024
…h bnb components (#9840)

* allow device placement when using bnb quantization.

* warning.

* tests

* fixes

* docs.

* require accelerate version.

* remove print.

* revert to()

* tests

* fixes

* fix: missing AutoencoderKL lora adapter (#9807)

* fix: missing AutoencoderKL lora adapter

* fix

---------

Co-authored-by: Sayak Paul <[email protected]>

* fixes

* fix condition test

* updates

* updates

* remove is_offloaded.

* fixes

* better

* empty

---------

Co-authored-by: Emmanuel Benazera <[email protected]>
sayakpaul added a commit that referenced this pull request Dec 23, 2024
* fix: missing AutoencoderKL lora adapter

* fix

---------

Co-authored-by: Sayak Paul <[email protected]>
sayakpaul added a commit that referenced this pull request Dec 23, 2024
…h bnb components (#9840)

* allow device placement when using bnb quantization.

* warning.

* tests

* fixes

* docs.

* require accelerate version.

* remove print.

* revert to()

* tests

* fixes

* fix: missing AutoencoderKL lora adapter (#9807)

* fix: missing AutoencoderKL lora adapter

* fix

---------

Co-authored-by: Sayak Paul <[email protected]>

* fixes

* fix condition test

* updates

* updates

* remove is_offloaded.

* fixes

* better

* empty

---------

Co-authored-by: Emmanuel Benazera <[email protected]>