fix: missing AutoencoderKL lora adapter #9807
Conversation
Thank you! Just a single comment.
Force-pushed from 09cd44d to 9fb4880.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
can you run make style && make quality?
The failing tests are related; can we look into them? I think we may need to add a @require_peft_backend, similar to
Sorry, I had actually forgotten to submit my reviews.
@@ -49,7 +49,7 @@

from diffusers.utils.torch_utils import randn_tensor

from ..test_modeling_common import ModelTesterMixin, UNetTesterMixin

from peft import LoraConfig
This needs to be guarded like this:

if is_peft_available():
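For reference, a minimal sketch of the full guard, assuming the test module pulls is_peft_available from diffusers.utils as other diffusers test files do:

```python
# Guarded import: the test module must stay importable even when peft
# is not installed, so the peft-only import sits behind the check.
from diffusers.utils import is_peft_available

if is_peft_available():
    from peft import LoraConfig
```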
@@ -299,7 +299,38 @@ def test_output_pretrained(self):

        self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-2))

    def test_lora_adapter(self):
This needs to be decorated with:

@require_peft_backend
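A minimal sketch of how the decorated test could look, assuming the PR gives AutoencoderKL the PEFT adapter mixin; the LoraConfig values and target module names here are illustrative, not the PR's exact test:

```python
import unittest

from diffusers import AutoencoderKL
from diffusers.utils import is_peft_available
from diffusers.utils.testing_utils import require_peft_backend

if is_peft_available():
    from peft import LoraConfig
    from peft.tuners.lora import LoraLayer


class AutoencoderKLLoraTests(unittest.TestCase):
    @require_peft_backend  # skips this test when peft is not installed
    def test_lora_adapter(self):
        model = AutoencoderKL()  # default config is enough for a smoke test
        config = LoraConfig(
            r=4,
            lora_alpha=4,
            target_modules=["to_q", "to_k", "to_v", "to_out.0"],
        )
        model.add_adapter(config)
        # Adapter injection should have replaced at least one attention
        # projection with a LoRA layer.
        self.assertTrue(any(isinstance(m, LoraLayer) for m in model.modules()))
```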
@beniz seems like this was not resolved?
@sayakpaul ah, apologies, I may have forgotten to push to the repo. Done. Thanks for your vigilance.
All done @yiyixuxu, I believe; let me know if anything else remains.
@beniz thanks! Possible to fix the quality issues by running make style && make quality?
I've fixed a missing dependency. FYI, running make style && make quality fails early on other unrelated files, so I've been looking at the underlying ruff calls to apply them to the relevant files.
@beniz I pushed the quality fixes directly to your branch, which I hope is okay. If not, please let me know and I will revert immediately.
Much appreciated, thank you.
* fix: missing AutoencoderKL lora adapter
* fix
---------
Co-authored-by: Sayak Paul <[email protected]>
…h bnb components (#9840)
* allow device placement when using bnb quantization.
* warning.
* tests
* fixes
* docs.
* require accelerate version.
* remove print.
* revert to()
* tests
* fixes
* fix: missing AutoencoderKL lora adapter (#9807)
  * fix: missing AutoencoderKL lora adapter
  * fix
  Co-authored-by: Sayak Paul <[email protected]>
* fixes
* fix condition test
* updates
* updates
* remove is_offloaded.
* fixes
* better
* empty
---------
Co-authored-by: Emmanuel Benazera <[email protected]>
What does this PR do?
This PR fixes the missing LoRA adapter support on the VAE (the AutoencoderKL class).
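For context, a minimal sketch of what the fix enables; the checkpoint name and LoraConfig values below are illustrative, and it assumes AutoencoderKL now exposes add_adapter through the PEFT integration:

```python
from diffusers import AutoencoderKL
from peft import LoraConfig

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
config = LoraConfig(r=4, lora_alpha=4, target_modules=["to_q", "to_k", "to_v", "to_out.0"])
# Before this PR, AutoencoderKL lacked the adapter mixin, so this call failed.
vae.add_adapter(config)
```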
Discussion is here: #9771
Related reports:
GaParmar/img2img-turbo#64
radames/Real-Time-Latent-Consistency-Model#38
Who can review?
cc @sayakpaul