
[QoL] Small fixes #1


Merged

sayakpaul merged 9 commits into kohya-lora-aux-features from fixes on Jul 19, 2023

Conversation

sayakpaul
Collaborator

Related to huggingface#4147. Comments inline.

sayakpaul requested a review from isidentical July 19, 2023 08:57
@@ -554,7 +554,7 @@ def test_a1111(self):

         images = images[0, -3:, -3:, -1].flatten()

-        expected = np.array([0.3743, 0.3893, 0.3835, 0.3891, 0.3949, 0.3649, 0.3858, 0.3802, 0.3245])
+        expected = np.array([0.3636, 0.3708, 0.3694, 0.3679, 0.3829, 0.3677, 0.3692, 0.3688, 0.3292])
Collaborator Author

Because we now have better coverage of the keys that are loaded in diffusers, thanks to you :)

@sayakpaul
Collaborator Author

Currently, the LoRA unloading slow test is failing. Investigating.

@sayakpaul
Collaborator Author

sayakpaul commented Jul 19, 2023

Ah, I think I know why this is happening.

We are not unloading the auxiliary modules when unload_lora_weights() is called. We need to address this.

This won't happen for LoRA files that don't have auxiliary modules in the UNet (the following script, for example):

from diffusers import StableDiffusionPipeline
import torch
import numpy as np

prompt = "masterpiece, best quality, mountain"
num_inference_steps = 2

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", safety_checker=None
).to("cuda")

# Baseline run without any LoRA weights loaded.
generator = torch.manual_seed(0)
initial_images = pipe(
    prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
).images
initial_images = initial_images[0, -3:, -3:, -1].flatten()

lora_model_id = "sayakpaul/dog-dreambooth-lora-pt2"
# lora_filename = "Colored_Icons_by_vizsumit.safetensors"

print("Loading LoRA weights...")
# pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
pipe.load_lora_weights(lora_model_id)

# Re-seed so each run starts from the same generator state.
generator = torch.manual_seed(0)
lora_images = pipe(
    prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
).images
lora_images = lora_images[0, -3:, -3:, -1].flatten()

print("Unloading LoRA weights...")
pipe.unload_lora_weights()
generator = torch.manual_seed(0)
unloaded_lora_images = pipe(
    prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
).images
unloaded_lora_images = unloaded_lora_images[0, -3:, -3:, -1].flatten()

print(f"initial_images: {initial_images}")
print(f"lora_images: {lora_images}")
print(f"unloaded_lora_images: {unloaded_lora_images}")

# Should print True once unloading correctly restores the original weights.
print(np.allclose(initial_images, unloaded_lora_images, atol=1e-4))

Will brainstorm.
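
For reference, a minimal sketch of the direction a fix could take (hypothetical, not the actual change; it assumes the auxiliary modules were installed as UNet attention processors and that the UNet exposes set_default_attn_processor()):

# Hypothetical sketch, not the actual fix: besides removing the LoRA layers,
# unloading also needs to revert any auxiliary modules that were swapped into
# the UNet, e.g. by restoring the default attention processors.
pipe.unload_lora_weights()
pipe.unet.set_default_attn_processor()  # drop auxiliary processors, if any were set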

@sayakpaul
Collaborator Author

Update: 9ae0186 should have fixed #1 (comment)

Co-authored-by: Batuhan Taskaya <[email protected]>
sayakpaul merged commit c2239a4 into kohya-lora-aux-features Jul 19, 2023
sayakpaul deleted the fixes branch July 19, 2023 15:43
isidentical pushed a commit that referenced this pull request Mar 3, 2024
* Implement `CustomDiffusionAttnProcessor2_0`

* Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1)

* Update attnprocessor.md

* Update attention_processor.py

* Interops for `CustomDiffusionAttnProcessor2_0`.

* Formatted `attention_processor.py`.

* Formatted doc-string in `attention_processor.py`

* Conditional CustomDiffusion2_0 for training example.

* Remove unnecessary reference impl in comments.

* Fix `save_attn_procs`.
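
For context, the "Conditional CustomDiffusion2_0" item above refers to picking the processor based on PyTorch's capabilities; a minimal sketch, assuming both classes are importable from diffusers.models.attention_processor:

import torch
from diffusers.models.attention_processor import (
    CustomDiffusionAttnProcessor,
    CustomDiffusionAttnProcessor2_0,
)

# Use the PyTorch 2.0 processor (backed by scaled_dot_product_attention) when
# available; otherwise fall back to the vanilla implementation.
attn_cls = (
    CustomDiffusionAttnProcessor2_0
    if hasattr(torch.nn.functional, "scaled_dot_product_attention")
    else CustomDiffusionAttnProcessor
)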