[QoL] Small fixes #1
```diff
@@ -554,7 +554,7 @@ def test_a1111(self):

         images = images[0, -3:, -3:, -1].flatten()

-        expected = np.array([0.3743, 0.3893, 0.3835, 0.3891, 0.3949, 0.3649, 0.3858, 0.3802, 0.3245])
+        expected = np.array([0.3636, 0.3708, 0.3694, 0.3679, 0.3829, 0.3677, 0.3692, 0.3688, 0.3292])
```
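The updated expected values follow the slice-comparison pattern these slow tests use: take a small corner slice of the output image and compare it to stored reference values within a tolerance. A minimal sketch of that pattern (the helper name and dummy data are hypothetical, not diffusers' actual test code):

```python
import numpy as np

# Hypothetical helper mirroring the slice-comparison pattern in the test:
# flatten the bottom-right 3x3 patch of the last channel and require it to
# match the stored reference values within a small absolute tolerance.
def assert_image_slice_close(images, expected, atol=1e-4):
    image_slice = images[0, -3:, -3:, -1].flatten()
    max_diff = np.abs(image_slice - expected).max()
    assert max_diff < atol, f"max diff {max_diff} exceeds tolerance {atol}"

# Dummy data standing in for pipeline output of shape (batch, H, W, C).
images = np.zeros((1, 64, 64, 3), dtype=np.float32)
expected = np.zeros(9, dtype=np.float32)
assert_image_slice_close(images, expected)
```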
Because we have better coverage of the keys that are loaded in diffusers, thanks to you :)

Currently, the LoRA unloading slow test is failing. Investigating.
Ah, I think I know why this is happening. We are not unloading the auxiliary modules when `unload_lora_weights()` is called. This won't happen for LoRA files that don't have auxiliary modules in the UNet (the following, for example):

```python
from diffusers import StableDiffusionPipeline
import torch
import numpy as np

generator = torch.manual_seed(0)
prompt = "masterpiece, best quality, mountain"
num_inference_steps = 2

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", safety_checker=None
).to("cuda")
initial_images = pipe(
    prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
).images
initial_images = initial_images[0, -3:, -3:, -1].flatten()

lora_model_id = "sayakpaul/dog-dreambooth-lora-pt2"
# lora_filename = "Colored_Icons_by_vizsumit.safetensors"

print("Loading LoRA weights...")
# pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
pipe.load_lora_weights(lora_model_id)
lora_images = pipe(
    prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
).images
lora_images = lora_images[0, -3:, -3:, -1].flatten()

print("Unloading LoRA weights...")
pipe.unload_lora_weights()
generator = torch.manual_seed(0)
unloaded_lora_images = pipe(
    prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps
).images
unloaded_lora_images = unloaded_lora_images[0, -3:, -3:, -1].flatten()

print(f"initial_images: {initial_images}")
print(f"lora_images: {lora_images}")
print(f"unloaded_lora_images: {unloaded_lora_images}")
print(np.allclose(initial_images, unloaded_lora_images, atol=1e-4))
```

Will brainstorm.
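The failure mode above can be illustrated with a toy model: loading LoRA must record whatever it injects so that unloading can fully restore the original state. A hedged sketch of that load/unload contract (this is an assumption-laden toy, not diffusers' actual implementation; class and method names are hypothetical):

```python
import numpy as np

# Toy illustration of the load/unload contract: loading records the original
# weight before adding a low-rank delta, and unloading restores it so no
# auxiliary parameters linger after unload.
class TinyModel:
    def __init__(self):
        self.weight = np.eye(4)
        self._originals = {}

    def load_lora(self, down, up):
        # Remember the pristine weight, then apply the low-rank update.
        self._originals["weight"] = self.weight
        self.weight = self.weight + up @ down

    def unload_lora(self):
        # Restore the original weight and drop the recorded auxiliary state.
        if "weight" in self._originals:
            self.weight = self._originals.pop("weight")

model = TinyModel()
rng = np.random.default_rng(0)
down, up = rng.normal(size=(2, 4)), rng.normal(size=(4, 2))
model.load_lora(down, up)
model.unload_lora()
assert np.allclose(model.weight, np.eye(4))  # state fully restored
```

If unload skips any recorded module (the bug suspected above for auxiliary UNet modules), the post-unload output diverges from the pre-load output, which is exactly what the `np.allclose` check in the reproduction script detects.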
Update: 9ae0186 should have fixed #1 (comment)
Co-authored-by: Batuhan Taskaya <[email protected]>
* Implement `CustomDiffusionAttnProcessor2_0`
* Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1)
* Update attnprocessor.md
* Update attention_processor.py
* Interops for `CustomDiffusionAttnProcessor2_0`.
* Formatted `attention_processor.py`.
* Formatted doc-string in `attention_processor.py`
* Conditional CustomDiffusion2_0 for training example.
* Remove unnecessary reference impl in comments.
* Fix `save_attn_procs`.
Related to huggingface#4147. Comments inline.