update to latest version #2


Merged

staoxiao merged 275 commits into staoxiao:main on Feb 8, 2025

Conversation

@staoxiao (Owner) commented on Feb 8, 2025

What does this PR do?

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

hlky and others added 30 commits December 16, 2024 09:24
* Add `dynamic_shifting` to SD3

* calculate_shift

* FlowMatchHeunDiscreteScheduler doesn't support mu

* Inpaint/img2img
use_flow_sigmas copy
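
For context on the `calculate_shift` helper referenced in the commit above, here is a minimal sketch of the dynamic-shifting idea, assuming the linear interpolation and default values used by the flow-match pipelines in diffusers; treat the defaults as illustrative rather than as the exact code added in this PR:

```python
def calculate_shift(
    image_seq_len: int,
    base_seq_len: int = 256,
    max_seq_len: int = 4096,
    base_shift: float = 0.5,
    max_shift: float = 1.16,
) -> float:
    """Linearly interpolate the timestep-shift parameter `mu` from the image sequence length."""
    slope = (max_shift - base_shift) / (max_seq_len - base_seq_len)
    intercept = base_shift - slope * base_seq_len
    return image_seq_len * slope + intercept


# The resulting `mu` is forwarded to schedulers that support dynamic shifting,
# e.g. `scheduler.set_timesteps(num_inference_steps, mu=mu)`. As the commit notes,
# FlowMatchHeunDiscreteScheduler does not accept `mu`.
mu = calculate_shift(image_seq_len=1024)
```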
…10214)

* Use non-human subject in StableDiffusion3ControlNetPipeline example

* make style
Fix repaint scheduler
* torchao quantizer


---------

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
* attnprocessors

* lora

* make style

* fix

* fix

* sana

* typo
add contribution note for lawrence.
* add lora support for ltx

* add tests

* fix copied from comments

* update

---------

Co-authored-by: Sayak Paul <[email protected]>
* update (repeated 27 times)

* Update src/diffusers/quantizers/gguf/utils.py

Co-authored-by: Sayak Paul <[email protected]>

* update (repeated 10 times)

* Update docs/source/en/quantization/gguf.md

Co-authored-by: Steven Liu <[email protected]>

* update

* update

* update

* update

---------

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
* update (repeated 46 times)

* Update src/diffusers/models/transformers/transformer_mochi.py

Co-authored-by: Aryan <[email protected]>

---------

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Aryan <[email protected]>
delete_adapters

Co-authored-by: Sayak Paul <[email protected]>
* feat: lora support for SANA.

* make fix-copies

* rename test class.

* attention_kwargs -> cross_attention_kwargs.

* Revert "attention_kwargs -> cross_attention_kwargs."

This reverts commit 23433bf.

* make full use of the 119-character max line length

* sana lora fine-tuning script.

* readme

* add a note about the supported models.

* Apply suggestions from code review

Co-authored-by: Aryan <[email protected]>

* style

* docs for attention_kwargs.

* remove lora_scale from pag pipeline.

* copy fix

---------

Co-authored-by: Aryan <[email protected]>
* Use `torch` in `get_2d_rotary_pos_embed`

* Add deprecation
* Support pass kwargs to sd3 custom attention processor


---------

Co-authored-by: hlky <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>
* flux_control_inpaint - failing test_flux_different_prompts

* removing test_flux_different_prompts?

* fix style

* fix from PR comments

* fix style

* reducing guidance_scale in demo

* Update src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py

Co-authored-by: hlky <[email protected]>

* make

* prepare_latents is not copied from

* update docs

* typos

---------

Co-authored-by: affromero <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: hlky <[email protected]>
hlky and others added 29 commits January 27, 2025 08:15
* [training] Convert to ImageFolder script

* make
#10663)

controlnet union XL: make control_image immutable

When this argument is passed a list, `__call__` modifies its contents; because
the list is passed by reference, the caller's list gets modified unexpectedly.

Make a copy at the start of the method so this does not happen (see the sketch
after this commit).

Co-authored-by: Teriks <[email protected]>
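
A minimal sketch of the pass-by-reference issue described above and the copy-at-entry fix; the function and argument names here are placeholders, not the actual pipeline code:

```python
from typing import List


def fake_pipeline_call(control_image: List) -> None:
    # Python passes the list by reference, so mutating it here would also
    # mutate the caller's list as an unexpected side effect.
    # The fix: copy the argument at the start of the method, then work on the copy.
    control_image = list(control_image)
    control_image.pop()  # only the local copy is modified


images = ["img_a", "img_b"]
fake_pipeline_call(images)
assert images == ["img_a", "img_b"]  # the caller's list is unchanged
```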
* start pyramid attention broadcast

* add coauthor

Co-Authored-By: Xuanlei Zhao <[email protected]>

* update

* make style

* update

* make style

* add docs

* add tests

* update

* Update docs/source/en/api/pipelines/cogvideox.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/en/api/pipelines/cogvideox.md

Co-authored-by: Steven Liu <[email protected]>

* Pyramid Attention Broadcast rewrite + introduce hooks (#9826)

* rewrite implementation with hooks

* make style

* update

* merge pyramid-attention-rewrite-2

* make style

* remove changes from latte transformer

* revert docs changes

* better debug message

* add todos for future

* update tests

* make style

* cleanup

* fix

* improve log message; fix latte test

* refactor

* update

* update

* update

* revert changes to tests

* update docs

* update tests

* Apply suggestions from code review

Co-authored-by: Steven Liu <[email protected]>

* update

* fix flux test

* reorder

* refactor

* make fix-copies

* update docs

* fixes

* more fixes

* make style

* update tests

* update code example

* make fix-copies

* refactor based on reviews

* use maybe_free_model_hooks

* CacheMixin

* make style

* update

* add current_timestep property; update docs

* make fix-copies

* update

* improve tests

* try circular import fix

* apply suggestions from review

* address review comments

* Apply suggestions from code review

* refactor hook implementation

* add test suite for hooks

* PAB Refactor (#10667)

* update

* update

* update

---------

Co-authored-by: DN6 <[email protected]>

* update

* fix remove hook behaviour

---------

Co-authored-by: Xuanlei Zhao <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: DN6 <[email protected]>
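
As a rough illustration of the hook-based caching that the Pyramid Attention Broadcast commits above refer to (reusing a block's output for several consecutive denoising steps instead of recomputing it every step), here is a toy sketch; the fixed skip interval and the forward-patching approach are simplifications, not the actual diffusers hook or `CacheMixin` implementation:

```python
import torch
from torch import nn


class CachedOutputHook:
    """Toy output cache: recompute every `skip_interval` calls, otherwise reuse the cached result."""

    def __init__(self, module: nn.Module, skip_interval: int = 2):
        self._inner_forward = module.forward
        self.skip_interval = skip_interval
        self.counter = 0
        self.cache = None
        module.forward = self._forward  # patch the module's forward

    def _forward(self, *args, **kwargs):
        if self.cache is None or self.counter % self.skip_interval == 0:
            self.cache = self._inner_forward(*args, **kwargs)
        self.counter += 1
        return self.cache


layer = nn.Linear(8, 8)
CachedOutputHook(layer, skip_interval=2)
x = torch.randn(1, 8)
out_first = layer(x)   # computed
out_second = layer(x)  # reused from the cache
assert torch.equal(out_first, out_second)
```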
…de (#10600)

* fix: refer to use_framewise_encoding on AutoencoderKLHunyuanVideo._encode

* fix: comment about tile_sample_min_num_frames

---------

Co-authored-by: Aryan <[email protected]>
* update

* remove unused fn

* apply suggestions based on review

* update + cleanup 🧹

* more cleanup 🧹

* make fix-copies

* update test
…_max_memory` (#10669)

* conditionally check if compute capability is met.

* log info.

* fix condition.

* updates

* updates

* updates

* updates
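
For the conditional compute-capability check mentioned above, a minimal sketch of how such a guard can be written with PyTorch; the threshold is an assumed placeholder, not necessarily the value used in this PR:

```python
import logging

import torch

logger = logging.getLogger(__name__)

# Assumed threshold for illustration; the real requirement depends on the gated feature.
REQUIRED_CAPABILITY = (8, 0)

if torch.cuda.is_available():
    capability = torch.cuda.get_device_capability()
    if capability < REQUIRED_CAPABILITY:
        # Log info and fall back instead of failing outright.
        logger.info(
            "GPU compute capability %s is below %s; skipping the capability-gated path.",
            capability,
            REQUIRED_CAPABILITY,
        )
```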
…ity pipelines in float16 mode (#10670)

Fix unexpected pipeline dtype change when using SDXL reference community pipelines
update LlamaTokenizer in HunyuanVideo tests
* support StableDiffusionAdapterPipeline.from_single_file

* make style

---------

Co-authored-by: Teriks <[email protected]>
Co-authored-by: hlky <[email protected]>
* fix enable memory efficient attention on ROCm while calling the CK implementation

* Update attention_processor.py

refactor how an element is picked from a set
* Update train_instruct_pix2pix.py

Fix inconsistent random transform in instruct_pix2pix

* Update train_instruct_pix2pix_sdxl.py
…ity_for_timestep_sampling (#10699)

* feat(training-utils): support device and dtype params in compute_density_for_timestep_sampling

* chore: update type hint

* refactor: use union for type hint

---------

Co-authored-by: Sayak Paul <[email protected]>
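
A brief usage sketch for the change above, assuming the `device` and `dtype` keyword arguments this commit adds to `compute_density_for_timestep_sampling`; the other arguments follow the existing `diffusers.training_utils` API:

```python
import torch

from diffusers.training_utils import compute_density_for_timestep_sampling

# Sample the timestep density directly on the target device and in the training dtype,
# using the `device` and `dtype` parameters introduced by this commit.
u = compute_density_for_timestep_sampling(
    weighting_scheme="logit_normal",
    batch_size=4,
    logit_mean=0.0,
    logit_std=1.0,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dtype=torch.float32,
)
```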
* fix dequantization for latest bnb.

* smol fixes.

* fix type annotation

* update peft link

* updates
* Fix Doc Tutorial.

* Add 4 Notebooks and improve their example scripts.
…#10714)

* Update pipeline_utils.py

Added `Self` to the return type of the `from_pretrained` method so type inference correctly recognizes the concrete pipeline class

* Use typing_extensions

---------

Co-authored-by: hlky <[email protected]>
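
A minimal sketch of the typing change described above: annotating the classmethod constructor's return type with `Self` (imported from `typing_extensions` for older Python versions) so type checkers infer the concrete subclass; the class names here are placeholders, not the actual `DiffusionPipeline` code:

```python
from typing_extensions import Self


class BasePipeline:
    @classmethod
    def from_pretrained(cls, repo_id: str) -> Self:
        # Returning `Self` instead of `BasePipeline` lets type checkers see that
        # `MyPipeline.from_pretrained(...)` yields a `MyPipeline`, not the base class.
        return cls()


class MyPipeline(BasePipeline):
    def run(self) -> str:
        return "ok"


pipe = MyPipeline.from_pretrained("some/repo")  # inferred as MyPipeline
pipe.run()  # recognized by the type checker
```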
* Added `auto_load_textual_inversion` and `auto_load_lora_weights`

* update README.md

* fix

* make quality

* Fix and `make style`
* NPU adaptation for Sana (repeated 18 times)

* [bugfix] NPU adaptation for Sana

---------

Co-authored-by: J石页 <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
@staoxiao merged commit b0c6267 into staoxiao:main on Feb 8, 2025