How can I make the diffusers pipeline use a .safetensors file for SDXL? #4029
You can do the following to directly load the safetensors and fp16 variants of the checkpoints.
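A minimal sketch of the kind of call being described, assuming the official `stabilityai/stable-diffusion-xl-base-1.0` repo id:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# variant="fp16" fetches the half-precision weight files, and
# use_safetensors=True prefers .safetensors shards over .bin files.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")
```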
Thank you. How can I add a LoRA safetensors file trained with the Kohya script to this pipeline?
Now, that deviates a bit from the original issue you posted. But we have a document here: https://huggingface.co/docs/diffusers/main/en/training/lora#supporting-a1111-themed-lora-checkpoints-from-diffusers. We have ongoing threads on Kohya:
So, to centralize the discussions there, I am going to close this thread, assuming #4029 (comment) solved your initial query. If not, please feel free to reopen.
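For reference, recent diffusers releases can load Kohya/A1111-style LoRA checkpoints through `load_lora_weights`; a hedged sketch, with placeholder paths:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)

# Attach a Kohya-trained LoRA; the path below is a placeholder.
pipe.load_lora_weights("/path/to/kohya_lora.safetensors")
```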
Please forgive this comment on a closed ticket, but this may be helpful to others who stumbled upon this issue. For others who got here via Google and are trying to load a standalone safetensors file (like one downloaded from a website that aggregates models), please try `from_single_file`. Loading a standalone safetensors file with `from_pretrained` will not work, because `from_pretrained` expects a diffusers-format model repo or directory rather than a single checkpoint file.
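A hedged sketch of loading a standalone checkpoint with `from_single_file` (the file path is a placeholder):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# from_single_file reads one monolithic .safetensors checkpoint
# instead of a diffusers-format repo of separate component folders.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/my_sdxl_model.safetensors",
    torch_dtype=torch.float16,
)
```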
@JosephCatrambone What about when running offline, does `from_single_file` still download anything?
If the safetensors file requires a text_encoder, then one will still be downloaded. There is a flag to disable this if your system cannot (or should not) connect to the internet while deployed: https://huggingface.co/docs/diffusers/v0.24.0/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.local_files_only
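A sketch of that flag in use (the path is a placeholder):

```python
from diffusers import StableDiffusionXLPipeline

# local_files_only=True makes loading fail fast instead of
# reaching out to the Hub when a component is not cached locally.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/my_sdxl_model.safetensors",
    local_files_only=True,
)
```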
😖 If I am understanding correctly, `from_single_file` hard-codes the text_encoder? That is not good. You may be able to load the text_encoder separately and pass it in:

```python
# load_my_text_encoder_here is the author's placeholder for however
# you construct your own text encoder.
my_text_encoder = load_my_text_encoder_here(...)

diffuser = StableDiffusionXLPipeline.from_single_file(
    "/home/yuo/path/etc/my_sdxl_model.safetensors",
    use_safetensors=True,
    text_encoder=my_text_encoder,
)
```

(In Chinese: if the model hard-codes the text_encoder, then you can try the above. Sorry, my Chinese is not good. :'))
Cloning the entire repo takes 100 GB.
How can I make the code below use a .safetensors file instead of the diffusers format?
Let's say I have downloaded my safetensors file to path.safetensors.
How do I provide it?
The code below works, but we are cloning 100 GB instead of just a single 14 GB safetensors file. A waste of bandwidth.
Also, how can I add a LoRA checkpoint to this pipeline? A LoRA checkpoint made by the Kohya script.
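Putting the thread's answers together, a sketch that loads only the single checkpoint and then attaches a Kohya-trained LoRA (both paths are placeholders):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load only the single ~14 GB checkpoint instead of cloning the whole repo.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/path.safetensors",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Attach the Kohya-trained LoRA checkpoint.
pipe.load_lora_weights("/path/to/my_lora.safetensors")
```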