Unable to run inference with provided 🤗 example scripts #2344
Hey @jndietz, what version of `diffusers` are you using?

I'll double check, but pretty sure I'm using the latest from `main`.

@patrickvonplaten Sorry, I misspoke. I am using version 0.12.1, which is the latest as of today.

@patrickvonplaten I checked out the latest from `main` and ran it again. Now I am getting this error -- is it due to my configuration in the above script?
My LoRA was trained with the following script:

```shell
accelerate launch train_dreambooth_lora.py \
  --pretrained_model_name_or_path=$1 \
  --instance_data_dir=$2 \
  --output_dir=$3 \
  --instance_prompt="a photo of laskajavids" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --checkpointing_steps=200 \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --validation_prompt="A photo of laskajavids by the pool" \
  --validation_epochs=50 \
  --seed="0" \
  --mixed_precision="fp16" \
  --use_8bit_adam
```
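For anyone hitting the same wall: a minimal sketch of loading the LoRA attention weights that this training command saves, for inference with diffusers 0.12+. The model ID and `lora_dir` below are placeholders (not from this thread); substitute the base model and `--output_dir` you actually trained with. `enable_attention_slicing()` is one of the documented memory savers that may help on a 4GB card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder paths -- substitute the base model and --output_dir from training.
base_model = "runwayml/stable-diffusion-v1-5"
lora_dir = "path/to/lora-output"

# Load the base pipeline in fp16 to roughly halve VRAM usage.
pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)

# Load the LoRA attention processors saved by train_dreambooth_lora.py.
pipe.unet.load_attn_procs(lora_dir)

pipe = pipe.to("cuda")
pipe.enable_attention_slicing()  # trades speed for lower peak VRAM

image = pipe("A photo of laskajavids by the pool", num_inference_steps=30).images[0]
image.save("out.png")
```

This is a sketch, not a verified repro of the error above; whether 4GB is enough also depends on resolution and scheduler settings.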
@patrickvonplaten I got it - I had to upgrade as well (see https://discuss.huggingface.co/t/error-expected-scalar-type-half-but-found-float/25685). Is there some way of knowing which version of `diffusers` a given example script requires?
This is tangentially related to my issue #2326. I believe I have successfully generated a LoRA, and would like to use it in a `DiffusionPipeline` for inference. Full disclosure: I am trying to run this on a 1050 Ti with 4GB VRAM. I am able to run inference from within the automatic1111 webui, and following some of the docs on 🤗, I think this should be optimized to run in under 4GB VRAM. It seems like it is about to do something cool, then I get the following error:
I'm not sure if `triton` is related to this issue. I did make an attempt at running `pip install triton`, but got a "not found" error, even though it is on PyPI 🤷‍♂️