I've noticed that with the release master-9578fdc, and specifically with this commit, the ability to quantize CLIP on SDXL models has been removed, causing it to always load as FP32. This is still the case in the current release.
Interesting, this seems to happen only when using a .safetensors file. With a .gguf, the CLIP models are quantized as expected, which might explain why it went unnoticed before...
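If it helps narrow this down, here is a minimal sketch (independent of stable-diffusion.cpp's internals) that dumps the on-disk tensor types of the two checkpoint formats, to show what the loader starts from in each case. The file names are placeholders; the .safetensors header is parsed directly with the standard library, and the .gguf part assumes the `gguf` Python package from llama.cpp is installed.

```python
# Sketch: count the on-disk tensor dtypes of a .safetensors and a .gguf
# checkpoint. A .safetensors file typically stores F16/F32 tensors, so any
# quantization has to happen at load time, whereas a .gguf may already
# contain quantized tensors. File paths below are placeholders.
import json
import struct
from collections import Counter

def safetensors_dtypes(path):
    """Read only the JSON header of a .safetensors file and count dtypes."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # little-endian u64 header size
        header = json.loads(f.read(header_len))
    counts = Counter()
    for name, info in header.items():
        if name == "__metadata__":
            continue
        counts[info["dtype"]] += 1  # e.g. "F32", "F16", "BF16"
    return counts

def gguf_dtypes(path):
    """Count tensor types in a .gguf file (assumes `pip install gguf`)."""
    from gguf import GGUFReader
    counts = Counter()
    for t in GGUFReader(path).tensors:
        counts[t.tensor_type.name] += 1  # e.g. "F16", "Q8_0"
    return counts

if __name__ == "__main__":
    print("safetensors:", safetensors_dtypes("sdxl_model.safetensors"))
    print("gguf:       ", gguf_dtypes("sdxl_model.gguf"))
```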
Log for 9578fdc: log_9578fdc.txt
Log for b5f4932 (the previous release): log_b5f4932.txt
I'm wondering whether this change was intentional. If not, I'm surprised that no one has noticed it yet.