Dreambooth class sampling to use xformers if enabled #3312
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Hmm, given that much higher speed-ups can be gained with PT 2.0 now, should we maybe just advertise PT 2.0? cc @sayakpaul
I think the community still uses xFormers a lot. Maybe it's better to make it clear in the docs that if someone is using PyTorch 2.0, the efficient attention processor is used by default and they shouldn't have to enable xFormers for that.
WDYT?
Actually yes, I found no difference between PT 2.0 and xformers on Turing and Ampere GPUs. My PR was 'misled' by my experiments on a Pascal-gen GPU (still usable :D), for which PT 2.0 perhaps does not launch the correct kernels, but xformers helps tremendously. Also, the current DreamBooth script fails on PT 2.0 due to some CUDA errors but works with xformers, suggesting that the PT backend is still quirky - see #3325.
Yes, good idea. We could throw a warning, maybe at init, if we detect that both PT 2.0 and xformers are installed.
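A minimal sketch of what such a check could look like; this is not an existing diffusers API, and the exact placement of the warning is an assumption:

```python
import warnings

import torch
from diffusers.utils.import_utils import is_xformers_available

# Hypothetical check: PyTorch 2.0 exposes scaled_dot_product_attention, which
# diffusers uses for efficient attention by default. If xFormers is also
# installed, enabling it is likely unnecessary, so we warn the user.
if hasattr(torch.nn.functional, "scaled_dot_product_attention") and is_xformers_available():
    warnings.warn(
        "Both PyTorch 2.0 and xFormers are installed. PyTorch 2.0's efficient "
        "attention is used by default, so enabling xFormers may not be needed."
    )
```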
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Currently, training can use xformers, but the inference pass that samples class images before training cannot.
The prior-preservation class sampling of ~200 images is a major bottleneck right now (~6x the time of the actual training).
This PR allows xformers to be used for both training and inference, for both full and LoRA DreamBooth.