Trainer: auto default #16847
Conversation
This reverts commit c3dd282.
I wouldn't do this. I think the current way would be preferred, as otherwise the CUDA CI might become a bottleneck.
Those jobs are already fragile enough, so I'd prefer to run as few tests as possible there.
I added one connector test that asserts "auto" under each accelerator's availability, just as the one in #16842.
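Such a test could be sketched roughly as below. Note that `resolve_accelerator` and its availability flags are hypothetical stand-ins for illustration, not Lightning's actual connector internals:

```python
# Hypothetical sketch of an "auto" accelerator-choice test. resolve_accelerator
# is a stand-in for the connector's selection logic, not Lightning's real API.

def resolve_accelerator(cuda_available: bool, mps_available: bool) -> str:
    """Mimic what accelerator="auto" is expected to resolve to."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


def test_auto_resolution() -> None:
    # One assertion per simulated availability scenario.
    assert resolve_accelerator(cuda_available=True, mps_available=False) == "cuda"
    assert resolve_accelerator(cuda_available=False, mps_available=True) == "mps"
    assert resolve_accelerator(cuda_available=False, mps_available=False) == "cpu"


test_auto_resolution()
```

In the real test suite, the availability flags would come from mocking the accelerators' `is_available()` checks rather than being passed in directly.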
commit autogluon@dd96a19 forces a minimum version bump of pytorch-lightning to 2.0.0 ref Lightning-AI/pytorch-lightning#16847
What does this PR do?
Fixes #10606
Follow-up question: now that the default is "auto", should we have CUDA CI run all tests?

cc @justusschock @awaelchli @Borda @carmocca
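For context, with this change a bare `Trainer()` also resolves the device count automatically. A minimal sketch of the device-count side, using a hypothetical helper with a stand-in GPU count rather than Lightning's real connector:

```python
# Hypothetical sketch of devices="auto" resolution; num_cuda_devices is a
# stand-in parameter for the detected GPU count, not Lightning's real API.

def resolve_devices(accelerator: str, num_cuda_devices: int = 0) -> int:
    """With devices="auto", use every detected GPU, otherwise one CPU process."""
    if accelerator == "cuda" and num_cuda_devices > 0:
        return num_cuda_devices
    return 1


# On a machine with four detected GPUs, "auto" would use all of them.
assert resolve_devices("cuda", num_cuda_devices=4) == 4
# Without GPUs, fall back to a single CPU process.
assert resolve_devices("cpu") == 1
```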