Proposed refactor

Discussion raised from #10896.

Following the 2/n spawning-logic simplification @awaelchli has done, we can take a step further and simplify the hooks as well. This was already part of the plan in #10059, step 4: "deprecate dispatch and post_dispatch". Would it make sense to deprecate pre_dispatch/post_dispatch and run_stage, and aggregate their logic into dispatch in both the Trainer and the Accelerator?
Motivation
Pitch
Additional context
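For illustration, a rough sketch of what the aggregation could look like. The class and hook names mirror the existing interface, but the merged body and the trainer.run_stage() entry point are assumptions, not code from the repository:

```python
# Hypothetical sketch of the aggregation (assumed names, not actual Lightning code).
# A single dispatch() would own the sequence that is currently split across
# pre_dispatch(), dispatch(), run_stage(), and post_dispatch().

class TrainingTypePlugin:
    def dispatch(self, trainer) -> None:
        # formerly pre_dispatch(): e.g. wrap the model, configure precision
        ...
        # formerly dispatch() + run_stage(): run the fit/validate/test/predict loop
        trainer.run_stage()
        # formerly post_dispatch(): e.g. teardown, joining spawned processes
        ...
```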
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta @kaushikb11 @ananthsub
Alright, I looked at how/whether we can cleanly merge the setup() and pre_dispatch() methods, and everything seems to work out quite well. I propose to merge them like so, with the on_fit_start call that previously sat in the middle:
Before:
```python
# trainer.py, _run():
...
self.training_type_plugin.setup(self)     # WE WANT TO MERGE THIS ...
self.call_hook("on_fit_start")
self.training_type_plugin.pre_dispatch()  # ... AND THIS
...
```
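After, as a sketch (whether on_fit_start ends up before or after the merged setup() is an assumption on my part; the comment above only says it previously sat in the middle):

```python
# trainer.py, _run(): after the proposed merge (sketch, not the final code)
...
self.call_hook("on_fit_start")
self.training_type_plugin.setup(self)  # now also does the old pre_dispatch() work
...
```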
I had to be very careful here, because the various plugins that previously overrode pre_dispatch() or setup() had to be adjusted depending on whether they called the super() implementation, to avoid unwanted calls trickling down the inheritance chain. For this reason, the PR might be a bit hard to review for anyone who is not fully on top of these details ^^
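For illustration, the kind of override that needed this attention (a hypothetical user plugin, not one from the PR; the setup(trainer) signature follows the snippet above):

```python
from pytorch_lightning.plugins import DDPPlugin

class MyCustomPlugin(DDPPlugin):  # hypothetical plugin for illustration
    def setup(self, trainer):
        # After the merge, the parent setup() also runs the old pre_dispatch()
        # logic, so an override must decide whether it still wants that work
        # and exactly when to call into the parent.
        super().setup(trainer)
        ...  # custom logic that used to run between setup() and pre_dispatch()
```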