Fix passing `Accelerator` objects to the accelerator flag with `devices=x` & resolve training type plugin #10773
Conversation
- Fixed passing `Accelerator` objects to the accelerator flag with `devices=x` ([#10773](https://github.com/PyTorchLightning/pytorch-lightning/pull/10773))
- Resolve training type plugin when passed within `Accelerator` ([#10773](https://github.com/PyTorchLightning/pytorch-lightning/pull/10773))
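Below is a minimal sketch of the usage pattern these changelog entries describe, assuming the 1.5.x import paths for `CPUAccelerator`, `PrecisionPlugin`, and `DDPPlugin`: an `Accelerator` instance that already carries its own plugins is passed to the `accelerator` flag together with `devices=x`, and the `Trainer` resolves the contained training type plugin instead of erroring.

```python
# Minimal sketch (assumes pytorch-lightning 1.5.x import paths).
from pytorch_lightning import Trainer
from pytorch_lightning.accelerators import CPUAccelerator
from pytorch_lightning.plugins import DDPPlugin, PrecisionPlugin

# Build an Accelerator object that already holds its precision and
# training type plugins, then hand it to the `accelerator` flag.
accelerator = CPUAccelerator(
    precision_plugin=PrecisionPlugin(),
    training_type_plugin=DDPPlugin(),
)

# With this fix, `devices=2` is honoured and the DDPPlugin inside the
# accelerator is resolved as the training type plugin.
trainer = Trainer(accelerator=accelerator, devices=2)
assert isinstance(trainer.training_type_plugin, DDPPlugin)
```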
I am confused by this fix. IMO it is not worth doing, considering the refactors happening right now for 1.6. I could, however, see this being included in 1.5.x.
@awaelchli But it would be required for 1.5.x, right?
Yes, sure, it can be merged to master first and then included in 1.5.x, but the tests you add here would become mostly outdated immediately after. That's all I'm saying.
In that case, let's just change the target branch to the bugfix release branch.
LGTM!
    ]
    assert not trainer.training_type_plugin.sync_batchnorm

    trainer = Trainer(accelerator=CPUAccelerator(PrecisionPlugin(), DDPPlugin()), devices=2, sync_batchnorm=True)
Suggested change:
    trainer = Trainer(accelerator=CPUAccelerator(PrecisionPlugin(), DDPPlugin()), devices=2, sync_batchnorm=True)
    trainer = Trainer(accelerator=CPUAccelerator(PrecisionPlugin(), DDPPlugin()), devices=2)
Why remove it?
Closing this, as the issue has been resolved after the `AcceleratorConnector` refactor.
What does this PR do?
Fixes #10592
Fixes #10775
Does your PR introduce any breaking changes? If yes, please list them.
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines.
Did you have fun?
Make sure you had fun coding 🙃
cc @Borda @tchaton @akihironitta