
Support passing Accelerator objects to the accelerator flag with devices=x #10592


Closed
kaushikb11 opened this issue Nov 17, 2021 · 5 comments
Assignees: kaushikb11
Labels: accelerator · bug · priority: 0 · trainer: argument

Comments

@kaushikb11
Contributor

kaushikb11 commented Nov 17, 2021

🐛 Bug

Expected behavior

As we work toward supporting #10410, it is important to support the following behavior:

trainer = Trainer(accelerator=GPUAccelerator(CustomStrategy()), devices=4)

# should be equivalent to

trainer = Trainer(accelerator="gpu", devices=4, strategy=CustomStrategy())

Environment

  • PyTorch Lightning Version (e.g., 1.3.0):
  • PyTorch Version (e.g., 1.8)
  • Python version:
  • OS (e.g., Linux):
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • How you installed PyTorch (conda, pip, source):
  • If compiling from source, the output of torch.__config__.show():
  • Any other relevant information:

Additional context

cc @tchaton @justusschock @kaushikb11 @awaelchli @Borda

@kaushikb11 kaushikb11 added bug Something isn't working priority: 0 High priority task accelerator labels Nov 17, 2021
@kaushikb11 kaushikb11 self-assigned this Nov 17, 2021
@ananthsub
Contributor

I disagree with this proposal. #10416 is going the opposite route: the strategy should own the accelerator, not the other way around. @kaushikb11 it'd be good for us to sync up with @awaelchli and @four4fish to ensure we're all aligned.

@kaushikb11
Contributor Author

@ananthsub Noted! The example was based on the current state of the API.

The core behavior it needs to support is:

trainer = Trainer(accelerator=GPUAccelerator(), devices=4)

# should be equivalent to

trainer = Trainer(accelerator="gpu", devices=4)

Also, this will be a small step toward supporting pluggable custom accelerators (which is not possible right now). I will write an RFC for it tomorrow.

trainer = Trainer(accelerator=NewSOTAAccelerator(), devices=4)
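The idea above can be sketched with a toy resolver. This is illustrative only, not PyTorch Lightning's actual internals: the `Accelerator` base class, the registry, and `resolve_accelerator` here are hypothetical stand-ins for whatever the Trainer's connector logic would do.

```python
# Toy sketch (NOT the real PyTorch Lightning internals) of how a
# connector could accept either a registered string flag or an
# Accelerator instance for the `accelerator` argument.

class Accelerator:
    """Minimal stand-in for an accelerator base class."""
    name = "base"

class GPUAccelerator(Accelerator):
    name = "gpu"

class NewSOTAAccelerator(Accelerator):
    """A hypothetical third-party accelerator plugged in by a user."""
    name = "sota"

# Registry mapping string flags to built-in accelerator classes.
ACCELERATOR_REGISTRY = {"gpu": GPUAccelerator}

def resolve_accelerator(accelerator):
    # Strings are looked up in the registry; instances pass through
    # unchanged, which is what makes pluggable custom accelerators
    # (not present in the registry) possible.
    if isinstance(accelerator, str):
        return ACCELERATOR_REGISTRY[accelerator]()
    if isinstance(accelerator, Accelerator):
        return accelerator
    raise TypeError(f"Unsupported accelerator: {accelerator!r}")

# Both spellings resolve to a GPUAccelerator:
assert type(resolve_accelerator("gpu")) is GPUAccelerator
assert type(resolve_accelerator(GPUAccelerator())) is GPUAccelerator
# An unregistered custom accelerator works when passed as an instance:
assert isinstance(resolve_accelerator(NewSOTAAccelerator()), Accelerator)
```

The key design point is that the string path and the instance path converge on the same object type, so everything downstream (device setup, strategy wiring) is agnostic to which spelling the user chose.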

@awaelchli
Contributor

I agree this is a bug. This works today:

trainer = Trainer(accelerator=NewSOTAAccelerator(), gpus=4)

Whether gpus or devices is supplied should not matter.
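That equivalence can be sketched as a small normalization step. This is a hypothetical helper, not PyTorch Lightning's actual code: it collapses the legacy gpus flag and the newer devices flag into one count so downstream logic never needs to know which one the user passed.

```python
# Toy sketch (illustrative only) of normalizing the legacy `gpus` flag
# and the newer `devices` flag to a single device count.

def normalize_device_count(devices=None, gpus=None):
    # Reject contradictory inputs rather than silently preferring one.
    if devices is not None and gpus is not None and devices != gpus:
        raise ValueError("Pass only one of `devices` or `gpus`.")
    count = devices if devices is not None else gpus
    return count if count is not None else 1  # default: a single device

# The two spellings yield the same result:
assert normalize_device_count(devices=4) == 4
assert normalize_device_count(gpus=4) == 4
assert normalize_device_count() == 1
```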

@ananthsub
Contributor

@kaushikb11 what remains to be done here following #12030?

@kaushikb11
Contributor Author

> @kaushikb11 what remains to be done here following #12030?

Yup, this issue has been resolved.
