use auto for MPS #14639

Closed
williamFalcon opened this issue Sep 10, 2022 · 5 comments
Labels
accelerator: mps Apple Silicon GPU

Comments

@williamFalcon
Contributor

williamFalcon commented Sep 10, 2022

Why do we have this warning? Instead, we should use MPS by default...

GPU available: True (mps), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/Users/williamfalcon/opt/miniconda3/envs/c1/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:1788: UserWarning: MPS available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='mps', devices=1)`.
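
For illustration, a minimal sketch of opting in to MPS the way the warning suggests (the model and dataloader here are hypothetical placeholders, not from the report):

    import pytorch_lightning as pl

    # Explicitly select the Apple GPU; this is what the warning above asks for.
    trainer = pl.Trainer(accelerator="mps", devices=1)
    # trainer.fit(model, train_dataloader)  # placeholder objects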

also, MPS should show up as one of the accelerators.

@carmocca

cc @akihironitta @justusschock

@williamFalcon williamFalcon added the needs triage label Sep 10, 2022
@awaelchli
Contributor

@williamFalcon You are on Lightning PyTorch 1.7.5 right?

The Trainer defaults to CPU always, on all systems. This is why the warning shows. Only when setting accelerator="auto" does the selection happen automatically (and the warning shouldn't appear). Can you confirm this for your Trainer settings?
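
For illustration, a sketch of that difference on an Apple Silicon machine with a Lightning 1.7-style Trainer:

    import pytorch_lightning as pl

    # Default settings: runs on CPU and emits the "MPS available but not used" warning.
    trainer = pl.Trainer()

    # With "auto", Lightning selects the best available accelerator
    # (MPS on Apple Silicon), and the warning should not appear.
    trainer = pl.Trainer(accelerator="auto", devices="auto")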

MPS should show up as one of the accelerators.

Agreed :)

@awaelchli awaelchli added the accelerator: mps label and removed the needs triage label Sep 11, 2022
@williamFalcon
Contributor Author

williamFalcon commented Sep 11, 2022

interesting… i thought the default was auto. what about making auto the default? not CPU?

accelerator="auto"
devices="auto"
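
In Trainer terms, the suggestion amounts to the bare constructor behaving like the explicit "auto" settings; a hypothetical sketch of the proposed default, not the 1.7 behavior:

    import pytorch_lightning as pl

    # Proposed: Trainer() would be equivalent to
    trainer = pl.Trainer(accelerator="auto", devices="auto")

    # Users who want the current (CPU) behavior would opt out explicitly:
    trainer = pl.Trainer(accelerator="cpu")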

@awaelchli
Contributor

Yep, I faintly remember discussions about this around the time "auto" was introduced and the topic of Trainer 2.0 came up. It would be a massive "breaking" change, which is why we held back on exploring it, I guess. But I would consider it.

As for MPS, defaulting to it in its current state would IMO be unsafe, as PyTorch is still adding support for many torch ops on that backend. Right now, lots of PyTorch code would break on MPS. I would expect this to improve a lot in the future so that eventually one can default to it. Not an expert opinion, just my impression atm.

@carmocca
Contributor

carmocca commented Sep 11, 2022

Adrian already described the reasons why this isn't done. The open issue about defaulting to auto is #10606. I'll close this one as we would want to make this change for all accelerators together, not just MPS. Let's continue discussing auto there.

also, MPS should show up as one of the accelerators.

It does appear as a specification of GPU:
"GPU available: True (mps)"

We chose this for several reasons:

  • In the Trainer, you can pass accelerator="cuda" or accelerator="mps" to make an explicit choice, or accelerator="gpu" to choose based on your hardware. This means accelerator="gpu" keeps working when you move from an Intel to an Apple Silicon machine (see the sketch after this list).
  • Apple themselves describe MPS as a GPU type.
  • A biologist researcher using PL does not understand the difference between cuda/mps/rocm..., but they might know that a GPU makes things go fast. We wanted to avoid leaking this internal detail too much in such a visible message.
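
For illustration, a sketch of that first point with Lightning 1.7-style accelerator flags:

    import pytorch_lightning as pl

    # Explicit backend choice:
    trainer = pl.Trainer(accelerator="cuda", devices=1)  # NVIDIA GPU
    trainer = pl.Trainer(accelerator="mps", devices=1)   # Apple Silicon GPU

    # Portable choice: "gpu" resolves to CUDA or MPS depending on the machine,
    # so the same script runs on an Intel+NVIDIA box and an Apple Silicon laptop.
    trainer = pl.Trainer(accelerator="gpu", devices=1)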

@carmocca carmocca closed this as not planned Sep 11, 2022
@Borda
Member

Borda commented Nov 7, 2022

This should be considered as a change for a potential v2.0, since in any major release we are allowed to break some existing patterns...
Let's also re-evaluate whether we can introduce this default change sooner, for example in v1.9.
