🐛 Bug

To Reproduce
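A minimal sketch of the repro, assuming the usual BoringModel script in pl_examples/bug_report/bug_report_model.py; the exact Trainer arguments there may differ, but the crash only needs a Trainer that requests the TPU accelerator on a machine with neither TPUs nor torch_xla installed:

from pytorch_lightning import Trainer


def run():
    # Constructing the Trainer is enough: strategy selection happens in __init__.
    trainer = Trainer(accelerator="tpu", devices=1, max_epochs=1)


if __name__ == "__main__":
    run()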
Produces

Traceback (most recent call last):
  File "/Users/adrian/repositories/pytorch-lightning/pl_examples/bug_report/bug_report_model.py", line 68, in <module>
    run()
  File "/Users/adrian/repositories/pytorch-lightning/pl_examples/bug_report/bug_report_model.py", line 52, in run
    trainer = Trainer(
  File "/Users/adrian/repositories/pytorch-lightning/pytorch_lightning/utilities/argparse.py", line 336, in insert_env_defaults
    return fn(self, **kwargs)
  File "/Users/adrian/repositories/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 477, in __init__
    self._accelerator_connector = AcceleratorConnector(
  File "/Users/adrian/repositories/pytorch-lightning/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 194, in __init__
    self._strategy_flag = self._choose_strategy()
  File "/Users/adrian/repositories/pytorch-lightning/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 531, in _choose_strategy
    return SingleTPUStrategy(device=self._parallel_devices[0])  # type: ignore
  File "/Users/adrian/repositories/pytorch-lightning/pytorch_lightning/strategies/single_tpu.py", line 44, in __init__
    device=xm.xla_device(device),
NameError: name 'xm' is not defined
Expected behavior

Should error with

  File "/Users/adrian/repositories/pytorch-lightning/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 218, in select_accelerator_type
    raise MisconfigurationException(f"You passed `accelerator='tpu'`, but {msg}.")
pytorch_lightning.utilities.exceptions.MisconfigurationException: You passed `accelerator='tpu'`, but TPUs are not available.

like on 1.5.10.

Environment

Latest master.
Ask if you need more.

cc @kaushikb11 @rohitgr7

Yeah, this is the follow-up item "Enable accelerator.is_available() check" in #11449.
The proper fix is to call accelerator.is_available() in _init_accelerator(). I left it as a follow-up because it causes a lot of GPU test failures: in the previous accelerator_connector logic, the device availability check did not apply to GPUs, whereas calling self.accelerator.is_available() applies the check to GPUs as well, so many tests need mocks added. To avoid a massive test change in one PR, I left this as a follow-up.
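For illustration, a rough sketch of what that check could look like (the _init_accelerator hook name and the exact error message are assumptions, not the final implementation):

from pytorch_lightning.utilities.exceptions import MisconfigurationException


def _init_accelerator(self):
    # Hypothetical hook inside AcceleratorConnector: fail fast with a clear
    # MisconfigurationException instead of reaching strategy code that
    # references the missing torch_xla `xm` module.
    if not self.accelerator.is_available():
        acc_name = type(self.accelerator).__name__
        raise MisconfigurationException(
            f"{acc_name} is not available on this machine."
        )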
Is this urgent? Do you prefer a quick fix, or should we wait for "Enable accelerator.is_available() check" to fix this? @awaelchli @kaushikb11