
Unify checks #12069

Closed · wants to merge 45 commits into master from unify_checks
Commits (45)
79183b6
Remove `Trainer._strategy_type`
carmocca Feb 18, 2022
e21d607
Minor change
carmocca Feb 18, 2022
d37ff3e
Undo change
carmocca Feb 18, 2022
2c75a7b
mypy
carmocca Feb 18, 2022
e00a14b
Remove `Trainer._device_type`
carmocca Feb 18, 2022
7018049
Support gradient accumulation using Horovod's `backward_passes_per_st…
krshrimali Feb 19, 2022
6e3168d
Fix import error when running doctests for RL examples (#12010)
awaelchli Feb 19, 2022
4f491a2
Restore test after #11448 (#11986)
carmocca Feb 19, 2022
89d7d38
Add back deterministic support in accelerator_connector (#11999)
four4fish Feb 20, 2022
983af0b
first_start
justusschock Feb 21, 2022
b810bea
change further checks
justusschock Feb 23, 2022
074742b
More checks
justusschock Feb 23, 2022
5748eb6
Merge branch 'master' of github.com:PyTorchLightning/pytorch-lightnin…
justusschock Feb 23, 2022
5d0457f
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 23, 2022
e0d6e3b
fix missing type
justusschock Feb 23, 2022
f26927e
Merge branch 'unify_checks' of github.com:PyTorchLightning/pytorch-li…
justusschock Feb 23, 2022
d88b0b1
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 23, 2022
e9c4c1e
updates
justusschock Feb 26, 2022
f08702f
Merge branch 'unify_checks' of github.com:PyTorchLightning/pytorch-li…
justusschock Feb 26, 2022
b6895d2
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 26, 2022
80add63
update changelog
justusschock Feb 26, 2022
83eef49
Merge branch 'unify_checks' of github.com:PyTorchLightning/pytorch-li…
justusschock Feb 26, 2022
d4d0ddb
mypy
justusschock Feb 26, 2022
29c1b67
add missing self
justusschock Feb 26, 2022
ea69ecc
fix flake8
justusschock Feb 26, 2022
2d0b3fa
Merge branch 'master' into unify_checks
justusschock Feb 26, 2022
c3eeafa
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 26, 2022
ba34815
Undo horovod
justusschock Feb 26, 2022
7c0bf5f
Apply suggestions from code review
justusschock Feb 26, 2022
aa0b208
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Feb 26, 2022
b7965b4
Apply suggestions from code review
justusschock Mar 1, 2022
9060cb6
Merge branch 'master' into unify_checks
justusschock Mar 7, 2022
c19a512
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 7, 2022
afedb53
fix bad merge
justusschock Mar 7, 2022
18a4ee5
Merge branch 'master' into unify_checks
Borda Mar 23, 2022
ee049f2
Merge branch 'master' into unify_checks
justusschock Mar 28, 2022
fdfba34
Update dataconnector to only use m._use_amp
justusschock Mar 28, 2022
7e741c9
update changelog
justusschock Mar 28, 2022
58af81f
update docstrings to use .. deprecated::
justusschock Mar 28, 2022
2dcc7c7
mport
justusschock Mar 28, 2022
5ddb327
fix import
justusschock Mar 28, 2022
3061171
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 28, 2022
1d88442
fix sphinx
justusschock Mar 28, 2022
0265dfe
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Mar 28, 2022
0451073
finally docstrings that aren't reformatted by pre-commit
justusschock Mar 28, 2022
7 changes: 7 additions & 0 deletions CHANGELOG.md
@@ -175,6 +175,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Changed

- Internal checks for PrecisionType, StrategyType and AcceleratorType have been removed in favor of instance-checks against the respective classes ([#12069](https://github.com/PyTorchLightning/pytorch-lightning/pull/12069))


- Drop PyTorch 1.7 support ([#12191](https://github.com/PyTorchLightning/pytorch-lightning/pull/12191)), ([#12432](https://github.com/PyTorchLightning/pytorch-lightning/pull/12432))


@@ -423,6 +426,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Deprecated

- Deprecated `amp_backend` property of the `Trainer` in favor of instance-checks ([#12069](https://github.com/PyTorchLightning/pytorch-lightning/pull/12069))

- Deprecated `backend` property of `MixedPrecisionPlugin` in favor of instance-checks ([#12069](https://github.com/PyTorchLightning/pytorch-lightning/pull/12069))

- Deprecated `training_type_plugin` property in favor of `strategy` in `Trainer` and updated the references ([#11141](https://github.com/PyTorchLightning/pytorch-lightning/pull/11141))


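For context, the change replaces comparisons against internal enums (`AMPType`, `_StrategyType`, `_AcceleratorType`) with instance checks against the concrete strategy and precision-plugin classes. A minimal sketch of the before/after pattern — the trainer configuration below is only an illustrative assumption, and the default classes it resolves to may differ by version:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.plugins.precision.native_amp import NativeMixedPrecisionPlugin
from pytorch_lightning.strategies import SingleDeviceStrategy

trainer = Trainer(accelerator="cpu", devices=1)

# Old pattern (deprecated by this PR): compare enum-valued properties.
#   trainer.amp_backend == AMPType.NATIVE

# New pattern: instance-check the objects the Trainer actually configured.
print(isinstance(trainer.strategy, SingleDeviceStrategy))          # expected True for a single CPU device
print(isinstance(trainer.strategy.precision_plugin,
                 NativeMixedPrecisionPlugin))                      # expected False for default 32-bit precision
```
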
61 changes: 54 additions & 7 deletions pytorch_lightning/lite/lite.py
@@ -16,18 +16,28 @@
from contextlib import contextmanager
from functools import partial
from pathlib import Path
from typing import Any, Callable, cast, Dict, Generator, List, Optional, overload, Sequence, Tuple, Union
from typing import Any, Callable, cast, Dict, Generator, List, Optional, overload, Sequence, Tuple, Type, Union

import torch
import torch.nn as nn
from torch import Tensor
from torch.optim import Optimizer
from torch.utils.data import DataLoader, DistributedSampler, RandomSampler, SequentialSampler

from pytorch_lightning.accelerators.accelerator import Accelerator
from pytorch_lightning.accelerators import Accelerator, CPUAccelerator, GPUAccelerator, TPUAccelerator
from pytorch_lightning.lite.wrappers import _LiteDataLoader, _LiteModule, _LiteOptimizer
from pytorch_lightning.plugins import PLUGIN_INPUT
from pytorch_lightning.strategies import DeepSpeedStrategy, Strategy, TPUSpawnStrategy
from pytorch_lightning.strategies import (
DataParallelStrategy,
DDPShardedStrategy,
DDPSpawnShardedStrategy,
DDPSpawnStrategy,
DDPStrategy,
DeepSpeedStrategy,
SingleDeviceStrategy,
Strategy,
TPUSpawnStrategy,
)
from pytorch_lightning.strategies.strategy import TBroadcast
from pytorch_lightning.trainer.connectors.accelerator_connector import AcceleratorConnector
from pytorch_lightning.utilities import _AcceleratorType, _StrategyType, move_data_to_device
@@ -78,8 +88,9 @@ def __init__(
gpus: Optional[Union[List[int], str, int]] = None,
tpu_cores: Optional[Union[List[int], str, int]] = None,
) -> None:
self._check_accelerator_support(accelerator)
self._check_strategy_support(strategy)
self._check_accelerator_flag(accelerator)
self._check_strategy_flag(strategy)

gpu_ids, tpu_cores = _parse_devices(gpus=gpus, auto_select_gpus=False, tpu_cores=tpu_cores)
self._accelerator_connector = AcceleratorConnector(
num_processes=None,
@@ -103,6 +114,10 @@ def __init__(
self._strategy = self._accelerator_connector.strategy
self._accelerator = self._strategy.accelerator
self._precision_plugin = self._strategy.precision_plugin

self._check_accelerator_type(self._accelerator)
self._check_strategy_type(self._strategy)

self._models_setup: int = 0

# wrap the run method so we can inject setup logic or spawn processes for the user
@@ -442,7 +457,7 @@ def _get_distributed_sampler(dataloader: DataLoader, **kwargs: Any) -> Distribut
kwargs.setdefault("seed", int(os.getenv("PL_GLOBAL_SEED", 0)))
return DistributedSampler(dataloader.dataset, **kwargs)

def _check_accelerator_support(self, accelerator: Optional[Union[str, Accelerator]]) -> None:
def _check_accelerator_flag(self, accelerator: Optional[Union[str, Accelerator]]) -> None:
supported = [t.value.lower() for t in self._supported_device_types()] + ["auto"]
valid = accelerator is None or isinstance(accelerator, Accelerator) or accelerator in supported
if not valid:
@@ -451,7 +466,7 @@ def _check_accelerator_support(self, accelerator: Optional[Union[str, Accelerato
f" Choose one of {supported} or pass in a `Accelerator` instance."
)

def _check_strategy_support(self, strategy: Optional[Union[str, Strategy]]) -> None:
def _check_strategy_flag(self, strategy: Optional[Union[str, Strategy]]) -> None:
supported = [t.lower() for t in self._supported_strategy_types()]
valid = strategy is None or isinstance(strategy, Strategy) or strategy in supported
if not valid:
@@ -460,6 +475,26 @@ def _check_strategy_support(self, strategy: Optional[Union[str, Strategy]]) -> N
f" Choose one of {supported} or pass in a `Strategy` instance."
)

def _check_accelerator_type(self, accelerator: Accelerator) -> None:
if not isinstance(accelerator, self._supported_accelerators()):
supported_values = ["auto"] + [x.lower() for x in self._supported_device_types]
raise MisconfigurationException(
f"`accelerator={accelerator!r}` is not a valid choice for `LightningLite`."
f" Choose one of {supported_values} or pass in a `Accelerator` instance."
)

def _check_strategy_type(self, strategy: Optional[Union[str, Strategy]]) -> None:
if not isinstance(strategy, self._supported_strategies()):
valid = [t.lower() for t in self._supported_strategy_types()]
raise MisconfigurationException(
f"`strategy={strategy!r}` is not a valid choice for `LightningLite`."
f" Choose one of {valid} or pass in a `Strategy` instance."
)

@staticmethod
def _supported_accelerators() -> Tuple[Type[Accelerator], ...]:
return (CPUAccelerator, GPUAccelerator, TPUAccelerator)

@staticmethod
def _supported_device_types() -> Sequence[_AcceleratorType]:
return (
@@ -468,6 +503,18 @@ def _supported_device_types() -> Sequence[_AcceleratorType]:
_AcceleratorType.TPU,
)

@staticmethod
def _supported_strategies() -> Tuple[Type[Strategy], ...]:
return (
SingleDeviceStrategy,
DataParallelStrategy,
DDPStrategy,
DDPSpawnStrategy,
DeepSpeedStrategy,
DDPShardedStrategy,
DDPSpawnShardedStrategy,
)

@staticmethod
def _supported_strategy_types() -> Sequence[_StrategyType]:
return (
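The Lite changes add a second validation layer: `_check_accelerator_flag`/`_check_strategy_flag` validate the user-facing flags before the `AcceleratorConnector` resolves them, and `_check_accelerator_type`/`_check_strategy_type` then instance-check the resolved objects against a tuple of supported classes. A minimal, self-contained sketch of that second check — the class names and allow-list here are stand-ins, not Lightning's:

```python
from typing import Tuple, Type


class Strategy:
    """Stand-in base class for the sketch."""


class SingleDeviceStrategy(Strategy): ...
class DDPStrategy(Strategy): ...
class HorovodStrategy(Strategy): ...  # pretend this one is unsupported


SUPPORTED_STRATEGIES: Tuple[Type[Strategy], ...] = (SingleDeviceStrategy, DDPStrategy)


def check_strategy_type(strategy: Strategy) -> None:
    # isinstance accepts a tuple of classes, so one call covers the whole
    # allow-list, including subclasses of the listed strategies.
    if not isinstance(strategy, SUPPORTED_STRATEGIES):
        valid = [cls.__name__ for cls in SUPPORTED_STRATEGIES]
        raise ValueError(f"`strategy={strategy!r}` is not a valid choice. Choose one of {valid}.")


check_strategy_type(DDPStrategy())       # passes silently
# check_strategy_type(HorovodStrategy()) # would raise ValueError
```
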
7 changes: 4 additions & 3 deletions pytorch_lightning/loops/optimization/optimizer_loop.py
@@ -29,8 +29,9 @@
_extract_hiddens,
check_finite_loss,
)
from pytorch_lightning.plugins.precision.apex_amp import ApexMixedPrecisionPlugin
from pytorch_lightning.plugins.precision.native_amp import NativeMixedPrecisionPlugin
from pytorch_lightning.trainer.progress import OptimizationProgress
from pytorch_lightning.utilities import AMPType
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from pytorch_lightning.utilities.finite_checks import detect_nan_parameters
from pytorch_lightning.utilities.types import STEP_OUTPUT
@@ -353,7 +354,7 @@ def _optimizer_step(
is_lbfgs = isinstance(optimizer, torch.optim.LBFGS)

# wraps into LightningOptimizer only for running step
if self.trainer.amp_backend == AMPType.APEX:
if isinstance(self.trainer.strategy.precision_plugin, ApexMixedPrecisionPlugin):
# apex overrides .step function and need to be wrapped on each step
optimizer = LightningOptimizer._to_lightning_optimizer(optimizer, self.trainer.strategy, opt_idx)
else:
@@ -374,7 +375,7 @@
opt_idx,
train_step_and_backward_closure,
on_tpu=isinstance(self.trainer.accelerator, TPUAccelerator),
using_native_amp=(self.trainer.amp_backend == AMPType.NATIVE),
using_native_amp=isinstance(self.trainer.strategy.precision_plugin, NativeMixedPrecisionPlugin),
using_lbfgs=is_lbfgs,
)

11 changes: 9 additions & 2 deletions pytorch_lightning/plugins/precision/apex_amp.py
@@ -21,6 +21,7 @@
from pytorch_lightning.plugins.precision.mixed import MixedPrecisionPlugin
from pytorch_lightning.utilities import _APEX_AVAILABLE, AMPType
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from pytorch_lightning.utilities.rank_zero import rank_zero_deprecation
from pytorch_lightning.utilities.types import _PARAMETERS

if _APEX_AVAILABLE:
@@ -30,8 +31,6 @@
class ApexMixedPrecisionPlugin(MixedPrecisionPlugin):
"""Mixed Precision Plugin based on Nvidia/Apex (https://github.com/NVIDIA/apex)"""

backend = AMPType.APEX

def __init__(self, amp_level: str = "O2") -> None:
if not _APEX_AVAILABLE:
raise MisconfigurationException(
@@ -98,3 +97,11 @@ def state_dict(self) -> Dict[str, Any]:

def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
amp.load_state_dict(state_dict)

@property
def backend(self) -> AMPType:
rank_zero_deprecation(
"The backend property has been deprecated in v1.6 and will be removed in v1.7."
" Please switch to `isinstance(X, ApexMixedPrecisionPlugin)` check instead."
)
return AMPType.APEX
18 changes: 13 additions & 5 deletions pytorch_lightning/plugins/precision/mixed.py
@@ -11,16 +11,24 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, Union
from typing import Union

from pytorch_lightning.plugins.precision.precision_plugin import PrecisionPlugin

if TYPE_CHECKING:
from pytorch_lightning.utilities import AMPType
from pytorch_lightning.utilities import AMPType


class MixedPrecisionPlugin(PrecisionPlugin):
"""Base Class for mixed precision."""

backend: "AMPType"
@property
def backend(self) -> AMPType:
"""AMP-Backend used by this plugin.

Typically one of AMPType.NATIVE | AMPType.APEX

.. deprecated:: v1.6
This property is deprecated in 1.6 and will be removed in 1.7.
Please use instance checks against the plugin class instead.
"""

precision: Union[str, int] = "mixed"
11 changes: 9 additions & 2 deletions pytorch_lightning/plugins/precision/native_amp.py
@@ -23,6 +23,7 @@
from pytorch_lightning.plugins.precision.mixed import MixedPrecisionPlugin
from pytorch_lightning.utilities import _TORCH_GREATER_EQUAL_1_10, AMPType
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from pytorch_lightning.utilities.rank_zero import rank_zero_deprecation

if _TORCH_GREATER_EQUAL_1_10:
from torch import autocast as new_autocast
@@ -39,8 +40,6 @@ class NativeMixedPrecisionPlugin(MixedPrecisionPlugin):
scaler: An optional :class:`torch.cuda.amp.GradScaler` to use.
"""

backend = AMPType.NATIVE

def __init__(
self, precision: Union[str, int], device: str, scaler: Optional[torch.cuda.amp.GradScaler] = None
) -> None:
@@ -116,3 +115,11 @@ def state_dict(self) -> Dict[str, Any]:
def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
if self.scaler is not None:
self.scaler.load_state_dict(state_dict)

@property
def backend(self) -> AMPType:
rank_zero_deprecation(
"The backend property has been deprecated in v1.6 and will be removed in v1.7."
" Please switch to `isinstance(X, NativeMixedPrecisionPlugin)` check instead."
)
return AMPType.NATIVE
8 changes: 5 additions & 3 deletions pytorch_lightning/strategies/deepspeed.py
@@ -30,6 +30,8 @@
from pytorch_lightning.overrides.base import _LightningModuleWrapperBase
from pytorch_lightning.plugins.environments.cluster_environment import ClusterEnvironment
from pytorch_lightning.plugins.precision import PrecisionPlugin
from pytorch_lightning.plugins.precision.apex_amp import ApexMixedPrecisionPlugin
from pytorch_lightning.plugins.precision.native_amp import NativeMixedPrecisionPlugin
from pytorch_lightning.strategies.ddp import DDPStrategy
from pytorch_lightning.trainer.states import TrainerFn
from pytorch_lightning.utilities import GradClipAlgorithmType
@@ -39,7 +41,7 @@
get_default_process_group_backend_for_device,
log,
)
from pytorch_lightning.utilities.enums import AMPType, PrecisionType
from pytorch_lightning.utilities.enums import PrecisionType
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from pytorch_lightning.utilities.imports import _DEEPSPEED_AVAILABLE
from pytorch_lightning.utilities.model_helpers import is_overridden
@@ -651,7 +653,7 @@ def _auto_select_batch_size(self):

def _format_precision_config(self) -> None:
if self.precision_plugin.precision in (PrecisionType.HALF, PrecisionType.MIXED):
if "fp16" not in self.config and self.precision_plugin.amp_type == AMPType.NATIVE:
if "fp16" not in self.config and isinstance(self.precision_plugin, NativeMixedPrecisionPlugin):
Review thread on this line:

Contributor: Note, the test `test_trainer_model_hook_system_fit` failed because the type was actually `DeepSpeedPrecisionPlugin`, which also has a field `amp_type` (set to `NATIVE`). The same would be true for the apex check in the `elif` block below; both `if` statements will be skipped with the changes in this PR. Since the `DeepSpeedPrecisionPlugin` is a weird one, basically bundling native and apex together, we probably have to revert the changes here or make a more radical change.

Member (Author): DeepSpeed only supports native AMP, right?

Contributor: It appears so, yes. I too think it would be worth investigating #12323.
# FP16 is a DeepSpeed standalone AMP implementation
rank_zero_info("Enabling DeepSpeed FP16.")
self.config["fp16"] = {
Expand All @@ -662,7 +664,7 @@ def _format_precision_config(self) -> None:
"hysteresis": self.hysteresis,
"min_loss_scale": self.min_loss_scale,
}
elif "amp" not in self.config and self.precision_plugin.amp_type == AMPType.APEX:
elif "amp" not in self.config and isinstance(self.precision_plugin, ApexMixedPrecisionPlugin):
rank_zero_info("Enabling DeepSpeed APEX Implementation.")
self.config["amp"] = {"enabled": True, "opt_level": self.precision_plugin.amp_level}

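The review thread above boils down to a class-hierarchy issue: `DeepSpeedPrecisionPlugin` carries an `amp_type` field but is not a subclass of `NativeMixedPrecisionPlugin` or `ApexMixedPrecisionPlugin`, so the new `isinstance` checks never fire where the old enum comparison did. A minimal sketch with stand-in classes (the real Lightning hierarchy may differ in detail):

```python
from enum import Enum


class AMPType(str, Enum):
    NATIVE = "native"
    APEX = "apex"


class PrecisionPlugin: ...
class MixedPrecisionPlugin(PrecisionPlugin): ...
class NativeMixedPrecisionPlugin(MixedPrecisionPlugin): ...


class DeepSpeedPrecisionPlugin(PrecisionPlugin):
    # Carries the amp_type field but does not inherit from NativeMixedPrecisionPlugin.
    amp_type = AMPType.NATIVE


plugin = DeepSpeedPrecisionPlugin()
print(plugin.amp_type == AMPType.NATIVE)               # True  -> old check enabled the fp16 config
print(isinstance(plugin, NativeMixedPrecisionPlugin))  # False -> new check skips it
```
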
pytorch_lightning/trainer/connectors/accelerator_connector.py
@@ -70,14 +70,14 @@
TPUSpawnStrategy,
)
from pytorch_lightning.utilities import (
_StrategyType,
AMPType,
device_parser,
LightningEnum,
rank_zero_deprecation,
rank_zero_info,
rank_zero_warn,
)
from pytorch_lightning.utilities.enums import _StrategyType
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from pytorch_lightning.utilities.imports import _HOROVOD_AVAILABLE, _HPU_AVAILABLE, _IPU_AVAILABLE, _TPU_AVAILABLE

3 changes: 2 additions & 1 deletion pytorch_lightning/trainer/connectors/data_connector.py
@@ -23,6 +23,7 @@
import pytorch_lightning as pl
from pytorch_lightning.accelerators.ipu import IPUAccelerator
from pytorch_lightning.overrides.distributed import UnrepeatedDistributedSampler
from pytorch_lightning.plugins.precision.mixed import MixedPrecisionPlugin
from pytorch_lightning.strategies import DDPSpawnStrategy
from pytorch_lightning.trainer.states import RunningStage, TrainerFn
from pytorch_lightning.trainer.supporters import CombinedLoader, CycleIterator
@@ -165,7 +166,7 @@ def _copy_trainer_model_properties(self, model):
for m in [model, ref_model]:
m.trainer = proxy(self.trainer)
# Remove setting use_amp in v1.8
m._use_amp = self.trainer.amp_backend is not None
m._use_amp = isinstance(self.trainer.strategy.precision_plugin, MixedPrecisionPlugin)
m.precision = self.trainer.precision

def attach_dataloaders(
4 changes: 4 additions & 0 deletions pytorch_lightning/trainer/trainer.py
@@ -2179,6 +2179,10 @@ def optimizer_frequencies(self, new_freqs: List[int]) -> None:

@property
def amp_backend(self) -> Optional[AMPType]:
rank_zero_deprecation(
"amp_backend is deprecated in v1.6 and will be removed in v1.7. "
"Use `isinstance` check against the `PrecisionPlugins` directly."
)
if isinstance(self.precision_plugin, ApexMixedPrecisionPlugin):
return AMPType.APEX
if isinstance(self.precision_plugin, NativeMixedPrecisionPlugin):
22 changes: 22 additions & 0 deletions tests/deprecated_api/test_remove_1-7.py
@@ -34,6 +34,8 @@
SLURMEnvironment,
TorchElasticEnvironment,
)
from pytorch_lightning.plugins.precision.apex_amp import ApexMixedPrecisionPlugin
from pytorch_lightning.plugins.precision.native_amp import NativeMixedPrecisionPlugin
from pytorch_lightning.strategies import SingleDeviceStrategy
from tests.deprecated_api import _soft_unimport_module
from tests.helpers import BoringModel
@@ -516,3 +518,23 @@ def post_dispatch(self, trainer):

with pytest.deprecated_call(match=escape("`CustomPlugin.post_dispatch()` has been deprecated in v1.6")):
CustomPlugin(torch.device("cpu"))


def test_v1_7_0_trainer_amp_backend():
trainer = Trainer()
with pytest.deprecated_call(match="amp_backend is deprecated in v1.6 and will be removed in v1.7."):
trainer.amp_backend


def test_v1_7_0_mixed_precision_plugin_backend_native():
plugin = NativeMixedPrecisionPlugin(16, "cpu")

with pytest.deprecated_call(match="The backend property has been deprecated in v1.6 and will be removed in v1.7."):
plugin.backend


@RunIf(amp_apex=True)
def test_v1_7_0_mixed_precision_plugin_backend_apex():
plugin = ApexMixedPrecisionPlugin()
with pytest.deprecated_call(match="The backend property has been deprecated in v1.6 and will be removed in v1.7."):
plugin.backend