Commit 47def52

changelog + update docstring
1 parent bb67776 commit 47def52

File tree: 2 files changed (+5, -6 lines)

CHANGELOG.md

Lines changed: 3 additions & 0 deletions

@@ -338,6 +338,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Removed deprecated properties `DeepSpeedPlugin.cpu_offload*` in favor of `offload_optimizer`, `offload_parameters` and `pin_memory` ([#9244](https://github.com/PyTorchLightning/pytorch-lightning/pull/9244))


+- Removed `call_configure_sharded_model_hook` property from `Accelerator` and `TrainingTypePlugin` ([#9612](https://github.com/PyTorchLightning/pytorch-lightning/pull/9612))
+
+
 ### Fixed

pytorch_lightning/core/hooks.py

Lines changed: 2 additions & 6 deletions

@@ -297,12 +297,8 @@ def configure_sharded_model(self) -> None:
         where we'd like to shard the model instantly, which is useful for extremely large models which can save
         memory and initialization time.
-        The accelerator manages whether to call this hook at every given stage.
-        For sharded plugins where model parallelism is required, the hook is usually on called once
-        to initialize the sharded parameters, and not called again in the same process.
-
-        By default for accelerators/plugins that do not use model sharding techniques,
-        this hook is called during each fit/val/test/predict stages.
+        This hook is called during each of fit/val/test/predict stages in the same process, so ensure that
+        implementation of this hook is idempotent.
         """

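Per the updated docstring, `configure_sharded_model` can run once for each of the fit/validate/test/predict stages within the same process, so user overrides should be safe to call repeatedly. A minimal sketch of an idempotent override (the `LitModel` class, the layer sizes, and the lazy-creation guard are illustrative assumptions, not part of this commit):

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Created lazily in configure_sharded_model so the parameters can be
        # instantiated directly in sharded form by the plugin.
        self.layer = None

    def configure_sharded_model(self) -> None:
        # This hook may be invoked once per fit/val/test/predict stage in the
        # same process, so guard against re-creating (and re-initializing)
        # parameters that already exist.
        if self.layer is not None:
            return
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)
```

Guarding on `self.layer is not None` (or an equivalent flag) keeps repeated invocations of the hook from re-allocating or re-initializing already-constructed parameters.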