This repository was archived by the owner on Sep 28, 2022. It is now read-only.

Commit 56c9bf1

Raalsky authored and carmocca committed

Fix to_torchscript() causing false positive deprecation warnings (Lightning-AI#10470)

1 parent 8c90354 · commit 56c9bf1
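
For context on the bug: during `to_torchscript()`, attribute introspection in `torch.jit.script` can execute the deprecated `model_size` property getter, so its deprecation warning fired even though user code never read the property. A minimal repro sketch, assuming a trivial `BoringModel` (a hypothetical stand-in, not part of this diff):

```python
import warnings

import torch
from pytorch_lightning import LightningModule


class BoringModel(LightningModule):  # hypothetical minimal module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    BoringModel().to_torchscript()

# Before this commit, the recorded warnings could include the
# `LightningModule.model_size` deprecation message; after it, they do not.
print([str(w.message) for w in caught if "model_size" in str(w.message)])
```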

File tree: 2 files changed (+15, -5 lines)

  CHANGELOG.md
  pytorch_lightning/core/lightning.py

CHANGELOG.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -142,6 +142,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed `CombinedLoader` and `max_size_cycle` didn't receive a `DistributedSampler` ([#10374](https://github.com/PyTorchLightning/pytorch-lightning/issues/10374))
 
 
+- Fixed `to_torchscript()` causing false positive deprecation warnings ([#10470](https://github.com/PyTorchLightning/pytorch-lightning/issues/10470))
+
+
 - Fixed `isinstance` not working with `init_meta_context`, materialized model not being moved to the device ([#10493](https://github.com/PyTorchLightning/metrics/pull/10493))
 
 
```
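
The new changelog entry lends itself to a small regression test; a hedged sketch using pytest's built-in `recwarn` fixture (the model and test names are illustrative, not Lightning's actual test suite):

```python
import torch
from pytorch_lightning import LightningModule


class BoringModel(LightningModule):  # hypothetical minimal module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)


def test_to_torchscript_no_deprecation_warning(recwarn):
    # `recwarn` records every warning raised while the test body runs.
    BoringModel().to_torchscript()
    assert not [w for w in recwarn if "model_size" in str(w.message)]
```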

pytorch_lightning/core/lightning.py

Lines changed: 12 additions & 5 deletions

```diff
@@ -115,6 +115,8 @@ def __init__(self, *args: Any, **kwargs: Any) -> None:
         self._param_requires_grad_state = {}
         self._metric_attributes: Optional[Dict[int, str]] = None
         self._should_prevent_trainer_and_dataloaders_deepcopy: bool = False
+        # TODO: remove after the 1.6 release
+        self._running_torchscript = False
 
         self._register_sharded_tensor_state_dict_hooks_if_available()
 
@@ -1893,6 +1895,8 @@ def to_torchscript(
         """
         mode = self.training
 
+        self._running_torchscript = True
+
         if method == "script":
             torchscript_module = torch.jit.script(self.eval(), **kwargs)
         elif method == "trace":
@@ -1918,6 +1922,8 @@ def to_torchscript(
             with fs.open(file_path, "wb") as f:
                 torch.jit.save(torchscript_module, f)
 
+        self._running_torchscript = False
+
         return torchscript_module
 
     @property
@@ -1927,11 +1933,12 @@ def model_size(self) -> float:
         Note:
             This property will not return correct value for Deepspeed (stage 3) and fully-sharded training.
         """
-        rank_zero_deprecation(
-            "The `LightningModule.model_size` property was deprecated in v1.5 and will be removed in v1.7."
-            " Please use the `pytorch_lightning.utilities.memory.get_model_size_mb`.",
-            stacklevel=5,
-        )
+        if not self._running_torchscript:  # remove with the deprecation removal
+            rank_zero_deprecation(
+                "The `LightningModule.model_size` property was deprecated in v1.5 and will be removed in v1.7."
+                " Please use the `pytorch_lightning.utilities.memory.get_model_size_mb`.",
+                stacklevel=5,
+            )
         return get_model_size_mb(self)
 
     def add_to_queue(self, queue: torch.multiprocessing.SimpleQueue) -> None:
```
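
Distilled, the fix is a guard-flag pattern: set an internal flag for the duration of an internal operation, and have the deprecated accessor stay quiet while the flag is set. A self-contained sketch of the pattern (class names and warning text are illustrative, not Lightning's code):

```python
import warnings


class Model:
    def __init__(self) -> None:
        # Internal guard flag, mirroring `_running_torchscript` in the commit.
        self._running_export = False

    @property
    def size_mb(self) -> float:
        # Warn only when user code accesses the property, not when an
        # internal export routine touches it while introspecting attributes.
        if not self._running_export:
            warnings.warn("`size_mb` is deprecated", DeprecationWarning, stacklevel=2)
        return 1.0

    def export(self) -> str:
        self._running_export = True
        # Attribute introspection here would fire the deprecation warning
        # if the guard flag were not set.
        _ = self.size_mb
        self._running_export = False
        return "exported"


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    Model().export()

assert not caught  # no false positive from the internal access
```

One design note: the commit clears the flag with a plain assignment rather than a try/finally, so an exception raised mid-export would leave it set; since the flag only silences a deprecation warning and is itself slated for removal after 1.6 (per the TODO), the simpler form is a reasonable trade-off.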
