Commit 095a1c2

awaelchli and kaushikb11 committed

Fix sanity check for RichProgressBar (#10913)

Co-authored-by: Kaushik B <[email protected]>

1 parent d45ab97 commit 095a1c2

File tree

3 files changed: +25 -15 lines changed

CHANGELOG.md

Lines changed: 2 additions & 14 deletions

@@ -4,29 +4,17 @@ All notable changes to this project will be documented in this file.
 
 The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
-## [1.5.6] - 2021-12-14
+## [1.5.6] - 2021-12-15
 
 ### Fixed
 
 - Fixed a bug where the DeepSpeedPlugin arguments `cpu_checkpointing` and `contiguous_memory_optimization` were not being forwarded to deepspeed correctly ([#10874](https://github.com/PyTorchLightning/pytorch-lightning/issues/10874))
-
-
 - Fixed an issue with `NeptuneLogger` causing checkpoints to be uploaded with a duplicated file extension ([#11015](https://github.com/PyTorchLightning/pytorch-lightning/issues/11015))
-=======
-
-
 - Fixed support for logging within callbacks returned from `LightningModule` ([#10991](https://github.com/PyTorchLightning/pytorch-lightning/pull/10991))
-
-
+- Fixed running sanity check with `RichProgressBar` ([#10913](https://github.com/PyTorchLightning/pytorch-lightning/pull/10913))
 - Fixed support for `CombinedLoader` while checking for warning raised with eval dataloaders ([#10994](https://github.com/PyTorchLightning/pytorch-lightning/pull/10994))
 
 
-
-
-
-
-
-
 ## [1.5.5] - 2021-12-07
 
 ### Fixed

pytorch_lightning/callbacks/progress/rich_progress.py

Lines changed: 2 additions & 1 deletion

@@ -328,7 +328,8 @@ def on_sanity_check_start(self, trainer, pl_module):
 
     def on_sanity_check_end(self, trainer, pl_module):
         super().on_sanity_check_end(trainer, pl_module)
-        self._update(self.val_sanity_progress_bar_id, visible=False)
+        if self.progress is not None:
+            self.progress.update(self.val_sanity_progress_bar_id, advance=0, visible=False)
 
     def on_train_epoch_start(self, trainer, pl_module):
         super().on_train_epoch_start(trainer, pl_module)
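The guarded call above relies on rich's `Progress.update`, which can toggle a task's visibility without advancing it. A minimal standalone sketch of that behavior (plain `rich`, not PyTorch Lightning code; the task label is illustrative):

```python
from rich.progress import Progress

# Create a progress display and a task standing in for the sanity-check bar.
progress = Progress()
task_id = progress.add_task("Validation sanity check", total=2)
progress.update(task_id, advance=2)

# Hide the bar without changing its progress, mirroring the commit's
# `self.progress.update(..., advance=0, visible=False)` call.
progress.update(task_id, advance=0, visible=False)

task = progress.tasks[task_id]
assert task.completed == 2
assert task.visible is False
```

Because `update` only mutates the task's state, the bar's completed count survives the hide, which is exactly what the new test checks.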

tests/callbacks/test_rich_progress_bar.py

Lines changed: 21 additions & 0 deletions

@@ -180,3 +180,24 @@ def test_rich_progress_bar_leave(tmpdir, leave, reset_call_count):
     )
     trainer.fit(model)
     assert mock_progress_reset.call_count == reset_call_count
+
+
+@RunIf(rich=True)
+@pytest.mark.parametrize("limit_val_batches", (1, 5))
+def test_rich_progress_bar_num_sanity_val_steps(tmpdir, limit_val_batches: int):
+    model = BoringModel()
+
+    progress_bar = RichProgressBar()
+    num_sanity_val_steps = 3
+
+    trainer = Trainer(
+        default_root_dir=tmpdir,
+        num_sanity_val_steps=num_sanity_val_steps,
+        limit_train_batches=1,
+        limit_val_batches=limit_val_batches,
+        max_epochs=1,
+        callbacks=progress_bar,
+    )
+
+    trainer.fit(model)
+    assert progress_bar.progress.tasks[0].completed == min(num_sanity_val_steps, limit_val_batches)
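The final assertion encodes the expected batch count: the sanity check runs `min(num_sanity_val_steps, limit_val_batches)` validation batches, whichever limit is hit first. A quick standalone check of that arithmetic (the helper name is hypothetical, not a Lightning API):

```python
def expected_sanity_batches(num_sanity_val_steps: int, limit_val_batches: int) -> int:
    # The sanity check stops at whichever limit is reached first.
    return min(num_sanity_val_steps, limit_val_batches)

# With num_sanity_val_steps=3: one batch when limit_val_batches=1,
# all three when limit_val_batches=5.
assert expected_sanity_batches(3, 1) == 1
assert expected_sanity_batches(3, 5) == 3
```

This is why the test parametrizes `limit_val_batches` with values on both sides of `num_sanity_val_steps=3`.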
