
Commit 2c44c2b

update docs

1 parent 4f48372 commit 2c44c2b

File tree

3 files changed: +5 −5 lines


docs/source/common/debugging.rst

Lines changed: 2 additions & 2 deletions
@@ -79,10 +79,10 @@ argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)
 
 .. testcode::
 
-    # use only 1% of training data (and use the same training dataloader (with shuffle off) in val and test)
+    # use only 1% of training data (and turn off validation)
     trainer = Trainer(overfit_batches=0.01)
 
-    # similar, but with a fixed 10 batches no matter the size of the dataset
+    # similar, but with a fixed 10 batches
     trainer = Trainer(overfit_batches=10)
 
 With this flag, the train, val, and test sets will all be the same train set. We will also replace the sampler
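The semantics of ``overfit_batches`` in the snippet above (a float is a fraction of the training batches, an integer is an absolute batch count) can be sketched in plain Python. ``resolve_overfit_batches`` is a hypothetical helper for illustration only, not Lightning's actual implementation:

```python
def resolve_overfit_batches(overfit_batches, total_train_batches):
    """Hypothetical helper mirroring the documented semantics:
    a float in (0, 1] is a fraction of the training batches,
    an int >= 1 is an absolute batch count."""
    if isinstance(overfit_batches, float):
        # e.g. overfit_batches=0.01 with 1000 batches -> 10 batches
        return int(total_train_batches * overfit_batches)
    # e.g. overfit_batches=10 -> exactly 10 batches (capped at the dataset size)
    return min(overfit_batches, total_train_batches)

print(resolve_overfit_batches(0.01, 1000))  # fraction of the training batches
print(resolve_overfit_batches(10, 1000))    # fixed batch count
```

Note that with an integer value the number of batches used stays fixed regardless of dataset size, which is what the second ``Trainer(overfit_batches=10)`` example relies on.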

docs/source/common/trainer.rst

Lines changed: 2 additions & 2 deletions
@@ -1074,7 +1074,7 @@ overfit_batches
 
 |
 
-Uses this much data of the training set. If nonzero, will use the same training set for validation and testing.
+Uses this much data of the training set. If nonzero, will turn off validation.
 If the training dataloaders have `shuffle=True`, Lightning will automatically disable it.
 
 Useful for quickly debugging or trying to overfit on purpose.
@@ -1084,7 +1084,7 @@ Useful for quickly debugging or trying to overfit on purpose.
     # default used by the Trainer
     trainer = Trainer(overfit_batches=0.0)
 
-    # use only 1% of the train set (and use the train set for val and test)
+    # use only 1% of the train set
     trainer = Trainer(overfit_batches=0.01)
 
     # overfit on 10 of the same batches

docs/source/guides/speed.rst

Lines changed: 1 addition & 1 deletion
@@ -336,7 +336,7 @@ If you don't want to check 100% of the training/validation/test set set these fl
 
 If you also pass ``shuffle=True`` to the dataloader, a different random subset of your dataset will be used for each epoch; otherwise the same subset will be used for all epochs.
 
-.. note:: ``limit_train_batches``, ``limit_val_batches`` and ``limit_test_batches`` will be overwritten by ``overfit_batches`` if ``overfit_batches`` > 0. ``limit_val_batches`` will be ignored if ``fast_dev_run=True``.
+.. note:: ``limit_train_batches`` will be overwritten by ``overfit_batches`` if ``overfit_batches > 0``, which also turns off validation.
 
 .. note:: If you set ``limit_val_batches=0``, validation will be disabled.
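The precedence described in these notes can be sketched as a small pure-Python check. ``effective_train_limit`` and ``validation_enabled`` are illustrative helpers under the assumptions stated in the docs, not part of Lightning's API:

```python
def effective_train_limit(limit_train_batches, overfit_batches):
    """Illustrative: overfit_batches (when > 0) overrides
    limit_train_batches, per the note in speed.rst."""
    if overfit_batches > 0:
        return overfit_batches
    return limit_train_batches

def validation_enabled(limit_val_batches, overfit_batches):
    """Illustrative: validation is off when limit_val_batches == 0,
    or when overfit_batches is active (per this commit's docs)."""
    return limit_val_batches != 0 and overfit_batches == 0

print(effective_train_limit(0.5, 10))  # overfit_batches takes precedence
print(validation_enabled(1.0, 0))      # normal run: validation stays on
print(validation_enabled(0, 0))        # limit_val_batches=0 disables validation
```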
