
[2/4] Add DeviceStatsMonitor callback #2


Closed · wants to merge 55 commits
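A minimal usage sketch of the callback this PR adds — the constructor arguments and import path are assumptions based on the changelog entry below, not taken verbatim from this diff:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import DeviceStatsMonitor

# Logs device stats (e.g. GPU/TPU utilization and memory) through the
# accelerator's `get_device_stats`, added in [1/4] (#9586).
trainer = Trainer(callbacks=[DeviceStatsMonitor()], gpus=1)
```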
55 commits
fe062b2
add device stats callback
daniellepintz Sep 17, 2021
351dc5a
wip
daniellepintz Sep 18, 2021
88ada79
Merge branch 'get_device_stats' of github.com:daniellepintz/pytorch-l…
daniellepintz Sep 18, 2021
f5608e9
Document exceptions in accelerators (#9558)
akihironitta Sep 18, 2021
a0e4bb9
Merge branch 'get_device_stats' of github.com:daniellepintz/pytorch-l…
daniellepintz Sep 18, 2021
b0c014e
Merge branch 'get_device_stats' of github.com:daniellepintz/pytorch-l…
daniellepintz Sep 18, 2021
290398f
Deprecate TrainerProperties Mixin and move property definitions direc…
daniellepintz Sep 18, 2021
c66d30a
Fix typo in `LightningModule.configure_callbacks()` (#9591)
hankyul2 Sep 18, 2021
381343a
Put back initialization of properties in trainer (#9594)
daniellepintz Sep 18, 2021
2e17b47
Fix missing url (#9602)
LaserBit Sep 20, 2021
cc77367
Deprecate `LightningLoggerBase.close` (#9422)
jjenniferdai Sep 20, 2021
cd8cb60
[CLI] Fix registry decorator return value (#9587)
carmocca Sep 21, 2021
61b4e33
[CLI] Avoid warning when `configure_optimizers` will not be overridde…
carmocca Sep 21, 2021
74c3536
Prune `DeprecatedTrainerAttributes` (#9598)
daniellepintz Sep 21, 2021
d022f6f
Update versions after recent PyTorch releases (#9623)
carmocca Sep 21, 2021
73e53e5
Fixing order of operations bug in Qnet (#9621)
Benjamin-Etheredge Sep 21, 2021
a71be50
Fix gradient accumulation for `ShardedDataParallel` (#9122)
ananthsub Sep 22, 2021
e64f358
Fix broken links to PyTorch Lightning Bolts (#9634)
danielykim Sep 22, 2021
8f1c855
Fix `ResultCollection._get_cache` with multielement tensors (#9582)
carmocca Sep 22, 2021
3f7872d
[CLI] Shorthand notation to instantiate models (#9588)
carmocca Sep 22, 2021
fd2e778
Improvements for rich progress (#9579)
Sep 22, 2021
b98ce0a
revert #9125 / fix back-compatibility with saving hparams as a whole …
awaelchli Sep 22, 2021
37469cd
fix modify _DDPSinkBackward view inplace error for pytorch nightly 1.…
four4fish Sep 22, 2021
4bebf82
update changelog after 1.4.8 release (#9650)
awaelchli Sep 22, 2021
eb6aa7a
[CLI] Add option to enable/disable config save to preserve multiple f…
mauvilsa Sep 23, 2021
bffc534
Remove `InternalDebugger.track_lr_schedulers_update` (#9653)
carmocca Sep 23, 2021
86dd318
Reset metrics before each task starts (#9410)
rohitgr7 Sep 23, 2021
491e4a2
Deprecate `progress_bar_refresh_rate` from Trainer constructor (#9616)
daniellepintz Sep 23, 2021
87b11fb
add legacy load utility (#9166)
awaelchli Sep 23, 2021
ca5459e
Use `completed` over `processed` in `reset_on_restart` (#9656)
carmocca Sep 23, 2021
fd4f2f6
Remove `InternalDebugger.track_event` (#9654)
carmocca Sep 23, 2021
8dcba38
Add `is_last_batch` to progress tracking (#9657)
carmocca Sep 23, 2021
568a1e0
Disallow invalid seed string values (#8787)
stancld Sep 23, 2021
2b2537d
Use `searchsorted` over `argmax` (#9670)
carmocca Sep 23, 2021
41e3be1
Remove `call_configure_sharded_model` lifecycle property (#9612)
ananthsub Sep 24, 2021
c2e3ec1
Add torch v1.11.0 to the list of versions in adjust_versions.py (#9679)
daniellepintz Sep 24, 2021
714331b
Report leaking environment variables in tests (#5872)
awaelchli Sep 24, 2021
d67aff7
remove `InternalDebugger.track_load_dataloader_call` (#9675)
awaelchli Sep 24, 2021
ce00053
Support skipping to validation (#9681)
carmocca Sep 24, 2021
9148a13
Enable DataLoader state restoration for the evaluation loop (#9563)
tchaton Sep 24, 2021
8fcdcb5
Fix `accumulate_grad_batches` on init (#9652)
rohitgr7 Sep 24, 2021
d02fc2b
Rename `reset_on_epoch` to `reset_on_run` (#9658)
carmocca Sep 25, 2021
b3a5c7f
Add `enable_progress_bar` to Trainer constructor (#9664)
daniellepintz Sep 25, 2021
5395ceb
move get_active_optimizers to utilities (#9581)
awaelchli Sep 25, 2021
a3def9d
Use a unique filename to save temp ckpt in tuner (#9682)
rohitgr7 Sep 25, 2021
444b21d
Optimize non-empty directory warning check in model checkpoint callba…
jjenniferdai Sep 25, 2021
ddf6967
Deprecate LightningDistributed and keep logic in ddp/ddpSpawn directl…
four4fish Sep 25, 2021
a4bc0ac
Update warnings in `TrainingTricksConnector` (#9595)
rohitgr7 Sep 25, 2021
36b9ff2
Deprecate `stochastic_weight_avg` from the `Trainer` constructor (#8989)
ananthsub Sep 26, 2021
83d83ab
Fix `lr_find` to generate same results on multiple calls (#9704)
rohitgr7 Sep 26, 2021
ab06987
[1/4] Add get_device_stats to accelerator interface (#9586)
daniellepintz Sep 27, 2021
e83a362
Merge branch 'master' of https://github.com/PyTorchLightning/pytorch-…
daniellepintz Sep 27, 2021
07dd196
small fix
daniellepintz Sep 27, 2021
4124f15
small fix
daniellepintz Sep 27, 2021
4b81dd9
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Sep 27, 2021
89 changes: 74 additions & 15 deletions CHANGELOG.md
@@ -24,9 +24,12 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Progress tracking
* Integrate `TrainingEpochLoop.total_batch_idx` ([#8598](https://github.com/PyTorchLightning/pytorch-lightning/pull/8598))
* Add `BatchProgress` and integrate `TrainingEpochLoop.is_last_batch` ([#9657](https://github.com/PyTorchLightning/pytorch-lightning/pull/9657))
* Avoid optional `Tracker` attributes ([#9320](https://github.com/PyTorchLightning/pytorch-lightning/pull/9320))
* Reset `current` progress counters when restarting an epoch loop that had already finished ([#9371](https://github.com/PyTorchLightning/pytorch-lightning/pull/9371))
* Call `reset_on_restart` in the loop's `reset` hook instead of when loading a checkpoint ([#9561](https://github.com/PyTorchLightning/pytorch-lightning/pull/9561))
* Use `completed` over `processed` in `reset_on_restart` ([#9656](https://github.com/PyTorchLightning/pytorch-lightning/pull/9656))
* Rename `reset_on_epoch` to `reset_on_run` ([#9658](https://github.com/PyTorchLightning/pytorch-lightning/pull/9658))


- Added `batch_size` and `rank_zero_only` arguments for `log_dict` to match `log` ([#8628](https://github.com/PyTorchLightning/pytorch-lightning/pull/8628))
@@ -70,7 +73,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
* Added partial support for global random state fault-tolerance in map-style datasets ([#8950](https://github.com/PyTorchLightning/pytorch-lightning/pull/8950))
* Converted state to tuple explicitly when setting Python random state ([#9401](https://github.com/PyTorchLightning/pytorch-lightning/pull/9401))
* Added support for restarting an optimizer loop (multiple optimizers) ([#9537](https://github.com/PyTorchLightning/pytorch-lightning/pull/9537))
* Added support for restarting within Evaluation Loop ([#9563](https://github.com/PyTorchLightning/pytorch-lightning/pull/9563))
* Added mechanism to detect a signal has been sent so the Trainer can gracefully exit ([#9566](https://github.com/PyTorchLightning/pytorch-lightning/pull/9566))
* Support skipping to validation during fitting ([#9681](https://github.com/PyTorchLightning/pytorch-lightning/pull/9681))


- Checkpoint saving & loading extensibility:
@@ -142,12 +147,24 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Added `PL_RECONCILE_PROCESS` environment variable to enable process reconciliation regardless of cluster environment settings ([#9389](https://github.com/PyTorchLightning/pytorch-lightning/pull/9389))


- Added `get_device_stats` to Accelerator interface and implement it for GPU and TPU ([#9586](https://github.com/PyTorchLightning/pytorch-lightning/pull/9586))
- Added `get_device_stats` to the Accelerator Interface and added its implementation for GPU and TPU ([#9586](https://github.com/PyTorchLightning/pytorch-lightning/pull/9586))


- Added `multifile` option to `LightningCLI` to enable/disable config save to preserve multiple files structure ([#9073](https://github.com/PyTorchLightning/pytorch-lightning/pull/9073))


- Added `RichModelSummary` callback ([#9546](https://github.com/PyTorchLightning/pytorch-lightning/pull/9546))


- Added `DeviceStatsMonitor` callback ([#](https://github.com/PyTorchLightning/pytorch-lightning/pull/))


- Added `enable_progress_bar` to Trainer constructor ([#9664](https://github.com/PyTorchLightning/pytorch-lightning/pull/9664))


- Added `pl_legacy_patch` load utility for loading old checkpoints that have pickled legacy Lightning attributes ([#9166](https://github.com/PyTorchLightning/pytorch-lightning/pull/9166))


### Changed

- `pytorch_lightning.loggers.neptune.NeptuneLogger` is now consistent with new [neptune-client](https://github.com/neptune-ai/neptune-client) API ([#6867](https://github.com/PyTorchLightning/pytorch-lightning/pull/6867)).
@@ -192,8 +209,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Improve coverage of `self.log`-ing in any `LightningModule` or `Callback` hook ([#8498](https://github.com/PyTorchLightning/pytorch-lightning/pull/8498))


- Removed restrictions in the trainer that loggers can only log from rank 0. Existing logger behavior has not changed. ([#8608]
(https://github.com/PyTorchLightning/pytorch-lightning/pull/8608))
- Removed restrictions in the trainer that loggers can only log from rank 0. Existing logger behavior has not changed. ([#8608](https://github.com/PyTorchLightning/pytorch-lightning/pull/8608))


- `Trainer.request_dataloader` now takes a `RunningStage` enum instance ([#8858](https://github.com/PyTorchLightning/pytorch-lightning/pull/8858))
@@ -208,9 +224,21 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Executing the `optimizer_closure` is now required when overriding the `optimizer_step` hook ([#9360](https://github.com/PyTorchLightning/pytorch-lightning/pull/9360))


- Removed `TrainerProperties` mixin and moved property definitions directly into `Trainer` ([#9495](https://github.com/PyTorchLightning/pytorch-lightning/pull/9495))


- Changed logging of `LightningModule` and `LightningDataModule` hyperparameters to raise an exception only if there are colliding keys with different values ([#9496](https://github.com/PyTorchLightning/pytorch-lightning/pull/9496))


- Reset metrics before each task starts ([#9410](https://github.com/PyTorchLightning/pytorch-lightning/pull/9410))


- `seed_everything` now fails when an invalid seed value is passed instead of selecting a random seed ([#8787](https://github.com/PyTorchLightning/pytorch-lightning/pull/8787))


- Use a unique filename to save temp ckpt in tuner ([#9682](https://github.com/PyTorchLightning/pytorch-lightning/pull/9682))


### Deprecated

- Deprecated `LightningModule.summarize()` in favor of `pytorch_lightning.utilities.model_summary.summarize()`
@@ -237,9 +265,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Deprecated `on_{train/val/test/predict}_dataloader()` from `LightningModule` and `LightningDataModule` [#9098](https://github.com/PyTorchLightning/pytorch-lightning/pull/9098)


- Updated deprecation of `argparse_utils.py` from removal in 1.4 to 2.0 ([#9162](https://github.com/PyTorchLightning/pytorch-lightning/pull/9162))


- Deprecated `on_keyboard_interrupt` callback hook in favor of new `on_exception` hook ([#9260](https://github.com/PyTorchLightning/pytorch-lightning/pull/9260))


@@ -249,6 +274,17 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Deprecated passing `flush_logs_every_n_steps` as a Trainer argument, instead pass it to the logger init if supported ([#9366](https://github.com/PyTorchLightning/pytorch-lightning/pull/9366))


- Deprecated `LightningLoggerBase.close`, `LoggerCollection.close` in favor of `LightningLoggerBase.finalize`, `LoggerCollection.finalize` ([#9422](https://github.com/PyTorchLightning/pytorch-lightning/pull/9422))


- Deprecated passing `progress_bar_refresh_rate` to the `Trainer` constructor in favor of adding the `ProgressBar` callback with `refresh_rate` directly to the list of callbacks, or passing `enable_progress_bar=False` to disable the progress bar ([#9616](https://github.com/PyTorchLightning/pytorch-lightning/pull/9616))


- Deprecated `LightningDistributed` and moved the broadcast logic to `DDPPlugin` and `DDPSpawnPlugin` directly ([#9691](https://github.com/PyTorchLightning/pytorch-lightning/pull/9691))


- Deprecated passing `stochastic_weight_avg` from the `Trainer` constructor in favor of adding the `StochasticWeightAveraging` callback directly to the list of callbacks ([#8989](https://github.com/PyTorchLightning/pytorch-lightning/pull/8989))


### Removed

@@ -315,6 +351,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Removed deprecated `profiled_functions` argument from `PyTorchProfiler` ([#9178](https://github.com/PyTorchLightning/pytorch-lightning/pull/9178))


- Removed deprecated `pytorch_lightning.utilities.argparse_utils` module ([#9166](https://github.com/PyTorchLightning/pytorch-lightning/pull/9166))


- Removed deprecated property `Trainer.running_sanity_check` in favor of `Trainer.sanity_checking` ([#9209](https://github.com/PyTorchLightning/pytorch-lightning/pull/9209))


@@ -333,6 +372,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Removed deprecated properties `DeepSpeedPlugin.cpu_offload*` in favor of `offload_optimizer`, `offload_parameters` and `pin_memory` ([#9244](https://github.com/PyTorchLightning/pytorch-lightning/pull/9244))


- Removed `call_configure_sharded_model_hook` property from `Accelerator` and `TrainingTypePlugin` ([#9612](https://github.com/PyTorchLightning/pytorch-lightning/pull/9612))


### Fixed


@@ -357,7 +399,33 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed `BasePredictionWriter` not returning the batch_indices in a non-distributed setting ([#9432](https://github.com/PyTorchLightning/pytorch-lightning/pull/9432))


- Fixed the check on logged torchmetrics whose `compute()` output is a multielement tensor ([#9582](https://github.com/PyTorchLightning/pytorch-lightning/pull/9582))


- Fixed gradient accumulation for `DDPShardedPlugin` ([#9122](https://github.com/PyTorchLightning/pytorch-lightning/pull/9122))


- Fixed missing deepspeed distributed call ([#9540](https://github.com/PyTorchLightning/pytorch-lightning/pull/9540))


- Fixed wrapping issue: avoid wrapping LightningModule with data-parallel modules when not fitting in `DDPPlugin`, `DDPSpawnPlugin`, `DDPShardedPlugin`, `DDPSpawnShardedPlugin` ([#9096](https://github.com/PyTorchLightning/pytorch-lightning/pull/9096))


- Fixed `trainer.accumulate_grad_batches` to be an int on init. Default value for it is now `None` inside Trainer ([#9652](https://github.com/PyTorchLightning/pytorch-lightning/pull/9652))


- Fixed `broadcast` in `DDPPlugin` and `DDPSpawnPlugin` to respect the `src` input ([#9691](https://github.com/PyTorchLightning/pytorch-lightning/pull/9691))


- Fixed `lr_find` to generate same results on multiple calls ([#9704](https://github.com/PyTorchLightning/pytorch-lightning/pull/9704))


## [1.4.8] - 2021-09-22

- Fixed error reporting in DDP process reconciliation when processes are launched by an external agent ([#9389](https://github.com/PyTorchLightning/pytorch-lightning/pull/9389))
- Added PL_RECONCILE_PROCESS environment variable to enable process reconciliation regardless of cluster environment settings ([#9389](https://github.com/PyTorchLightning/pytorch-lightning/pull/9389))
- Fixed `add_argparse_args` raising `TypeError` when args are typed as `typing.Generic` in Python 3.6 ([#9554](https://github.com/PyTorchLightning/pytorch-lightning/pull/9554))
- Fixed back-compatibility for saving hyperparameters from a single container and inferring its argument name by reverting [#9125](https://github.com/PyTorchLightning/pytorch-lightning/pull/9125) ([#9642](https://github.com/PyTorchLightning/pytorch-lightning/pull/9642))


## [1.4.7] - 2021-09-14
@@ -389,12 +457,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed signature of `Timer.on_train_epoch_end` and `StochasticWeightAveraging.on_train_epoch_end` to prevent unwanted deprecation warnings ([#9347](https://github.com/PyTorchLightning/pytorch-lightning/pull/9347))


- Fixed error reporting in DDP process reconciliation when processes are launched by an external agent ([#9389](https://github.com/PyTorchLightning/pytorch-lightning/pull/9389))


- Fixed missing deepspeed distributed call ([#9540](https://github.com/PyTorchLightning/pytorch-lightning/pull/9540))


## [1.4.5] - 2021-08-31

- Fixed reduction using `self.log(sync_dict=True, reduce_fx={mean,max})` ([#9142](https://github.com/PyTorchLightning/pytorch-lightning/pull/9142))
@@ -409,9 +471,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Fixed a bug causing logging with `log_gpu_memory='min_max'` not working ([#9013](https://github.com/PyTorchLightning/pytorch-lightning/pull/9013))


- Fixed wrapping issue: avoid wrapping LightningModule with data-parallel modules when not fitting in `DDPPlugin`, `DDPSpawnPlugin`, `DDPShardedPlugin`, `DDPSpawnShardedPlugin` ([#9096](https://github.com/PyTorchLightning/pytorch-lightning/pull/9096))


## [1.4.3] - 2021-08-17

- Fixed plateau scheduler stepping on incomplete epoch ([#8861](https://github.com/PyTorchLightning/pytorch-lightning/pull/8861))
2 changes: 1 addition & 1 deletion benchmarks/test_basic_parity.py
@@ -157,7 +157,7 @@ def lightning_loop(cls_model, idx, device_type: str = "cuda", num_epochs=10):
    trainer = Trainer(
        # as the first run is skipped, no need to run it long
        max_epochs=num_epochs if idx > 0 else 1,
        progress_bar_refresh_rate=0,
        enable_progress_bar=False,
        weights_summary=None,
        gpus=1 if device_type == "cuda" else 0,
        checkpoint_callback=False,
59 changes: 48 additions & 11 deletions docs/source/common/lightning_cli.rst
@@ -415,22 +415,59 @@ as described above:

$ python ... --trainer.callbacks=CustomCallback ...

This callback will be included in the generated config:
.. note::

.. code-block:: yaml
This shorthand notation is only supported in the shell and not inside a configuration file. The configuration file
generated by calling the previous command with ``--print_config`` will have the ``class_path`` notation.

.. code-block:: yaml

    trainer:
      callbacks:
        - class_path: your_class_path.CustomCallback
          init_args:
            ...

    trainer:
      callbacks:
        - class_path: your_class_path.CustomCallback
          init_args:
            ...

Multiple models and/or datasets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the previous examples :class:`~pytorch_lightning.utilities.cli.LightningCLI` works only for a single model and
datamodule class. However, there are many cases in which the objective is to easily be able to run many experiments for
multiple models and datasets. For these cases the tool can be configured such that a model and/or a datamodule is
multiple models and datasets.

The model argument can be left unset if a model has been registered first, this is particularly interesting for library
authors who want to provide their users a range of models to choose from:

.. code-block:: python

    import flash.image
    from pytorch_lightning.utilities.cli import MODEL_REGISTRY


    @MODEL_REGISTRY
    class MyModel(LightningModule):
        ...


    # register all `LightningModule` subclasses from a package
    MODEL_REGISTRY.register_classes(flash.image, LightningModule)
    # print(MODEL_REGISTRY)
    # >>> Registered objects: ['MyModel', 'ImageClassifier', 'ObjectDetector', 'StyleTransfer', ...]

    cli = LightningCLI()

.. code-block:: bash

    $ python trainer.py fit --model=MyModel --model.feat_dim=64

.. note::

This shorthand notation is only supported in the shell and not inside a configuration file. The configuration file
generated by calling the previous command with ``--print_config`` will have the ``class_path`` notation described
below.

Additionally, the tool can be configured such that a model and/or a datamodule is
specified by an import path and init arguments. For example, with a tool implemented as:

.. code-block:: python
@@ -750,7 +787,7 @@ A corresponding example of the config file would be:

.. note::

This short-hand notation is only supported in the shell and not inside a configuration file. The configuration file
This shorthand notation is only supported in the shell and not inside a configuration file. The configuration file
generated by calling the previous command with ``--print_config`` will have the ``class_path`` notation.

Furthermore, you can register your own optimizers and/or learning rate schedulers as follows:
Expand Down Expand Up @@ -894,8 +931,8 @@ Notes related to reproducibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The topic of reproducibility is complex and it is impossible to guarantee reproducibility by just providing a class that
people can use in unexpected ways. Nevertheless :class:`~pytorch_lightning.utilities.cli.LightningCLI` tries to give a
framework and recommendations to make reproducibility simpler.
people can use in unexpected ways. Nevertheless, the :class:`~pytorch_lightning.utilities.cli.LightningCLI` tries to
give a framework and recommendations to make reproducibility simpler.

When an experiment is run, it is good practice to use a stable version of the source code, either being a released
package or at least a commit of some version controlled repository. For each run of a CLI the config file is
17 changes: 17 additions & 0 deletions docs/source/common/trainer.rst
@@ -1281,6 +1281,10 @@ See the :doc:`profiler documentation <../advanced/profiler>` for more details.

progress_bar_refresh_rate
^^^^^^^^^^^^^^^^^^^^^^^^^
``progress_bar_refresh_rate`` has been deprecated in v1.5 and will be removed in v1.7.
Please pass :class:`~pytorch_lightning.callbacks.progress.ProgressBar` with ``refresh_rate``
directly to the Trainer's ``callbacks`` argument instead. To disable the progress bar,
pass ``enable_progress_bar=False`` to the Trainer.
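
A minimal sketch of the suggested replacement (assuming ``ProgressBar`` is imported from
``pytorch_lightning.callbacks``; the ``refresh_rate`` value is only an example):

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ProgressBar

    # roughly equivalent to progress_bar_refresh_rate=10
    trainer = Trainer(callbacks=[ProgressBar(refresh_rate=10)])

    # roughly equivalent to progress_bar_refresh_rate=0
    trainer = Trainer(enable_progress_bar=False)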

.. raw:: html

@@ -1305,6 +1309,19 @@ Note:
Lightning will set it to 20 in these environments if the user does not provide a value.
- This argument is ignored if a custom callback is passed to :paramref:`~Trainer.callbacks`.

enable_progress_bar
^^^^^^^^^^^^^^^^^^^

Whether to enable or disable the progress bar. Defaults to True.

.. testcode::

    # default used by the Trainer
    trainer = Trainer(enable_progress_bar=True)

    # disable progress bar
    trainer = Trainer(enable_progress_bar=False)

reload_dataloaders_every_n_epochs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

16 changes: 8 additions & 8 deletions docs/source/index.rst
@@ -119,14 +119,14 @@ PyTorch Lightning
:caption: Examples

ecosystem/community_examples
Autoencoder <https://lightning-bolts.readthedocs.io/en/latest/autoencoders.html#autoencoders>
BYOL <https://lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#byol>
DQN <https://lightning-bolts.readthedocs.io/en/latest/reinforce_learn.html#deep-q-network-dqn>
GAN <https://lightning-bolts.readthedocs.io/en/latest/gans.html#basic-gan>
GPT-2 <https://lightning-bolts.readthedocs.io/en/latest/convolutional.html#gpt-2>
Image-GPT <https://lightning-bolts.readthedocs.io/en/latest/convolutional.html#image-gpt>
SimCLR <https://lightning-bolts.readthedocs.io/en/latest/self_supervised_models.html#simclr>
VAE <https://lightning-bolts.readthedocs.io/en/latest/autoencoders.html#basic-vae>
Autoencoder <https://lightning-bolts.readthedocs.io/en/latest/deprecated/models/autoencoders.html>
BYOL <https://lightning-bolts.readthedocs.io/en/latest/deprecated/callbacks/self_supervised.html#byolmaweightupdate>
DQN <https://lightning-bolts.readthedocs.io/en/latest/deprecated/models/reinforce_learn.html#deep-q-network-dqn>
GAN <https://lightning-bolts.readthedocs.io/en/latest/deprecated/models/gans.html#basic-gan>
GPT-2 <https://lightning-bolts.readthedocs.io/en/latest/deprecated/models/convolutional.html#gpt-2>
Image-GPT <https://lightning-bolts.readthedocs.io/en/latest/deprecated/models/convolutional.html#image-gpt>
SimCLR <https://lightning-bolts.readthedocs.io/en/latest/deprecated/transforms/self_supervised.html#simclr-transforms>
VAE <https://lightning-bolts.readthedocs.io/en/latest/deprecated/models/autoencoders.html#basic-vae>

.. toctree::
:maxdepth: 1