
Commit 0cd4fba

rohitgr7 authored
Apply suggestions from code review
Co-authored-by: Ethan Harris <[email protected]>
Co-authored-by: thomas chaton <[email protected]>
1 parent e73e4f2

3 files changed: 6 additions, 6 deletions

docs/source/starter/introduction_guide.rst

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ Let's first start with the model. In this case, we'll design a 3-layer neural ne
         return x

 Notice this is a :doc:`lightning module <../common/lightning_module>` instead of a ``torch.nn.Module``. A LightningModule is
-equivalent to a pure PyTorch module except it has added functionality. However, you can use it **exactly** the same as you would a PyTorch module.
+equivalent to a pure PyTorch ``nn.Module`` except it has added functionality. However, you can use it **exactly** the same as you would a PyTorch ``nn.Module``.

 .. testcode::

docs/source/starter/lightning_lite.rst

Lines changed: 3 additions & 3 deletions
@@ -221,7 +221,7 @@ but there are several major challenges ahead of you now:
    :header-rows: 0

    * - Processes divergence
-     - This happens when processes execute a different section of the code due to different if/else conditions, race conditions on existing files, etc., resulting in hanging.
+     - This happens when processes execute a different section of the code due to different if/else conditions, race conditions on existing files and so on, resulting in hanging.
    * - Cross processes reduction
      - Miscalculated metrics or gradients due to errors in their reduction.
    * - Large sharded models

@@ -416,7 +416,7 @@ Configure the devices to run on. Can be of type:
     # equivalent
     lite = Lite(devices=0)

-    # int: run on to GPUs
+    # int: run on two GPUs
     lite = Lite(devices=2, accelerator="gpu")

     # list: run on GPUs 1, 4 (by bus ordering)

@@ -695,7 +695,7 @@ load
 ====

 Load checkpoint contents from a file. Replaces all occurrences of ``torch.load(...)`` in your code. Lite will take care of
-handling the loading part correctly, no matter if you are running a single device, multi-devices or multi-nodes.
+handling the loading part correctly, no matter if you are running a single device, multi-device, or multi-node.

 .. code-block:: python

docs/source/starter/new-project.rst

Lines changed: 2 additions & 2 deletions
@@ -380,9 +380,9 @@ You can also add a forward method to do predictions however you want.

 .. code-block:: python

-    # ----------------------------------
+    # -------------------------------
     # using the AE to generate images
-    # ----------------------------------
+    # -------------------------------
     class LitAutoEncoder(LightningModule):
         def __init__(self):
             super().__init__()
