
Commit de2c5b2

stick to markdown w/o html wherever possible, remove temporary fts requirement

1 parent 8805862

2 files changed (+2, -20 lines)


docs/requirements.txt (0 additions, 3 deletions)
```diff
@@ -6,9 +6,6 @@ docutils>=0.16
 sphinx-paramlinks>=0.4.0
 ipython[notebook]
 
-# temporarily included until hub available to evaluate finetuning_scheduler
-git+git://github.com/speediedan/pytorch-lightning.git@24d3e43568814ec381ac5be91627629808d62081#egg=pytorch-lightning
-
 https://github.com/PyTorchLightning/lightning_sphinx_theme/archive/master.zip#egg=pt-lightning-sphinx-theme
 
 -r ../.actions/requirements.txt
```

lightning_examples/finetuning-scheduler/finetuning-scheduler.py (2 additions, 17 deletions)
```diff
@@ -1,18 +1,3 @@
-# ---
-# jupyter:
-#   jupytext:
-#     formats: ipynb,py:percent
-#     text_representation:
-#       extension: .py
-#       format_name: percent
-#       format_version: '1.3'
-#       jupytext_version: 1.13.2
-#   kernelspec:
-#     display_name: 'Python 3.7.11 64-bit (''pldev_tutorials'': conda)'
-#     language: python
-#     name: python3
-# ---
-
 # %% [markdown]
 # ## Scheduled Finetuning
 #
@@ -34,7 +19,7 @@
 # final phase of the schedule has its stopping criteria met. See
 # the [early stopping documentation](https://pytorch-lightning.readthedocs.io/en/latest/extensions/generated/pytorch_lightning.callbacks.EarlyStopping.html) for more details on that callback's configuration.
 #
-# <img src="fts_explicit_loss_anim.gif" width="376px" height="272px">
+# ![FinetuningScheduler explicit loss animation](fts_explicit_loss_anim.gif)
 
 # %% [markdown]
 # ## Basic Usage
@@ -416,7 +401,7 @@ def training_epoch_end(self, outputs: List[Any]) -> None:
         loss = torch.stack([x["loss"] for x in outputs]).mean()
         self.log("train_loss", loss, prog_bar=True, sync_dist=True)
         if self.finetuningscheduler_callback:
-            self.log("finetuning_schedule_depth", self.finetuningscheduler_callback.curr_depth)
+            self.log("finetuning_schedule_depth", float(self.finetuningscheduler_callback.curr_depth))
 
     def validation_step(self, batch, batch_idx, dataloader_idx=0):
         outputs = self(**batch)
```
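A note on the last hunk: one plausible motivation for casting `curr_depth` to `float` is that PyTorch Lightning converts logged scalars to tensors, and an `int` becomes an integer-dtype tensor, for which mean-style reductions (used for epoch-level aggregation and `sync_dist`) are not defined. The following minimal sketch (not part of the commit) illustrates the underlying PyTorch behavior; the variable names are illustrative only:

```python
import torch

depth = 2  # stands in for an integer schedule depth like curr_depth

int_metric = torch.tensor(depth)           # dtype: torch.int64
float_metric = torch.tensor(float(depth))  # dtype: torch.float32

print(int_metric.dtype, float_metric.dtype)

# mean() raises on integer tensors, which is what the float() cast avoids:
try:
    torch.stack([int_metric, int_metric]).mean()
except RuntimeError as err:
    print("integer reduction failed:", err)

print(torch.stack([float_metric, float_metric]).mean())  # tensor(2.)
```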
