This repository was archived by the owner on Nov 1, 2024. It is now read-only.
Update dependency pytorch_lightning to >=1.9.5,<1.10 - autoclosed #3
This PR contains the following updates:

pytorch_lightning: >=1.5.4,<1.7 -> >=1.9.5,<1.10
Release Notes
Lightning-AI/lightning (pytorch_lightning)
v1.9.5: Minor patch release

App

Changed
- `healthz` endpoint to plugin server (#16882)

Fabric

Changed
- `TorchCollective` works on the `torch.distributed` WORLD process group by default (#16995)

Fixed
- `_cuda_clearCublasWorkspaces` on teardown (#16907)

PyTorch

Changed
- `NeptuneLogger` (#16761): the `log()` method is replaced with `append()` and `extend()`, and a `Handler` is accepted as an alternative to `Run` for the `run` argument. This means that you can call it like `NeptuneLogger(run=run["some/namespace"])` to log everything to the `some/namespace/` location of the run.
- `sys.argv` and args in `LightningCLI` (#16808)

Deprecated
- `ShardedTensor` state dict hooks in `LightningModule.__init__` with `torch>=2.1` (#16892)
- `lightning.pytorch.core.saving.ModelIO` class interface (#16974)

Fixed
- `num_nodes` not being set for `DDPFullyShardedNativeStrategy` (#17160)
- `DeepSpeedStrategy` (#16973)
- An issue with `rich` that prevented Lightning from being imported in Google Colab (#17156)
- `_cuda_clearCublasWorkspaces` on teardown (#16907)
- The `psutil` package is now required for CPU monitoring (#17010)

Contributors
@awaelchli, @belerico, @carmocca, @colehawkins, @dmitsf, @Erotemic, @ethanwharris, @kshitij12345, @borda
If we forgot someone due to not matching commit email with GitHub account, let us know :]
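The `NeptuneLogger` change above means indexing a run with a path yields a handler that namespaces everything logged through it. The following is a pure-Python sketch of that namespacing idea only — `FakeRun` and `Handler` here are illustrative stand-ins, not the actual neptune or Lightning API:

```python
# Sketch of the namespace-handler idea behind
# NeptuneLogger(run=run["some/namespace"]): indexing a run with a path
# returns a handler that prefixes every logged key.
class FakeRun:
    def __init__(self):
        self.data = {}

    def __getitem__(self, namespace):
        return Handler(self, namespace)


class Handler:
    def __init__(self, run, prefix):
        self.run = run
        self.prefix = prefix

    def __getitem__(self, key):
        # indexing a handler narrows the namespace further
        return Handler(self.run, f"{self.prefix}/{key}")

    def append(self, value):  # stand-in for the method replacing log()
        self.run.data.setdefault(self.prefix, []).append(value)

    def extend(self, values):
        self.run.data.setdefault(self.prefix, []).extend(values)


run = FakeRun()
handler = run["some/namespace"]
handler["train/loss"].append(0.25)
handler["train/loss"].extend([0.2, 0.15])
print(run.data)  # {'some/namespace/train/loss': [0.25, 0.2, 0.15]}
```

Everything logged through the handler lands under the `some/namespace/` prefix, which is the behavior the changelog entry describes for the real logger.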
v1.9.4: Weekly patch release

App

Removed
- `testing.run_app_in_cloud` in favor of headless login and app selection (#16741)

Fabric

Added
- `Fabric(strategy="auto")` support (#16916)

Fixed
- `find_usable_cuda_devices(num_devices=-1)` (#16866)

PyTorch

Added
- `Fabric(strategy="auto")` support. It will choose DDP over DDP-spawn, contrary to `strategy=None` (default) (#16916)

Fixed
- `lightning.pytorch.utilities.parsing.get_init_args` (#16851)

Contributors
@ethanwharris, @carmocca, @awaelchli, @justusschock, @dtuit, @Liyang90
If we forgot someone due to not matching commit email with GitHub account, let us know :]
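The `strategy="auto"` entry above says auto-resolution prefers DDP over DDP-spawn when multiple devices are available. A hedged sketch of that decision rule — the function name and branches are illustrative, not Lightning's actual internals:

```python
# Illustrative strategy resolution mirroring the changelog entry:
# "auto" prefers DDP, while the old strategy=None default fell back
# to DDP-spawn.
def resolve_strategy(strategy, num_devices):
    if num_devices <= 1:
        return "single_device"
    if strategy == "auto":
        return "ddp"        # new behavior: launch workers via subprocess
    if strategy is None:
        return "ddp_spawn"  # old default: spawn workers with multiprocessing
    return strategy         # explicit choices pass through unchanged


print(resolve_strategy("auto", 4))  # ddp
print(resolve_strategy(None, 4))    # ddp_spawn
print(resolve_strategy("auto", 1))  # single_device
```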
v1.9.3: Weekly patch release

App

Fixed
- `lightning open` command and improved redirects (#16794)

Fabric

Fixed
- `accelerator=tpu` and `devices > 1` (#16806)
- `--accelerator` and `--precision` in Fabric CLI when `accelerator` and `precision` are set to non-default values in the code (#16818)

PyTorch

Fixed
- `accelerator=tpu` and `devices > 1` (#16806)

Contributors
@ethanwharris, @carmocca, @awaelchli, @borda, @tchaton, @yurijmikhalevich
If we forgot someone due to not matching commit email with GitHub account, let us know :]
v1.9.2: Weekly patch release

App

Added
- `rm`: Delete files from your Cloud Platform Filesystem
- `lightning connect data` to register data connection to private s3 buckets (#16738)

Fabric

Fixed

PyTorch

Changed

Fixed
- `min_epochs` or `min_steps` (#16719)

Contributors
@akihironitta, @awaelchli, @borda, @tchaton
If we forgot someone due to not matching commit email with GitHub account, let us know :]
v1.9.1: Weekly patch release

App

Added
- `lightning open` command (#16482)
- `ls`: List files from your Cloud Platform Filesystem
- `cd`: Change the current directory within your Cloud Platform filesystem (terminal session based)
- `pwd`: Return the current folder in your Cloud Platform Filesystem
- `cp`: Copy files between your Cloud Platform Filesystem and local filesystem
- `cd` into non-existent folders (#16645)
- `cp` (upload) at project level (#16631)
- `ls` and `cp` (download) at project level (#16622)
- `lightning connect data` to register data connection to s3 buckets (#16670)

Changed
- `LightningClient(retry=False)` to `retry=True` (#16382)
- `lightning.app.components.LiteMultiNode` to `lightning.app.components.FabricMultiNode` (#16505)
- `lightning connect` to `lightning connect app` for consistency (#16670)

Fixed
- `lightning cp` (#16626)

Fabric

Fixed
- `accelerator="mps"` and `ddp` strategy pairing (#16455)
- `torch_xla` requirement (#16476)
- when `torch.distributed` is not available (#16658)

PyTorch

Fixed
- `save_hyperparameters` on mixin classes that don't subclass `LightningModule`/`LightningDataModule` (#16369)
- `MLFlowLogger` logging the wrong keys with `.log_hyperparams()` (#16418)
- `MLFlowLogger` and long values are truncated (#16451)
- `torch_xla` requirement (#16476)
- when `torch.distributed` is not available (#16658)

Contributors
@akihironitta, @awaelchli, @borda, @BrianPulfer, @ethanwharris, @hhsecond, @justusschock, @Liyang90, @RuRo, @senarvi, @shenoynikhil, @tchaton
If we forgot someone due to not matching commit email with GitHub account, let us know :]
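The `LightningClient(retry=False)` to `retry=True` change above flips retrying of transient failures on by default. A minimal sketch of that pattern, assuming a simple retry loop — `call_with_retry` is a hypothetical helper, not the client's actual API:

```python
# Illustrative retry-on-transient-failure loop, the behavior that
# retry=True enables by default in the changelog entry above.
def call_with_retry(fn, retries=3):
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error


attempts = {"count": 0}


def flaky_request():
    # fails twice with a transient error, then succeeds
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient network error")
    return "ok"


print(call_with_retry(flaky_request))  # succeeds on the third attempt
```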
v1.9.0: Stability and additional improvements

App

Added

Changed
- `DeviceStatsMonitor` (#16002)
- `lightning_app.components.serve.gradio` to `lightning_app.components.serve.gradio_server` (#16201)

Fixed
- `relpath` bug on Windows (#16164)
- `LooseVersion` (#16162)
- `lightning login` with env variables would not correctly save the credentials (#16339)

Fabric

Added
- `Fabric.launch()` to programmatically launch processes (e.g. in Jupyter notebook) (#14992)
- `run` method (#14992)
- `Fabric.setup_module()` and `Fabric.setup_optimizers()` to support strategies that need to set up the model before an optimizer can be created (#15185)
- `lightning_fabric.accelerators.find_usable_cuda_devices` utility function (#16147)
- `Fabric(callbacks=...)` and emitting events through `Fabric.call()` (#16074)
- `Fabric(loggers=...)` to support different Logger frameworks in Fabric
- `Fabric.log` for logging scalars using multiple loggers
- `Fabric.log_dict` for logging a dictionary of multiple metrics at once
- `Fabric.loggers` and `Fabric.logger` attributes to access the individual logger instances
- `self.log` and `self.log_dict` in a LightningModule when using Fabric
- `self.logger` and `self.loggers` in a LightningModule when using Fabric
- `lightning_fabric.loggers.TensorBoardLogger` (#16121)
- `lightning_fabric.loggers.CSVLogger` (#16346)
- `.zero_grad(set_to_none=...)` on the wrapped optimizer regardless of which strategy is used (#16275)

Changed
- `LightningLite` to `Fabric` (#15932, #15938)
- The `Fabric.run()` method is no longer abstract (#14992)
- `XLAStrategy` now inherits from `ParallelStrategy` instead of `DDPSpawnStrategy` (#15838)
- `DDPSpawnStrategy` into `DDPStrategy` and removed `DDPSpawnStrategy` (#14952)
- `.setup_dataloaders()` now calls `.set_epoch()` on the distributed sampler if one is used (#16101)
- `Strategy.reduce` to `Strategy.all_reduce` in all strategies (#16370)

Removed
- Sharded training (`strategy='ddp_sharded'|'ddp_sharded_spawn'`). Use Fully-Sharded Data Parallel instead (`strategy='fsdp'`) (#16329)

Fixed
- `DistributedSampler` (#16101)

PyTorch

Added
- `MetricCollection` with enabled compute groups (#15580)
- `pl.loggers.WandbLogger` (#16173)
- `LRFinder` (#15304)
- `pl.utilities.upgrade_checkpoint` script (#15333)
- `ax` to the `.lr_find().plot()` to enable writing to a user-defined axes in a matplotlib figure (#15652)
- `log_model` parameter to `MLFlowLogger` (#9187)
- A warning when `self.log(..., logger=True)` is called without a configured logger (#15814)
- `LightningCLI` support for optimizer and learning schedulers via callable type dependency injection (#15869)
- `DDPFullyShardedNativeStrategy` strategy (#15826)
- `DDPFullyShardedNativeStrategy(cpu_offload=True|False)` via bool instead of needing to pass a configuration object (#15832)
- `LightningModule.configure_optimizers` (#16189)

Changed
- `tensorboard` to `tensorboardx` in `TensorBoardLogger` (#15728)
- `LightningModule.load_from_checkpoint` automatically upgrades the loaded checkpoint if it was produced in an old version of Lightning (#15237)
- `Trainer.{validate,test,predict}(ckpt_path=...)` no longer restores the `Trainer.global_step` and `trainer.current_epoch` value from the checkpoints. From now on, only `Trainer.fit` will restore this value (#15532)
- The `ModelCheckpoint.save_on_train_epoch_end` attribute is now computed dynamically every epoch, accounting for changes to the validation dataloaders (#15300)
- `MLFlowLogger` now logs hyperparameters and metrics in batched API calls (#15915)
- `on_train_batch_{start,end}` hooks in conjunction with taking a `dataloader_iter` in the `training_step` no longer error out and instead show a warning (#16062)
- `tensorboardX` to extra dependencies. Use the `CSVLogger` by default (#16349)

Deprecated
- `description`, `env_prefix` and `env_parse` parameters in `LightningCLI.__init__` in favour of giving them through `parser_kwargs` (#15651)
- `pytorch_lightning.profiler` in favor of `pytorch_lightning.profilers` (#16059)
- `Trainer(auto_select_gpus=...)` in favor of `pytorch_lightning.accelerators.find_usable_cuda_devices` (#16147)
- `pytorch_lightning.tuner.auto_gpu_select.{pick_single_gpu,pick_multiple_gpus}` in favor of `pytorch_lightning.accelerators.find_usable_cuda_devices` (#16147)
- `nvidia/apex` deprecation (#16039):
  - `pytorch_lightning.plugins.NativeMixedPrecisionPlugin` in favor of `pytorch_lightning.plugins.MixedPrecisionPlugin`
  - `LightningModule.optimizer_step(using_native_amp=...)` argument
  - `Trainer(amp_backend=...)` argument
  - `Trainer.amp_backend` property
  - `Trainer(amp_level=...)` argument
  - `pytorch_lightning.plugins.ApexMixedPrecisionPlugin` class
  - `pytorch_lightning.utilities.enums.AMPType` enum
  - `DeepSpeedPrecisionPlugin(amp_type=..., amp_level=...)` arguments
- `horovod` deprecation (#16141):
  - `Trainer(strategy="horovod")`
  - `HorovodStrategy` class
- `pytorch_lightning.lite.LightningLite` in favor of `lightning.fabric.Fabric` (#16314)
- `FairScale` deprecation (in favor of PyTorch's FSDP implementation) (#16353):
  - `pytorch_lightning.overrides.fairscale.LightningShardedDataParallel` class
  - `pytorch_lightning.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin` class
  - `pytorch_lightning.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin` class
  - `pytorch_lightning.strategies.fully_sharded.DDPFullyShardedStrategy` class
  - `pytorch_lightning.strategies.sharded.DDPShardedStrategy` class
  - `pytorch_lightning.strategies.sharded_spawn.DDPSpawnShardedStrategy` class

Removed
- `pytorch_lightning.utilities.memory.get_gpu_memory_map` in favor of `pytorch_lightning.accelerators.cuda.get_nvidia_gpu_stats` (#15617)
- `pytorch_lightning.profiler.base.AbstractProfiler` in favor of `pytorch_lightning.profilers.profiler.Profiler` (#15637)
- `pytorch_lightning.profiler.base.BaseProfiler` in favor of `pytorch_lightning.profilers.profiler.Profiler` (#15637)
- `pytorch_lightning.utilities.meta` (#16038)
- `LightningDeepSpeedModule` (#16041)
- `pytorch_lightning.accelerators.GPUAccelerator` in favor of `pytorch_lightning.accelerators.CUDAAccelerator` (#16050)
- `pytorch_lightning.profiler.*` classes in favor of `pytorch_lightning.profilers` (#16059)
- `pytorch_lightning.utilities.cli` module in favor of `pytorch_lightning.cli` (#16116)
- `pytorch_lightning.loggers.base` module in favor of `pytorch_lightning.loggers.logger` (#16120)
- `pytorch_lightning.loops.base` module in favor of `pytorch_lightning.loops.loop` (#16142)
- `pytorch_lightning.core.lightning` module in favor of `pytorch_lightning.core.module` (#16318)
- `pytorch_lightning.callbacks.base` module in favor of `pytorch_lightning.callbacks.callback` (#16319)
- `Trainer.reset_train_val_dataloaders()` in favor of `Trainer.reset_{train,val}_dataloader` (#16131)
- `LightningCLI(seed_everything_default=None)` (#16131)
- Sharded training (`strategy='ddp_sharded'|'ddp_sharded_spawn'`). Use Fully-Sharded Data Parallel instead (`strategy='fsdp'`) (#16329)

Fixed
- `reduce_boolean_decision` to accommodate `any`-analogous semantics expected by the `EarlyStopping` callback (#15253)
- The `interval` key of the scheduler would be ignored during manual optimization, making the LearningRateMonitor callback fail to log the learning rate (#16308)
- `MLFlowLogger` not finalizing correctly when status code 'finished' was passed (#16340)

Contributors
@1SAA, @akihironitta, @AlessioQuercia, @awaelchli, @bipinKrishnan, @Borda, @carmocca, @dmitsf, @erhoo82, @ethanwharris, @Forbu, @hhsecond, @justusschock, @lantiga, @lightningforever, @Liyang90, @manangoel99, @mauvilsa, @nicolai86, @nohalon, @rohitgr7, @schmidt-jake, @speediedan, @yMayanand
If we forgot someone due to not matching commit email with GitHub account, let us know :]
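The Fabric change above — `.setup_dataloaders()` now calling `.set_epoch()` on the distributed sampler — matters because a distributed sampler derives its sample order from the epoch; without `set_epoch()` every epoch would replay the same order. A minimal pure-Python sketch of that idea, where `ToySampler` is an illustrative stand-in for `torch.utils.data.DistributedSampler`:

```python
# Toy sampler whose iteration order depends on the epoch, illustrating
# why setup_dataloaders() must call set_epoch() each epoch.
class ToySampler:
    def __init__(self, data):
        self.data = data
        self.epoch = 0

    def set_epoch(self, epoch):
        self.epoch = epoch

    def __iter__(self):
        # stand-in for epoch-seeded shuffling: rotate the order by the epoch
        order = list(range(len(self.data)))
        shift = self.epoch % len(order)
        return iter(order[shift:] + order[:shift])


sampler = ToySampler(["a", "b", "c", "d"])
print(list(sampler))   # [0, 1, 2, 3]
sampler.set_epoch(1)   # what setup_dataloaders() now does per epoch
print(list(sampler))   # [1, 2, 3, 0]
```

Skipping the `set_epoch(1)` call would yield `[0, 1, 2, 3]` again, which is exactly the repeated-order bug the change prevents.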
v1.8.6: Weekly patch release

App

Added
- `Request` annotation in `configure_api` handlers (#16047)
- `work.delete` method to delete the work (#16103)
- `display_name` property to LightningWork for the cloud (#16095)
- `ColdStartProxy` to the AutoScaler (#16094)
- `ready` (#16075)
- `ready` for components (#16129)

Changed
- `start_method` for creating Work processes locally on macOS is now 'spawn' (previously 'fork') (#16089)
- `lightning.app.utilities.cloud.is_running_in_cloud` now returns `True` during the loading of the app locally when running with `--cloud` (#16045)
- `True` (#16009)

Fixed
- `PythonServer` messaging "Your app has started" (#15989)
- `AutoScaler` would fail with min_replica=0 (#16092)
- `AutoScaler` UI (#16128)
- `streamlit` (#16139)

Full Changelog: Lightning-AI/pytorch-lightning@1.8.5.post0...1.8.6
v1.8.5.post0: Minor patch release

App
- `self.lightningignore` (#16080)

PyTorch

Full Changelog: Lightning-AI/pytorch-lightning@1.8.5...1.8.5.post0
v1.8.5: Weekly patch release

App

Added
- `Lightning{Flow,Work}.lightningignores` attributes to programmatically ignore files before uploading to the cloud (#15818)
- `.lightningignore` that ignores `venv` (#16056)

Changed

Fixed
- `DDPStrategy` import in app framework (#16029)
- `AutoScaler` raising an exception when non-default cloud compute is specified (#15991)

PyTorch

Full Changelog: Lightning-AI/pytorch-lightning@1.8.4.post0...1.8.5
v1.8.4.post0: Minor patch release

App
- `L.app.structures` (#15964)

PyTorch
- `XLAProfiler` not recording anything due to mismatching of action names (#15885)

Full Changelog: Lightning-AI/pytorch-lightning@1.8.4...1.8.4.post0
v1.8.4: Weekly patch release

App

Added
- `code_dir` argument to tracer run (#15771)
- `lightning run model` to launch a `LightningLite` accelerated script (#15506)
- `lightning delete app` to delete a lightning app on the cloud (#15783)
- `AutoScaler` component (#15769)
- `ready` of the LightningFlow to inform when the `Open App` should be visible (#15921)
- `_start_method` to customize how to start the works (#15923)
- `configure_layout` method to the `LightningWork` which can be used to control how the work is handled in the layout of a parent flow (#15926)
- `lightning run app organization/name` (#15941)

Changed
- `MultiNode` components now warn the user when running with `num_nodes > 1` locally (#15806)
- `BuildConfig(requirements=[...])` is passed but a `requirements.txt` file is already present in the Work (#15799)
- `BuildConfig(dockerfile="...")` is passed but a `Dockerfile` file is already present in the Work (#15799)

Removed
- `SingleProcessRuntime` (#15933)

Fixed
- `enable_spawn` method of the `WorkRunExecutor` (#15812)
- `L.app.structures` would cause multiple apps to be opened and fail with an error in the cloud (#15911)
- `ImportError` on Multinode if package not present (#15963)

Lite
- `shuffle=False` having no effect when using DDP/DistributedSampler (#15931)

PyTorch

Changed

Fixed
- `fit_loop.restarting` to be `False` for lr finder (#15620)
- `torch.jit.script`-ing a LightningModule causing an unintended error message about deprecated `use_amp` property (#15947)

Full Changelog: Lightning-AI/pytorch-lightning@1.8.3...1.8.4
v1.8.3.post2: Dependency hotfix

🤖

v1.8.3.post1: Hotfix for Python Server

App

Changed

Full Changelog: Lightning-AI/pytorch-lightning@1.8.3...1.8.3

v1.8.3.post0: Hotfix for requirements
v1.8.3: Weekly patch release

App

Changed
- `lightning add ssh-key` CLI command has been transitioned to `lightning create ssh-key`
- `lightning remove ssh-key` CLI command has been transitioned to `lightning delete ssh-key`
- `LightningTrainerScript` start-up time (#15751)
- `StreamlitFrontend` to support upload in localhost (#15684)

Fixed
- `LightningFlow` (#15750)

Lite

Changed

PyTorch

Changed
- `tensorboard` to `tensorboardx` in `TensorBoardLogger` (#15728)

Full Changelog: Lightning-AI/pytorch-lightning@1.8.2...1.8.3
v1.8.2: Weekly patch release

App

Added

Changed
- `.lightning` file (#15654)

Fixed

Lite

Fixed
- `LightningLite(strategy="ddp_spawn", ...)` to `LightningLite(strategy="ddp", ...)` when on an LSF cluster (#15103)

PyTorch

Fixed
- `Trainer(strategy="ddp_spawn", ...)` to `Trainer(strategy="ddp", ...)` when on an LSF cluster (#15103)

Full Changelog: Lightning-AI/pytorch-lightning@1.8.1...1.8.2
v1.8.1: Weekly patch release

App

Added
- `start` method to the work (#15523)
- `MultiNode` Component to run with distributed computation with any frameworks (#15524)
- `RunWorkExecutor` to the work and provides default ones for the `MultiNode` Component (#15561)
- `start_with_flow` flag to the `LightningWork` which can be disabled to prevent the work from starting at the same time as the flow (#15591)
- Bi-directional delta updates between the flow and the works (#15582)
- `--setup` flag to the `lightning run app` CLI command allowing for dependency installation via app comments (#15577)

Changed
- `flow.flows` to be recursive, to align the behavior with `flow.works` (#15466)
- The `params` argument in `TracerPythonScript.run` no longer prepends `--` automatically to parameters (#15518)

Fixed
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.