The feature to use multiple loggers is really helpful, but I run into issues when I try to write code so that any logger may be turned off at any time.
For example, I am writing a model that can log to both MLflow and TensorBoard. When both loggers are passed as Trainer(logger=[ml_flow, tensorboard_logger]) versus only Trainer(logger=[tensorboard_logger]), I have to write a lot of custom code to handle the difference. Handling upstream how many loggers were passed, of which types, and in which order is messy. For example, logging an image to the TensorBoard logs when only the TensorBoard logger is passed is simply self.logger.experiment.add_image(), but when multiple loggers are passed in an unknown order, the code becomes unnecessarily convoluted.
Pitch
Handle multiple loggers in the model easily. We should be able to check whether a logger of a given type is present and, if so, retrieve it, with minimal friction (i.e., without scanning every index). One solution may be to make self.loggers a dictionary rather than a list.
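One way to approximate the dictionary idea today is a small helper that indexes loggers by their class. This is a hypothetical sketch, not an existing Lightning API; the stub classes stand in for real logger types:

```python
# Hypothetical helper (not an existing Lightning API): index loggers by class
# so a logger of a given type can be checked for and retrieved directly.
def index_loggers_by_type(loggers):
    return {type(logger): logger for logger in loggers}

# Stand-in logger classes, purely for illustration.
class TensorBoardLoggerStub:
    pass

class MLFlowLoggerStub:
    pass

loggers = [MLFlowLoggerStub(), TensorBoardLoggerStub()]
by_type = index_loggers_by_type(loggers)

# Presence check and retrieval without scanning indexes:
tb_logger = by_type.get(TensorBoardLoggerStub)
```

Note this only works cleanly when at most one logger of each type is passed; with duplicates, the dictionary would keep only the last one.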
Alternatives
Providing __len__ for pytorch_lightning.loggers.base.LoggerCollection would be one low-hanging fruit that simplifies a lot (though of course it does not go all the way).
We actually got rid of the LoggerCollection and opted for a simpler design where loggers are just exposed in a list. As of Lightning 1.8.0, trainer.logger returns the first logger in the list and trainer.loggers returns the list of all loggers. Our recommendation for users who have logger-specific code is to implement their calls like so:
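A sketch of the recommended pattern, written here with stand-in classes so it runs without Lightning installed (in real code, TensorBoardLogger would be imported from pytorch_lightning.loggers and the loop would run over self.loggers inside the LightningModule):

```python
# Stand-ins for the real logger classes so this sketch is self-contained;
# real loggers expose an .experiment object with the backend-specific API.
class TensorBoardLogger:
    def __init__(self):
        self.logged_tags = []
        self.experiment = self  # real loggers expose an .experiment object

    def add_image(self, tag, img):
        self.logged_tags.append(tag)

class MLFlowLogger:
    pass

# The recommended pattern: iterate over the plain list of loggers
# (self.loggers in a LightningModule, trainer.loggers on the Trainer)
# and dispatch on each logger's type with isinstance.
loggers = [MLFlowLogger(), TensorBoardLogger()]
for logger in loggers:
    if isinstance(logger, TensorBoardLogger):
        logger.experiment.add_image("sample_image", img=None)  # placeholder image
```

This works regardless of how many loggers were passed and in what order, which addresses the ordering problem from the original report.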
This is only needed if users need to make logger-api-specific calls. For regular metrics, self.log() in the LightningModule remains logger-agnostic.
To your feature request:
If it helps, you can check len(trainer.loggers) today.
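As a minimal illustration of those two attributes (FakeTrainer is just a stand-in to show the shape of the API):

```python
# Stand-in trainer exposing the two attributes described above:
# .loggers is the full list of loggers; .logger is the first entry.
class FakeTrainer:
    def __init__(self, loggers):
        self.loggers = loggers
        self.logger = loggers[0] if loggers else None

trainer = FakeTrainer(loggers=["tensorboard", "mlflow"])
num_loggers = len(trainer.loggers)
first_logger = trainer.logger
```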
LoggerCollection was never able to solve this problem of logger-specific API calls. Plus, we tried to unify the logger APIs in the past but were never successful (#12183, #11837).
cc @Borda @awaelchli @Blaizzy