Refactor codebase to use the loggers property instead of logger #11731
Conversation
for more information, see https://pre-commit.ci
Co-authored-by: ananthsub <[email protected]>
Co-authored-by: Carlos Mocholí <[email protected]>
for more information, see https://pre-commit.ci
…/pytorch-lightning into refactor/create-loggers-property
    save_dir = trainer.loggers[0].save_dir
else:
    save_dir = trainer.default_root_dir
# TODO: Find out how we handle trainer.logger.version without LoggerCollection
How should we handle trainer.logger.version and trainer.logger.name here? LoggerCollection concatenates the values from each logger, but with trainer.loggers we no longer have that logic, and it may not have been the best approach anyway.
@@ -213,6 +213,7 @@ def get_standard_metrics(trainer: "pl.Trainer", pl_module: "pl.LightningModule")
    if pl_module.truncated_bptt_steps > 0:
        items_dict["split_idx"] = trainer.fit_loop.split_idx

# TODO: Find out how we handle trainer.logger.version without LoggerCollection
version = (
    trainer.logger.version
    if isinstance(trainer.logger.version, str)
    else f"version_{trainer.logger.version}"
)
# TODO: Find out how we handle trainer.logger.name without LoggerCollection
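Extracted as a standalone helper (a sketch only; `format_version` is a hypothetical name, not part of the PR), the version formatting above behaves like this:

```python
def format_version(version):
    # String versions are used verbatim; anything else (typically an int)
    # is prefixed to form the usual "version_<n>" directory name.
    return version if isinstance(version, str) else f"version_{version}"


print(format_version(3))      # version_3
print(format_version("abc"))  # abc
```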
elif isinstance(self.logger, TensorBoardLogger):
    dirpath = self.logger.log_dir
elif isinstance(self.logger, LoggerCollection):
if len(self.loggers) != 1 or isinstance(self.logger, LoggerCollection):
When creating a LoggerCollection of size 1, the log_dir property returns default_root_dir. However, we can't easily substitute trainer.loggers here, because it cannot distinguish between a list of length 1 and a single logger: both are stored as a list of length 1. Currently I'm checking trainer.logger, but this won't work once trainer.logger and LoggerCollection are deprecated. Should this be handled differently?
@@ -119,6 +119,7 @@ def test_logdir_logger_collection(tmpdir):
    max_steps=2,
    logger=[TensorBoardLogger(save_dir=save_dir, name="custom_logs")],
)
# TODO: What behavior do we want once trainer.logger is deprecated?
How should this unit test be handled? Part of this issue https://github.com/PyTorchLightning/pytorch-lightning/pull/11731/files#r801162619.
@@ -495,6 +495,7 @@ def look_for_trace(trace_dir):
# Wrap the logger in a list so it becomes a LoggerCollection
logger = [TensorBoardLogger(save_dir=tmpdir)]
trainer = Trainer(default_root_dir=tmpdir, profiler="pytorch", logger=logger, limit_train_batches=5, max_epochs=1)
# TODO: What behavior do we want once trainer.logger is deprecated?
How should this unit test be handled? Part of this issue: https://github.com/PyTorchLightning/pytorch-lightning/pull/11731/files#r801162619.
What does this PR do?
Refactors the codebase to use the new loggers property introduced in #11683.
Part of #11232
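For context, the normalization that the loggers property performs can be sketched as follows. This is a simplified stand-in, not Lightning's actual Trainer: the class and attribute names below are assumptions for illustration.

```python
class TrainerStub:
    """Minimal illustration of logger vs. loggers semantics."""

    def __init__(self, logger=None):
        # Normalize the constructor argument to a list internally.
        if logger is None:
            self._loggers = []
        elif isinstance(logger, list):
            self._loggers = list(logger)
        else:
            self._loggers = [logger]

    @property
    def loggers(self):
        # Always a list: empty, one element, or many.
        return self._loggers

    @property
    def logger(self):
        # Legacy single-logger accessor: first logger, or None.
        return self._loggers[0] if self._loggers else None
```

The point of the refactor is that call sites iterate over `loggers` uniformly instead of special-casing a single logger versus a collection.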
Does your PR introduce any breaking changes? If yes, please list them.
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing make sure you have read Review guidelines. In short, see the following bullet-list:
Did you have fun?
Make sure you had fun coding 🙃