
LoggerCollection name redundancy #10954


Closed
twsl opened this issue Dec 6, 2021 · 6 comments · Fixed by #10976
Assignees
Labels
bug Something isn't working good first issue Good for newcomers logger Related to the Loggers priority: 1 Medium priority task

Comments


twsl (Contributor) commented Dec 6, 2021

🐛 Bug

When you pass multiple loggers to the Trainer, the resulting LoggerCollection's name is used for the save path in ModelCheckpoint. LoggerCollection concatenates the names of all child loggers, duplicates included. In my example, where both loggers share the same name, the checkpoints are therefore saved under f"{experiment_name}_{experiment_name}" instead of experiment_name.
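The doubled name can be illustrated without a Trainer at all. This is a hypothetical stub (StubLogger and StubLoggerCollection are illustrations, not the real Lightning classes) that mirrors the reported 1.5.x behavior of joining every child logger's name with "_":

```python
class StubLogger:
    """Stand-in for a Lightning logger with a fixed name."""

    def __init__(self, name: str) -> None:
        self._name = name

    @property
    def name(self) -> str:
        return self._name


class StubLoggerCollection:
    """Mirrors the reported behavior: all child names are joined,
    duplicates included."""

    def __init__(self, loggers) -> None:
        self._loggers = loggers

    @property
    def name(self) -> str:
        return "_".join(logger.name for logger in self._loggers)


collection = StubLoggerCollection([StubLogger("exp"), StubLogger("exp")])
print(collection.name)  # -> "exp_exp", hence the doubled checkpoint path
```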

To Reproduce

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

tb_logger = TensorBoardLogger(save_dir=root_dir, name=experiment_name, version=version)
csv_logger = CSVLogger(save_dir=root_dir, name=experiment_name, version=version)
trainer = pl.Trainer(
    logger=[
        csv_logger,
        tb_logger,
    ],
)

#...

class TestModule(pl.LightningModule):
    def configure_callbacks(self):
        checkpoint = ModelCheckpoint(
            monitor="val_map",
            filename="{epoch}-{step}--{val_map:.4f}",
            auto_insert_metric_name=True,
            every_n_epochs=1,
        )
        return [checkpoint]

Expected behavior

I would expect the checkpoints to be saved under the same name I set for the loggers. I understand the motive behind the current implementation, but I think only unique names should be concatenated. The same applies to the version property.
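The "only unique names" behavior suggested above could be sketched like this (a minimal helper for illustration, not the actual fix that landed in #10976), using dict.fromkeys to deduplicate while preserving first-seen order:

```python
def unique_joined_name(names):
    """Join logger names with "_", keeping only the first occurrence
    of each name. dict.fromkeys preserves insertion order (Python 3.7+)."""
    return "_".join(dict.fromkeys(names))


print(unique_joined_name(["exp", "exp"]))    # -> "exp"
print(unique_joined_name(["exp", "other"]))  # -> "exp_other"
```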

Environment

  • PyTorch Lightning Version (e.g., 1.5.0): 1.5.4
  • PyTorch Version (e.g., 1.10): 1.10
  • Python version (e.g., 3.9): 3.8
  • OS (e.g., Linux): Linux
  • CUDA/cuDNN version:
  • GPU models and configuration: 1080
  • How you installed PyTorch (conda, pip, source): pip
  • If compiling from source, the output of torch.__config__.show(): -
  • Any other relevant information:

Additional context

A workaround is to set the dirpath argument of ModelCheckpoint manually.
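A sketch of that workaround: build the checkpoint directory from the same pieces the loggers were configured with, so ModelCheckpoint no longer derives the path from LoggerCollection.name. The root_dir, experiment_name, and version values here are placeholders, and the directory layout shown is only an assumption about how you might want to organize checkpoints:

```python
import os

root_dir = "/tmp/runs"      # placeholder save_dir used for the loggers
experiment_name = "exp"     # placeholder logger name
version = 0                 # placeholder logger version

# Build an explicit checkpoint directory instead of letting
# ModelCheckpoint derive it from the (concatenated) logger name.
dirpath = os.path.join(root_dir, experiment_name, f"version_{version}", "checkpoints")

# It would then be passed to the callback, e.g.:
# checkpoint = ModelCheckpoint(dirpath=dirpath, monitor="val_map", ...)
print(dirpath)
```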

cc @Borda @awaelchli @edward-io @ananthsub @tchaton

@twsl twsl added the bug Something isn't working label Dec 6, 2021
@tchaton tchaton added logger Related to the Loggers priority: 1 Medium priority task good first issue Good for newcomers labels Dec 6, 2021

tchaton commented Dec 6, 2021

Dear @twsl,

Would you be interested in contributing a fix?

Best,
T.C

@ashutoshml

I can help with fixing this issue.

@tchaton tchaton self-assigned this Dec 6, 2021
@tchaton
Copy link
Contributor

tchaton commented Dec 6, 2021

Great @ashutoshml, I assigned you to the issue.

@ashutoshml

Thanks. I'll start working on it.

What are the requirements for the fix, though?

Based on the issue reported, I assume the fix should only concern the experiment name in the loggers. Is there anything else that needs to change?


tchaton commented Dec 7, 2021

Hey @ashutoshml,

I would assume this fixes the problem. There might be an edge case if a user wants two loggers of the same type, though.

@ashutoshml

Sure. I can take a look at that.
Currently the issue is assigned to you, though.
