Callbacks are not saved to the config file #7540
It cannot save the callbacks to the config file because they are objects; you need to provide a class path, I believe. It is explained here: basically, since you instantiate them yourself, LightningCLI is unaware of them and doesn't know how to instantiate them. This does not just apply to callbacks.
As I mentioned, I also tried
I have to instantiate them because I cannot pass them the way the logger is passed, as in the following code. Should we handle callbacks the same way we handle loggers?

```python
cli = LightningCLI(
    BoringModel,
    seed_everything_default=123,
    trainer_defaults={
        "max_epochs": 2,
        "callbacks": [
            {
                "class_path": "pytorch_lightning.callbacks.ModelCheckpoint",
                "init_args": {
                    ...
                },
            },
        ],
        "logger": {
            "class_path": "pytorch_lightning.loggers.TestTubeLogger",
            "init_args": {
                "save_dir": "logs",
                "create_git_tag": True,
            },
        },
    },
)
```
Try this:

```yaml
seed_everything: 123
trainer:
  callbacks:
    - class_path: pytorch_lightning.callbacks.EarlyStopping
      init_args:
        patience: 5
        monitor: valid_loss
    - class_path: pytorch_lightning.callbacks.ModelCheckpoint
      init_args:
        monitor: valid_loss
        dirpath: here
```

And then when we print the config it outputs everything:

```yaml
seed_everything: 123
trainer:
  logger: true
  checkpoint_callback: true
  callbacks:
    - class_path: pytorch_lightning.callbacks.EarlyStopping
      init_args:
        monitor: valid_loss
        min_delta: 0.0
        patience: 5
        verbose: false
        mode: min
        strict: true
        check_finite: true
        check_on_train_epoch_end: false
    - class_path: pytorch_lightning.callbacks.ModelCheckpoint
      init_args:
        dirpath: here
        monitor: valid_loss
        verbose: false
        save_weights_only: false
        mode: min
        auto_insert_metric_name: true
  default_root_dir: null
  ...
```
Yes, I think the idea of jsonargparse is that we can instantiate any object like this.
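The `class_path` / `init_args` mechanism can be sketched roughly as follows. This is a simplified illustration of the idea, not jsonargparse's actual implementation, and `instantiate_from_spec` is a made-up name:

```python
import importlib

def instantiate_from_spec(spec):
    """Build an object from a {"class_path": ..., "init_args": ...} dict.

    A rough sketch of the idea behind class instantiation from config,
    not jsonargparse's real implementation.
    """
    module_name, _, class_name = spec["class_path"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(**spec.get("init_args", {}))

# Any importable class can be described this way, e.g. a stdlib one:
delta = instantiate_from_spec({
    "class_path": "datetime.timedelta",
    "init_args": {"days": 2},
})
```

Because the spec is plain strings and dicts, it round-trips through YAML, which an already-instantiated object cannot do.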
Thank you for your detailed reply; sorry, I may have made a mistake somewhere before.
Some additional clarification. In summary: if you want persistent callbacks, add them in
@tshu-w does it answer your questions or are there still any open problems here?
@mauvilsa Thank you for your detailed replies! I understand that the current design is to only provide persistent callbacks in the code. As a newbie, I'm trying to understand the rationale behind treating callbacks specially, and why callback configuration can't be provided to trainer_defaults the way the logger is.
@tshu-w It is great that you are challenging the concept. This is precisely why LightningCLI is in beta: so that important cases are identified, discussed, and improved for the stable version.

Regarding your comment: you are comparing logger and callbacks, but they are conceptually different cases. The callbacks are a list, whereas there is a single logger. With a list, the behavior of a default is not necessarily obvious. Suppose there are three default callbacks and the callbacks parameter is overridden with a config file that includes a single callback. There are several possibilities. One is that all default callbacks are discarded and the single given callback is used. Another is that config callbacks are appended to the default list, so in the example four callbacks would be used (this is what is currently implemented). Yet another is that someone wants to change some settings of a default callback; in that case, how do you specify that you want to modify, say, the second callback without being required to include in your config the settings of the first and third callbacks, which shouldn't change?

Note that in general what LightningCLI does is look at the type hints of a parameter and make it configurable. This is why it is able to work with user-defined modules and datamodules. The behavior for lists is that the whole list is replaced when overriding. Since callbacks are important in Lightning, persistent callbacks seemed like a useful feature. Maybe we should make persistent callbacks an independent init parameter of LightningCLI.
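The first two possibilities above can be sketched as plain list operations (illustrative function names, not Lightning's API; the third possibility, partially modifying one default, has no equivalent list operation, which is exactly the difficulty described):

```python
def replace_defaults(defaults, from_config):
    # Possibility 1: the config list wins outright; defaults are discarded.
    return list(from_config)

def append_to_defaults(defaults, from_config):
    # Possibility 2 (what the thread says is currently implemented):
    # config callbacks are appended after the defaults.
    return list(defaults) + list(from_config)

# Three default callbacks, one callback given in the config:
defaults = ["EarlyStopping", "ModelCheckpoint", "LearningRateMonitor"]
from_config = ["ModelCheckpoint"]

replaced = replace_defaults(defaults, from_config)   # 1 callback remains
appended = append_to_defaults(defaults, from_config) # 4 callbacks in total
```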
@mauvilsa Thanks for your replies! This makes sense.
One last question: since a config callback is now appended to the end of the list, can it override a previous callback of the same type?
With the current implementation, no; it just appends. But it could be a good idea to change it so that callbacks of the same type are overridden instead of appended.
We have plans to add better support for multiple callbacks of the same type, so overriding would be problematic, as users would intentionally use multiple callbacks of the same type.
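The override-by-type alternative discussed here could be sketched like this (a hypothetical helper, not what Lightning implements, and, as noted, it breaks down once multiple callbacks of the same type are expected):

```python
def merge_override_by_type(defaults, new):
    """Append new callbacks, but drop any default of the same type.

    Sketch of the override-by-type alternative discussed above. Note
    it silently removes ALL defaults of a matching type, which is why
    it conflicts with using several callbacks of the same type.
    """
    new_types = {type(cb) for cb in new}
    kept = [cb for cb in defaults if type(cb) not in new_types]
    return kept + list(new)

class EarlyStopping:  # stand-ins for the real callback classes
    pass

class ModelCheckpoint:
    pass

defaults = [EarlyStopping(), ModelCheckpoint()]
merged = merge_override_by_type(defaults, [ModelCheckpoint()])
# merged keeps the default EarlyStopping plus only the new ModelCheckpoint.
```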
Thank you for your efforts, and it's great to see PyTorch Lightning keep getting better!
I came to this issue when I wanted to have a default callback of
c.f. https://lightning.ai/docs/pytorch/stable/cli/lightning_cli_expert.html#configure-forced-callbacks |
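Per the linked docs section, a default callback can be forced by passing an instance via `trainer_defaults`; such callbacks are always added, on top of any listed in a config file. A minimal sketch, assuming a hypothetical `MyModel` module and the import paths of recent Lightning releases (older versions exposed `LightningCLI` elsewhere):

```python
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_lightning.cli import LightningCLI

# Instances in trainer_defaults become "forced" (persistent) callbacks:
# they are applied in addition to callbacks given in a config file.
cli = LightningCLI(
    MyModel,  # hypothetical LightningModule
    trainer_defaults={
        "callbacks": [EarlyStopping(monitor="valid_loss", patience=5)]
    },
)
```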
🐛 Bug
LightningCLI cannot save callbacks to the config file.
Please reproduce using the BoringModel
train.py:
Expected behavior
Callbacks are saved to the config file. Also, should the logger and callbacks provided to trainer_defaults be of the same form?
Environment