
[CLI] print_config not showing correct callbacks #11252

Closed
semaphore-egg opened this issue Dec 24, 2021 · 3 comments · Fixed by #11309
Labels
bug (Something isn't working) · lightningcli (pl.cli.LightningCLI)
Milestone
1.5.x

Comments

@semaphore-egg
Contributor

semaphore-egg commented Dec 24, 2021

🐛 Bug

[CLI] print_config not showing correct callbacks

To Reproduce

Run the following command from the root directory of pytorch-lightning (cloned from origin/master):
python -m pl_examples.basic_examples.autoencoder --print_config

This prints callbacks: null.
The full printed config is:

seed_everything: 1234
trainer:
  logger: true
  checkpoint_callback: null
  enable_checkpointing: true
  callbacks: null
  default_root_dir: null
  gradient_clip_val: null
  gradient_clip_algorithm: null
  process_position: 0
  num_nodes: 1
  num_processes: 1
  devices: null
  gpus: null
  auto_select_gpus: false
  tpu_cores: null
  ipus: null
  log_gpu_memory: null
  progress_bar_refresh_rate: null
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1
  fast_dev_run: false
  accumulate_grad_batches: null
  max_epochs: 10
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0
  flush_logs_every_n_steps: null
  log_every_n_steps: 50
  accelerator: null
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  weights_summary: top
  weights_save_path: null
  num_sanity_val_steps: 2
  resume_from_checkpoint: null
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  prepare_data_per_node: null
  plugins: null
  amp_backend: native
  amp_level: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  stochastic_weight_avg: false
  terminate_on_nan: null
model:
  hidden_dim: 64
data:
  batch_size: 32

Expected behavior

The printed config should list the ImageSampler callback used by the example instead of callbacks: null.
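
For context, a hedged minimal sketch of how this situation can arise (this is not the pl_examples autoencoder code; TinyModel, PrintingCallback, and tiny_cli.py are invented placeholders, and the import path assumes PL 1.5.x). A callback attached in code, e.g. via trainer_defaults, is active during fit(), yet --print_config reports callbacks: null, apparently because only values that went through the parser are serialized (see the explanation referenced in the comments below):

import torch
import pytorch_lightning as pl
from pytorch_lightning.utilities.cli import LightningCLI  # PL 1.5.x import path


class TinyModel(pl.LightningModule):
    """Toy model so the CLI has something to instantiate."""

    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.layer = torch.nn.Linear(32, hidden_dim)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


class PrintingCallback(pl.Callback):
    """Callback attached in code rather than through the config."""

    def on_train_start(self, trainer, pl_module):
        print("training starts")


if __name__ == "__main__":
    # `python tiny_cli.py --print_config` still shows `callbacks: null`,
    # even though the callback below is attached and would run during fit().
    LightningCLI(TinyModel, trainer_defaults={"callbacks": [PrintingCallback()]})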

Environment

  • PyTorch Lightning Version: 1.5.6
  • PyTorch Version: 1.10.1
  • Python version: 3.8
  • OS: Ubuntu 20.04
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • How you installed PyTorch: pip
  • If compiling from source, the output of torch.__config__.show():
  • Any other relevant information:

Additional context

cc @carmocca @mauvilsa

@semaphore-egg semaphore-egg added the bug Something isn't working label Dec 24, 2021
@rohitgr7 rohitgr7 added the lightningcli pl.cli.LightningCLI label Dec 24, 2021
@rohitgr7 rohitgr7 added this to the 1.5.x milestone Dec 24, 2021
@carmocca
Contributor

Duplicate of #7540; see #7540 (comment) for an explanation of this behavior.
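
For anyone landing here, a hedged workaround sketch (an assumption based on LightningCLI's documented class_path / init_args syntax, not taken from #7540 or the linked PR): a callback supplied through the parser, e.g. from a config file, does appear in the --print_config output because it is part of the parsed config. LearningRateMonitor and the file/script names are only illustrative:

# Write a config file declaring a callback with class_path / init_args and
# pass it to the CLI script from the sketch above.
config_yaml = """\
trainer:
  callbacks:
    - class_path: pytorch_lightning.callbacks.LearningRateMonitor
      init_args:
        logging_interval: epoch
"""

with open("callbacks.yaml", "w") as f:
    f.write(config_yaml)

# Then, for example:
#   python tiny_cli.py --config callbacks.yaml --print_config
# should print a populated trainer.callbacks entry (class_path / init_args)
# instead of callbacks: null.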

@tchaton
Contributor

tchaton commented Jan 4, 2022

Hey @carmocca. Is this information written in the documentation?

@carmocca
Contributor

carmocca commented Jan 4, 2022

@tchaton there you go: #11309
