Is your feature request related to a problem? Please describe.
Previously, when I ran an evaluation, a new run_id was generated automatically. This is no longer the case. How can I get a new run_id?
There is the CLI parameter

    parser.add_argument(
        "--eval_run_id",
        type=str,
        required=False,
        dest="eval_run_id",
        help="(optional) if specified, uses the provided run id to store the evaluation results",
    )

but this requires me to choose the run_id myself.
Describe the solution you'd like
Generate a new run_id automatically unless otherwise specified.
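A minimal sketch of what I mean, assuming the project uses argparse as in the snippet above (the `uuid`-based fallback is just an illustration, not the project's actual id scheme):

    import argparse
    import uuid

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--eval_run_id",
        type=str,
        required=False,
        dest="eval_run_id",
        help="(optional) if specified, uses the provided run id to store the evaluation results",
    )

    args = parser.parse_args()
    # Fall back to a freshly generated id when the user did not supply one.
    run_id = args.eval_run_id or uuid.uuid4().hex

This way, supplying `--eval_run_id my_run` keeps the current behaviour, while omitting it yields a new unique id per evaluation.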
Describe alternatives you've considered
No response
Additional context
No response
Organisation
No response
There should still be a new run_id generated if not specified.
Is this still relevant? How should I understand this: should the run_id for evaluation always be different from the run_id of the pretrained model? Or is it just a way to use --eval_run_id (now --run_id) without supplying a run_id? This probably also ties into #258?
The run_id for evaluate should by default be different from the run_id of the model you load (we can run many different eval experiments on one pretrained model).