
Evaluation not returning new run_id by default #261


Open
kctezcan opened this issue May 22, 2025 · 4 comments · May be fixed by #272
Labels: enhancement (New feature or request)

Comments

@kctezcan (Contributor)

Is your feature request related to a problem? Please describe.

Previously, when I ran an evaluation, a new run_id was generated automatically. This is no longer the case. How can I get a new run_id?

There is the CLI parameter:

parser.add_argument(
    "--eval_run_id",
    type=str,
    required=False,
    dest="eval_run_id",
    help="(optional) if specified, uses the provided run id to store the evaluation results",
)

but this requires me to choose the run_id myself.

Describe the solution you'd like

Generate a new run_id automatically unless otherwise specified.
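
A minimal sketch of that default, assuming Python's standard argparse and uuid modules; the fallback logic below is only illustrative, not the project's actual code:

import argparse
import uuid

parser = argparse.ArgumentParser()
parser.add_argument(
    "--eval_run_id",
    type=str,
    required=False,
    dest="eval_run_id",
    help="(optional) if specified, uses the provided run id to store the evaluation results",
)
args = parser.parse_args()

# Fall back to a freshly generated run id when none was supplied on the CLI.
if args.eval_run_id is None:
    args.eval_run_id = uuid.uuid4().hex[:8]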

Describe alternatives you've considered

No response

Additional context

No response

Organisation

No response

kctezcan added the enhancement (New feature or request) label on May 22, 2025
@clessig (Collaborator) commented May 22, 2025

There should still be a new run_id generated if not specified.

@grassesi (Contributor)

> There should still be a new run_id generated if not specified.

Is this still relevant? How should I understand this: should the run_id for evaluation always be different from the run_id of the pretrained model? Or is it just a way to use --eval_run_id (now --run_id) without supplying a run_id? This probably also ties into #258?

@clessig (Collaborator) commented May 27, 2025

The run_id for evaluation should by default be different from the run_id of the model you load (we can run many different eval experiments on one pretrained model).
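
As an illustration of that default, a sketch assuming a short random id is acceptable; the helper name resolve_eval_run_id is hypothetical, not the project's API:

import uuid
from typing import Optional

def resolve_eval_run_id(model_run_id: str, eval_run_id: Optional[str] = None) -> str:
    """Return the run id under which evaluation results are stored.

    By default a fresh id is generated, so evaluation results never land
    under the pretrained model's run id; passing eval_run_id explicitly
    overrides this (e.g. to append to an existing evaluation run).
    """
    if eval_run_id is not None:
        return eval_run_id
    # A freshly generated random id is effectively always distinct from model_run_id.
    return uuid.uuid4().hex[:8]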

@grassesi (Contributor)

I will implement a quick fix to #221 for that 👍

grassesi linked a pull request (#272) on May 27, 2025 that will close this issue