Missing default value to logprobs in openai.types.chat.chat_completion.Choice #1006


Closed
1 task done
davorrunje opened this issue Dec 22, 2023 · 4 comments · Fixed by #1007
Labels
enhancement New feature or request

Comments

@davorrunje
Contributor

davorrunje commented Dec 22, 2023

Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

#980 added token logprobs to chat completions of type Optional[ChoiceLogprobs] in openai.types.chat.chat_completion.Choice and openai.types.chat.chat_completion_chunk.Choice. In the latter, the default value is set to None, while in the former it is not set. This causes backward compatibility problems with code written for versions prior to 1.5.0.
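The inconsistency can be reproduced with a minimal sketch (not the actual openai models, just simplified stand-ins): in pydantic v2, annotating a field as Optional does not make it optional to supply; without an explicit = None default, the field is still required.

```python
# Minimal sketch of the two Choice models' behavior. These are
# illustrative stand-ins, not the library's actual classes.
from typing import Optional
from pydantic import BaseModel, ValidationError

class StreamChoice(BaseModel):
    index: int
    logprobs: Optional[dict] = None  # default set -> field may be omitted

class Choice(BaseModel):
    index: int
    logprobs: Optional[dict]  # no default -> field is required

StreamChoice(index=0)  # OK: logprobs defaults to None

try:
    Choice(index=0)  # fails: logprobs must be passed explicitly
except ValidationError as e:
    print(e.errors()[0]["type"])  # -> "missing"
```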

To Reproduce

Execution of the following code fails:

from openai.types.chat.chat_completion import ChatCompletionMessage, Choice

msg = ChatCompletionMessage(role="assistant", content="")

Choice(
    index=0,
    finish_reason="stop",
    message=msg,
)

The output:

----> 1 Choice(
      2     index=0,
      3     finish_reason="stop",
      4     message=msg,
      5 )

File /.venv-3.10/lib/python3.10/site-packages/pydantic/main.py:164, in BaseModel.__init__(__pydantic_self__, **data)
    162 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    163 __tracebackhide__ = True
--> 164 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)

ValidationError: 1 validation error for Choice
logprobs
  Field required [type=missing, input_value={'index': 0, 'finish_reas...=None, tool_calls=None)}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.5/v/missing

Setting logprobs to None fixes the problem.

from openai.types.chat.chat_completion import ChatCompletionMessage, Choice

msg = ChatCompletionMessage(role="assistant", content="")

Choice(
    index=0,
    finish_reason="stop",
    message=msg,
    logprobs=None,  # added line
)

Code snippets

see above

OS

Linux

Python version

Python 3.10.13

Library version

openai 1.6.0

@davorrunje davorrunje added the bug Something isn't working label Dec 22, 2023
@davorrunje davorrunje changed the title MIssing default value to logprobs in openai.types.chat.chat_completion.Choice Missing default value to logprobs in openai.types.chat.chat_completion.Choice Dec 22, 2023
@rattrayalex
Collaborator

Why are you instantiating Choice in this way? Do you mean to instantiate ChoiceParam?

@rattrayalex
Collaborator

rattrayalex commented Dec 23, 2023

ah, it seems you're doing this for tests, is that right?

Hmm, we didn't intend for these models to be instantiated this way, so we haven't considered this sort of change to be breaking, but of course in cases like yours, it is.

If we do this we'll want to add = None to all Optional fields.

This is something we'll have to think about and likely won't be actioned until after the new year.

Thank you for raising this!

@rattrayalex rattrayalex added enhancement New feature or request and removed bug Something isn't working labels Dec 23, 2023
@davorrunje
Contributor Author

davorrunje commented Dec 23, 2023

Thank you for the reply.

The main reason I raised the issue is that it broke AutoGen (microsoft/autogen#1043). I made a PR there, but until the new version is released everyone has to downgrade the openai lib as a remedy.

I understand your argument and agree with it in general. The thing that was puzzling me was that the default value was not set in openai.types.chat.chat_completion.Choice, but it was set in openai.types.chat.chat_completion_chunk.Choice. I could not understand why and assumed it was an oversight.

@rattrayalex
Collaborator

rattrayalex commented Dec 23, 2023

Thank you @davorrunje !

The use-case of needing to build up a full object from chunks makes sense. We plan to bake that into this library in the coming weeks, but in the meantime we should support it.

We do plan to add = None to cases like this in general, so I'll go ahead and merge your PR, as it's a step in the right direction. Thank you!
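The chunk-accumulation use-case mentioned above can be sketched roughly as follows; this is an illustrative pattern over simplified dicts, not the library's actual streaming types or any built-in helper:

```python
# Illustrative sketch: merge streamed delta fragments into one
# complete assistant message. The chunk shape here is a simplified
# stand-in for the real chat-completion chunk objects.
def accumulate(chunks):
    content_parts = []
    finish_reason = None
    for chunk in chunks:
        delta = chunk.get("delta", {})
        if delta.get("content"):
            content_parts.append(delta["content"])
        if chunk.get("finish_reason"):
            finish_reason = chunk["finish_reason"]
    return {
        "role": "assistant",
        "content": "".join(content_parts),
        "finish_reason": finish_reason,
    }

chunks = [
    {"delta": {"role": "assistant", "content": "Hel"}},
    {"delta": {"content": "lo"}},
    {"delta": {}, "finish_reason": "stop"},
]
print(accumulate(chunks))
# -> {'role': 'assistant', 'content': 'Hello', 'finish_reason': 'stop'}
```

Rebuilding a full model instance from such accumulated pieces is exactly where a required-but-nullable logprobs field gets in the way.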
