client.chat.completions.create() not taking logprobs and top_logprobs as arguments #975
Comments
Sorry about this – we're working on it and should have it out soon.
This has been released.
Hey @rattrayalex, is there a release planned for logprobs of fine-tuned models? As of now I am getting an error.
cc @enochcheung
Thanks for the CC!
Thanks a ton
Oddly, I'm getting the …
I have the same problem
Me too, the problem happens sometimes
I'm just migrating our fine-tuned curie/davinci completion models over to the new GPT-3.5 Turbo chat models (for the Jan 4th deprecation deadline), and the only roadblock is the seeming lack of logprobs support:
I get this both via the OpenAI Python module and a direct POST request. I am using my fine-tuned model (on …)
FYI @eyb1: I had the same issue, but logprobs appear to work for fine-tuned models as well now. In my case this includes models that were trained when the feature wasn't ready yet.
Hello all, logprobs has been enabled for fine-tuned models based on the turbo-1106 models as of 2023-11-27. We will also enable support for fine-tunes based on the turbo-0613 models today (2024-01-02). I will comment back in this thread once that has been deployed. Apologies for the inconvenience.
@eyb1 this has now been enabled on fine-tunes of …
Thank you for your kind help. I'm wondering, what about non-fine-tuned models? I directly call the gpt-3.5-turbo-1106 API to complete some tasks with logprobs set to True.
@rattrayalex Is there a plan to fix this problem?
@jhallard shared that this has been enabled on all fine-tuned models now, and thus I believe on all relevant models. If you continue to encounter that error message, please share it here, pinging him.
Hey @rattrayalex, I believe they're referring to a different problem – but the same one I mentioned above. Different from the fine-tunes: just trying to return logprobs on 3.5-turbo-1106 seems to intermittently return a … I also just tested it again, and I'm still getting the intermittent error.
Thanks for reporting @natty-zs, this has been fixed! Please let me know if you continue to see this issue.
Seems it has been resolved; I called the chat API 15 times and did not see this error again. Maybe this issue can be closed.

```python
from openai import OpenAI

client = OpenAI()
for _ in range(15):
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
        logprobs=True,
        top_logprobs=2,
    )
```
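Once the call succeeds, the per-token logprobs come back nested inside each choice. Here is a minimal sketch of pulling them out of the response JSON — the sample payload below is invented for illustration, but its nesting follows the Chat Completions logprobs shape (`choices[].logprobs.content[].top_logprobs[]`):

```python
# Sketch: extracting top_logprobs from a chat completion response dict.
# The sample payload is made up for illustration; real responses carry
# many more fields (id, usage, finish_reason, ...).
sample = {
    "choices": [
        {
            "message": {"role": "assistant", "content": "Hi"},
            "logprobs": {
                "content": [
                    {
                        "token": "Hi",
                        "logprob": -0.1,
                        "top_logprobs": [
                            {"token": "Hi", "logprob": -0.1},
                            {"token": "Hello", "logprob": -2.3},
                        ],
                    }
                ]
            },
        }
    ]
}


def top_tokens(response: dict) -> list:
    """Return, for each generated token, its top alternatives with logprobs."""
    content = response["choices"][0]["logprobs"]["content"]
    return [
        [(alt["token"], alt["logprob"]) for alt in tok["top_logprobs"]]
        for tok in content
    ]


print(top_tokens(sample))  # [[('Hi', -0.1), ('Hello', -2.3)]]
```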
No worries - thanks for helping out! Looks like it's fixed on my end 👍
Hi there, I still get this error: `TypeError: Completions.create() got an unexpected keyword argument 'logprobs'`.

```python
completion = openai.chat.completions.create(
    # ...
)
```
Make sure to update your openai library to the latest version.
Thank you so much! It is fixed!
Confirm this is an issue with the Python library and not an underlying OpenAI API
Describe the bug
I just updated to 1.4.0, and `client.chat.completions.create()` in the Python library is not taking `logprobs` or `top_logprobs` as arguments, but these arguments are already enabled if I access the API through an HTTP request.

To Reproduce

To reproduce:
Then it will return:

```
TypeError: Completions.create() got an unexpected keyword argument 'logprobs'
```
But you can use:
And you will get a response with logprobs.
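The direct HTTP route mentioned above looks roughly like this. This sketch only builds the request payload and headers; actually sending it requires a real API key, so the `requests.post` call is left commented out (the endpoint URL is the standard Chat Completions endpoint):

```python
import json
import os

# Sketch of the equivalent direct HTTP request to the Chat Completions API.
# Only the payload is built here; uncomment the POST to actually send it.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "logprobs": True,
    "top_logprobs": 2,
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '<your key>')}",
}
body = json.dumps(payload)
print(body)

# import requests
# resp = requests.post(
#     "https://api.openai.com/v1/chat/completions",
#     headers=headers,
#     data=body,
# )
# print(resp.json()["choices"][0]["logprobs"])
```

This is why the HTTP path worked while the old Python client did not: the server already accepted `logprobs`/`top_logprobs`, but library versions predating the fix rejected the keyword arguments client-side.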
Code snippets
No response
OS
macOS
Python version
Python v3.11.5
Library version
openai v1.4.0