
client.chat.completions.create() not taking logprobs and top_logprobs as arguments #975


Closed
1 task done
az-2os opened this issue Dec 15, 2023 · 23 comments
Labels
bug Something isn't working

Comments

@az-2os

az-2os commented Dec 15, 2023

Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

I just updated to 1.4.0, and client.chat.completions.create() in the Python library does not accept logprobs or top_logprobs as arguments, but these arguments already work when I call the API directly over HTTP.

To Reproduce


from openai import OpenAI
client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    logprobs=True,
)

Then it will return:
TypeError: Completions.create() got an unexpected keyword argument 'logprobs'

But you can use:

import requests

url = "https://api.openai.com/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {OPENAI_API_KEY}"
}
data = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ],
    "logprobs": True,
}

response = requests.post(url, headers=headers, json=data)

And you will get a response with logprobs.

Code snippets

No response

OS

macOS

Python version

Python v3.11.5

Library version

openai v1.4.0

@az-2os az-2os added the bug Something isn't working label Dec 15, 2023
@rattrayalex
Collaborator

Sorry about this – we're working on it and should have it out soon.

@rattrayalex
Collaborator

This has been released.
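For anyone landing here later, a quick way to gate the snippet above on the installed client version. This is a minimal sketch: `supports_chat_logprobs` is a hypothetical helper, and the 1.5.0 cutoff is an assumption based on this thread's timeline, not an official changelog entry.

```python
def supports_chat_logprobs(version: str) -> bool:
    """Hypothetical helper: True if this openai-python version is believed
    to accept logprobs/top_logprobs in chat.completions.create().
    The 1.5.0 cutoff is an assumption based on this issue's timeline."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (1, 5)

# Usage sketch: compare against the runtime version, e.g.
# import openai
# if not supports_chat_logprobs(openai.__version__):
#     raise RuntimeError("upgrade with: pip install --upgrade openai")
```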

@yash-quizizz

Hey @rattrayalex, is there a release planned for logprobs on fine-tuned models? As of now I am getting the error This model does not support the 'logprobs' parameter.

@rattrayalex
Collaborator

cc @enochcheung

@enochcheung

Thanks for the CC! logprobs is being rolled out to fine-tuned models as well, but the rollout is not yet complete.

@yash-quizizz

Thanks a ton
@enochcheung @rattrayalex

@natty-zs

Oddly, I'm getting the Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}} intermittently on gpt-3.5-turbo-1106, but seems stable on gpt-3.5-turbo.

@gonghaohuang

Oddly, I'm getting the Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}} intermittently on gpt-3.5-turbo-1106, but seems stable on gpt-3.5-turbo.

I have the same problem

@ruleGreen

Me too, the problem happens sometimes.

@eyb1

eyb1 commented Dec 31, 2023

I'm just migrating our fine-tuned curie/davinci completion models over to the new GPT-3.5 Turbo chat models (for the Jan 4th deprecation deadline), and the only roadblock is the seeming lack of logprobs support:

Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}}

I get this both via the OpenAI Python module and a direct POST request. I am using my fine-tuned model (on gpt-3.5-turbo-0613) and haven't tried others. We use logprobs as an integral part of determining whether or not to use the model's output; will this be returning any time soon? @enochcheung, I see you mentioned it above two weeks ago. Are we going to have to re-think our algorithm within the next four days?

@mohoyer

mohoyer commented Jan 2, 2024

FYI @eyb1: I had the same issue, but logprobs appears to work for fine-tuned models as well now. In my case this includes models that were trained before the feature was ready.

@jhallard
Contributor

jhallard commented Jan 2, 2024

Hello all,

Logprobs has been enabled for fine-tuned models based on the turbo-1106 models as of 2023-11-27. We will also enable support for fine-tunes based on the turbo-0613 models today (2024-01-02). I will comment back in this thread once that has been deployed.

Apologies for the inconvenience.

@jhallard
Contributor

jhallard commented Jan 2, 2024

@eyb1 this has now been enabled on fine-tunes of gpt-3.5-turbo-0613.

@ruleGreen

Thank you for your kind help. I'm wondering about non-fine-tuned models, though: I directly call the gpt-3.5-turbo-1106 API to complete some tasks with logprobs set to True.

@Zhang-Zelong

Me too, the problem happens sometimes.

@rattrayalex Is there a plan to fix this problem?

@rattrayalex
Collaborator

rattrayalex commented Jan 3, 2024

@jhallard shared that this has been enabled on all fine-tuned models now, and thus I believe all relevant models. If you continue to encounter that error message, please share it here, pinging him.

@natty-zs

natty-zs commented Jan 3, 2024

Hey @rattrayalex, I believe they're referring to a different problem, the same one I mentioned above. Separate from the fine-tunes issue: just trying to return logprobs on 3.5-turbo-1106 intermittently returns Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}}. Not sure why it's intermittent. If I set up automatic retries it handles it fine, but I'm not sure why the error occurs in the first place, since logprobs is enabled.

I also just tested it again, and I'm still getting the intermittent error.
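The retry workaround above can be sketched as a small wrapper. `call_with_retries` is a hypothetical helper, not part of the SDK; note that the v1 client's built-in `max_retries` does not retry 400 responses, so an intermittent invalid_request_error has to be caught explicitly (in the real client that exception is `openai.BadRequestError`).

```python
import time


def call_with_retries(fn, attempts=3, delay=1.0, retryable=(Exception,)):
    """Call fn(); on a retryable exception, wait `delay` seconds and try
    again, re-raising the exception after the final attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Usage sketch (requires an API key):
# import openai
# client = openai.OpenAI()
# completion = call_with_retries(
#     lambda: client.chat.completions.create(
#         model="gpt-3.5-turbo-1106",
#         messages=[{"role": "user", "content": "Hello!"}],
#         logprobs=True,
#     ),
#     retryable=(openai.BadRequestError,),
# )
```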

@rattrayalex
Collaborator

rattrayalex commented Jan 3, 2024

Thanks for reporting @natty-zs, this has been fixed! Please let me know if you continue to see this issue.

@Zhang-Zelong

It seems to have been resolved; I called the chat API 15 times and did not see this error again, so maybe this issue can be closed.

from openai import OpenAI

client = OpenAI()

for _ in range(15):
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
        logprobs=True,
        top_logprobs=2,
    )
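Once the request succeeds, the per-token logprobs come back on `completion.choices[0].logprobs.content`. A minimal sketch for turning them into linear probabilities; `token_probabilities` is a hypothetical helper that only assumes each entry has `.token` and `.logprob` attributes, which matches the v1 response objects.

```python
import math


def token_probabilities(logprob_content):
    """Map each returned token to its linear probability (exp of the logprob)."""
    return [(item.token, math.exp(item.logprob)) for item in logprob_content]

# Usage sketch, given a successful completion from the loop above:
# probs = token_probabilities(completion.choices[0].logprobs.content)
```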

@natty-zs

natty-zs commented Jan 5, 2024

Thanks for reporting @natty-zs, this has been fixed! Please let me know if you continue to see this issue.

No worries - thanks for helping out! Looks like it's fixed on my end 👍

@0825kangkang

Hi there, I still get the error TypeError: Completions.create() got an unexpected keyword argument 'logprobs'.
Could you help take a look at it? The model I am using is gpt-3.5-turbo.

completion = openai.chat.completions.create(
    model=model,
    temperature=temperature,
    logprobs=True,
    messages=[
        {"role": "system", "content": system_message},  # System message first (if used)
        {"role": "user", "content": prompt},  # User message follows
    ],
)

@dalong-G

Hi there, I still get the error TypeError: Completions.create() got an unexpected keyword argument 'logprobs'. Could you help take a look at it? The model I am using is gpt-3.5-turbo.

completion = openai.chat.completions.create(
    model=model,
    temperature=temperature,
    logprobs=True,
    messages=[
        {"role": "system", "content": system_message},  # System message first (if used)
        {"role": "user", "content": prompt},  # User message follows
    ],
)

Make sure to update your openai library to the latest version.

@0825kangkang

Hi there, I still get the error TypeError: Completions.create() got an unexpected keyword argument 'logprobs'. Could you help take a look at it? The model I am using is gpt-3.5-turbo.

completion = openai.chat.completions.create(
    model=model,
    temperature=temperature,
    logprobs=True,
    messages=[
        {"role": "system", "content": system_message},  # System message first (if used)
        {"role": "user", "content": prompt},  # User message follows
    ],
)

Make sure to update your openai library to the latest version.

Thank you so much! It is fixed!

stainless-app bot added a commit that referenced this issue Mar 27, 2025