
Commit eba7958

stainless-bot authored and megamanics committed

feat(api): unify function types (openai#741)

Also fixes an enum `assistant.run.step` -> `thread.run.step`

1 parent: c20023e

20 files changed: +223 −251 lines

Diff for: api.md (+6)

````diff
@@ -1,3 +1,9 @@
+# Shared Types
+
+```python
+from openai.types import FunctionObject, FunctionParameters
+```
+
 # Completions
 
 Types:
````
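The two shared types exported from `openai.types` can be pictured with plain stand-ins; a minimal sketch (the `Dict[str, object]` alias for `FunctionParameters` is an assumption inferred from the `parameters: Dict[str, object]` fields this commit replaces, not the SDK's literal definition):

```python
from typing import Dict

# Hypothetical stand-in: FunctionParameters is a JSON-Schema-style mapping
# describing a function's arguments, assumed here to be Dict[str, object].
FunctionParameters = Dict[str, object]

# A parameters schema in the shape the removed per-file classes documented.
weather_params: FunctionParameters = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}
```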

Diff for: src/openai/resources/chat/completions.py (+72 −12)

```diff
@@ -137,8 +137,18 @@ def create(
 
           [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          response_format: An object specifying the format that the model must output. Used to enable JSON
-              mode.
+          response_format: An object specifying the format that the model must output.
+
+              Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+              message the model generates is valid JSON.
+
+              **Important:** when using JSON mode, you **must** also instruct the model to
+              produce JSON yourself via a system or user message. Without this, the model may
+              generate an unending stream of whitespace until the generation reaches the token
+              limit, resulting in increased latency and appearance of a "stuck" request. Also
+              note that the message content may be partially cut off if
+              `finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+              or the conversation exceeded the max context length.
 
           seed: This feature is in Beta. If specified, our system will make a best effort to
               sample deterministically, such that repeated requests with the same `seed` and
```

The identical docstring expansion is applied in the five remaining `create` overloads (hunk bodies are the same as above; only the line offsets differ):

```diff
@@ -304,8 +314,18 @@ def create(
@@ -464,8 +484,18 @@ def create(
@@ -704,8 +734,18 @@ async def create(
@@ -871,8 +911,18 @@ async def create(
@@ -1031,8 +1081,18 @@ async def create(
```
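The expanded `response_format` docstring suggests two practical habits: ask for JSON explicitly in a system or user message, and check `finish_reason` before parsing. A sketch of both (a plain request dict, no live API call; the model name is illustrative):

```python
import json

# Sketch of a JSON-mode Chat Completions request body. Per the docstring,
# one of the messages must explicitly instruct the model to produce JSON.
request_body = {
    "model": "gpt-4-1106-preview",  # illustrative model name
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "You are a helpful assistant. Respond in JSON."},
        {"role": "user", "content": "Name three primary colors."},
    ],
}

def parse_json_mode(content: str, finish_reason: str) -> dict:
    """Guard against truncated JSON-mode output before parsing it."""
    if finish_reason == "length":
        # Generation hit max_tokens or the context limit; JSON may be cut off.
        raise ValueError("output truncated (finish_reason='length'); raise max_tokens")
    return json.loads(content)
```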

Diff for: src/openai/types/__init__.py (+2)

```diff
@@ -5,6 +5,8 @@
 from .edit import Edit as Edit
 from .image import Image as Image
 from .model import Model as Model
+from .shared import FunctionObject as FunctionObject
+from .shared import FunctionParameters as FunctionParameters
 from .embedding import Embedding as Embedding
 from .fine_tune import FineTune as FineTune
 from .completion import Completion as Completion
```

Diff for: src/openai/types/beta/assistant.py (+4 −31)

```diff
@@ -1,12 +1,13 @@
 # File generated from our OpenAPI spec by Stainless.
 
 import builtins
-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
 from typing_extensions import Literal
 
+from ..shared import FunctionObject
 from ..._models import BaseModel
 
-__all__ = ["Assistant", "Tool", "ToolCodeInterpreter", "ToolRetrieval", "ToolFunction", "ToolFunctionFunction"]
+__all__ = ["Assistant", "Tool", "ToolCodeInterpreter", "ToolRetrieval", "ToolFunction"]
 
 
 class ToolCodeInterpreter(BaseModel):
@@ -19,36 +20,8 @@ class ToolRetrieval(BaseModel):
     """The type of tool being defined: `retrieval`"""
 
 
-class ToolFunctionFunction(BaseModel):
-    description: str
-    """
-    A description of what the function does, used by the model to choose when and
-    how to call the function.
-    """
-
-    name: str
-    """The name of the function to be called.
-
-    Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length
-    of 64.
-    """
-
-    parameters: Dict[str, builtins.object]
-    """The parameters the functions accepts, described as a JSON Schema object.
-
-    See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling)
-    for examples, and the
-    [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for
-    documentation about the format.
-
-    To describe a function that accepts no parameters, provide the value
-    `{"type": "object", "properties": {}}`.
-    """
-
-
 class ToolFunction(BaseModel):
-    function: ToolFunctionFunction
-    """The function definition."""
+    function: FunctionObject
 
     type: Literal["function"]
     """The type of tool being defined: `function`"""
```

Diff for: src/openai/types/beta/assistant_create_params.py (+4 −31)

```diff
@@ -2,16 +2,17 @@
 
 from __future__ import annotations
 
-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
 from typing_extensions import Literal, Required, TypedDict
 
+from ...types import shared_params
+
 __all__ = [
     "AssistantCreateParams",
     "Tool",
     "ToolAssistantToolsCode",
     "ToolAssistantToolsRetrieval",
     "ToolAssistantToolsFunction",
-    "ToolAssistantToolsFunctionFunction",
 ]
 
 
@@ -71,36 +72,8 @@ class ToolAssistantToolsRetrieval(TypedDict, total=False):
     """The type of tool being defined: `retrieval`"""
 
 
-class ToolAssistantToolsFunctionFunction(TypedDict, total=False):
-    description: Required[str]
-    """
-    A description of what the function does, used by the model to choose when and
-    how to call the function.
-    """
-
-    name: Required[str]
-    """The name of the function to be called.
-
-    Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length
-    of 64.
-    """
-
-    parameters: Required[Dict[str, object]]
-    """The parameters the functions accepts, described as a JSON Schema object.
-
-    See the [guide](https://platform.openai.com/docs/guides/gpt/function-calling)
-    for examples, and the
-    [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for
-    documentation about the format.
-
-    To describe a function that accepts no parameters, provide the value
-    `{"type": "object", "properties": {}}`.
-    """
-
-
 class ToolAssistantToolsFunction(TypedDict, total=False):
-    function: Required[ToolAssistantToolsFunctionFunction]
-    """The function definition."""
+    function: Required[shared_params.FunctionObject]
 
     type: Required[Literal["function"]]
     """The type of tool being defined: `function`"""
```

Diff for: src/openai/types/beta/assistant_update_params.py (+4 −31)

The change mirrors assistant_create_params.py at slightly different line offsets: `Dict` leaves the typing import, `shared_params` is imported, and the local `ToolAssistantToolsFunctionFunction` class (plus its `__all__` entry) is removed. Abridged hunks (the removed class body is identical to the one in assistant_create_params.py):

```diff
@@ -2,16 +2,17 @@
-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
 from typing_extensions import Literal, Required, TypedDict
 
+from ...types import shared_params
+
@@ -73,36 +74,8 @@ class ToolAssistantToolsRetrieval(TypedDict, total=False):
 class ToolAssistantToolsFunction(TypedDict, total=False):
-    function: Required[ToolAssistantToolsFunctionFunction]
-    """The function definition."""
+    function: Required[shared_params.FunctionObject]
 
     type: Required[Literal["function"]]
     """The type of tool being defined: `function`"""
```

Diff for: src/openai/types/beta/thread_create_and_run_params.py (+4 −31)

```diff
@@ -2,9 +2,11 @@
 
 from __future__ import annotations
 
-from typing import Dict, List, Union, Optional
+from typing import List, Union, Optional
 from typing_extensions import Literal, Required, TypedDict
 
+from ...types import shared_params
+
 __all__ = [
     "ThreadCreateAndRunParams",
     "Thread",
@@ -13,7 +15,6 @@
     "ToolAssistantToolsCode",
     "ToolAssistantToolsRetrieval",
     "ToolAssistantToolsFunction",
-    "ToolAssistantToolsFunctionFunction",
 ]
 
 
```

The final hunk removes the same `ToolAssistantToolsFunctionFunction` class body as in assistant_create_params.py and switches the tool to the shared type (abridged):

```diff
@@ -110,36 +111,8 @@ class ToolAssistantToolsRetrieval(TypedDict, total=False):
 class ToolAssistantToolsFunction(TypedDict, total=False):
-    function: Required[ToolAssistantToolsFunctionFunction]
-    """The function definition."""
+    function: Required[shared_params.FunctionObject]
 
     type: Required[Literal["function"]]
     """The type of tool being defined: `function`"""
```

0 commit comments