
Commit 329a30a

feat(api): add o1 models (#1061)
See https://platform.openai.com/docs/guides/reasoning for details.
1 parent 70337c5 commit 329a30a

9 files changed (+76, -42 lines)

Diff for: .stats.yml (+1 -1)

@@ -1,2 +1,2 @@
 configured_endpoints: 68
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-85a85e0c08de456441431c0ae4e9c078cc8f9748c29430b9a9058340db6389ee.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-501122aa32adaa2abb3d4487880ab9cdf2141addce2e6c3d1bd9bb6b44c318a8.yml

Diff for: src/resources/beta/assistants.ts (+19 -17)

@@ -151,11 +151,11 @@ export interface Assistant {
    * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to
@@ -665,7 +665,8 @@ export namespace FileSearchTool {
     max_num_results?: number;

     /**
-     * The ranking options for the file search.
+     * The ranking options for the file search. If not specified, the file search tool
+     * will use the `auto` ranker and a score_threshold of 0.
      *
      * See the
      * [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search/customizing-file-search-settings)
@@ -676,24 +677,25 @@ export namespace FileSearchTool {

   export namespace FileSearch {
     /**
-     * The ranking options for the file search.
+     * The ranking options for the file search. If not specified, the file search tool
+     * will use the `auto` ranker and a score_threshold of 0.
      *
      * See the
      * [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search/customizing-file-search-settings)
      * for more information.
      */
     export interface RankingOptions {
       /**
-       * The ranker to use for the file search. If not specified will use the `auto`
-       * ranker.
+       * The score threshold for the file search. All values must be a floating point
+       * number between 0 and 1.
        */
-      ranker?: 'auto' | 'default_2024_08_21';
+      score_threshold: number;

       /**
-       * The score threshold for the file search. All values must be a floating point
-       * number between 0 and 1.
+       * The ranker to use for the file search. If not specified will use the `auto`
+       * ranker.
        */
-      score_threshold?: number;
+      ranker?: 'auto' | 'default_2024_08_21';
     }
   }
 }
@@ -1125,11 +1127,11 @@ export interface AssistantCreateParams {
    * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to
@@ -1283,11 +1285,11 @@ export interface AssistantUpdateParams {
    * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to
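Besides reordering the two `RankingOptions` members, this diff makes `score_threshold` required (it loses its `?`). A minimal sketch of the resulting shape, with the doc wording taken from the diff:

```typescript
// RankingOptions as it reads after this commit: score_threshold is now
// required, while ranker stays optional (the `auto` ranker is used when
// it is omitted).
interface RankingOptions {
  /** The score threshold for the file search; a float between 0 and 1. */
  score_threshold: number;
  /** The ranker to use for the file search. */
  ranker?: 'auto' | 'default_2024_08_21';
}

// score_threshold alone now satisfies the type; omitting it would be a
// compile error after this change.
const options: RankingOptions = { score_threshold: 0.5 };
console.log(options.ranker ?? 'auto');
```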

Diff for: src/resources/beta/threads/runs/runs.ts (+6 -6)

@@ -303,11 +303,11 @@ export interface Run {
    * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to
@@ -583,11 +583,11 @@ export interface RunCreateParamsBase {
    * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to
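These doc comments describe the same two `response_format` settings repeated across the SDK. A sketch of the two shapes, where the `json_schema` payload fields (`name`, `schema`) are taken from the platform docs rather than this diff:

```typescript
// The two response_format variants the doc comments above describe.
type ResponseFormat =
  | { type: 'json_object' }
  | {
      type: 'json_schema';
      json_schema: { name: string; schema: Record<string, unknown> };
    };

// JSON mode: the message is valid JSON, but no particular shape. Per the
// "Important" note in the docs, you must also instruct the model (via a
// message) to produce JSON.
const jsonMode: ResponseFormat = { type: 'json_object' };

// Structured Outputs: the message additionally matches the supplied schema.
const structured: ResponseFormat = {
  type: 'json_schema',
  json_schema: {
    name: 'math_answer',
    schema: { type: 'object', properties: { answer: { type: 'number' } } },
  },
};
console.log(jsonMode.type, structured.type);
```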

Diff for: src/resources/beta/threads/threads.ts (+6 -6)

@@ -102,11 +102,11 @@ export class Threads extends APIResource {
    * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to
@@ -498,11 +498,11 @@ export interface ThreadCreateAndRunParamsBase {
    * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to

Diff for: src/resources/chat/chat.ts (+5 -1)

@@ -9,9 +9,13 @@ export class Chat extends APIResource {
 }

 export type ChatModel =
+  | 'o1-preview'
+  | 'o1-preview-2024-09-12'
+  | 'o1-mini'
+  | 'o1-mini-2024-09-12'
   | 'gpt-4o'
-  | 'gpt-4o-2024-05-13'
   | 'gpt-4o-2024-08-06'
+  | 'gpt-4o-2024-05-13'
   | 'chatgpt-4o-latest'
   | 'gpt-4o-mini'
   | 'gpt-4o-mini-2024-07-18'
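The commit prepends the four o1 identifiers to the `ChatModel` union and moves `gpt-4o-2024-05-13` after `gpt-4o-2024-08-06`. A minimal sketch of how the widened union is consumed, abridged to the members visible in this hunk (the full union has more):

```typescript
// Abridged copy of the ChatModel union after this commit.
type ChatModel =
  | 'o1-preview'
  | 'o1-preview-2024-09-12'
  | 'o1-mini'
  | 'o1-mini-2024-09-12'
  | 'gpt-4o'
  | 'gpt-4o-2024-08-06'
  | 'gpt-4o-2024-05-13'
  | 'chatgpt-4o-latest'
  | 'gpt-4o-mini'
  | 'gpt-4o-mini-2024-07-18';

// The union is a compile-time check: a typo like 'o1-minni' fails to
// type-check, while any listed alias or dated snapshot is accepted.
const model: ChatModel = 'o1-mini';
console.log(model);
```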

Diff for: src/resources/chat/completions.ts (+20 -10)

@@ -788,14 +788,21 @@ export interface ChatCompletionCreateParamsBase {
    */
   logprobs?: boolean | null;

+  /**
+   * An upper bound for the number of tokens that can be generated for a completion,
+   * including visible output tokens and
+   * [reasoning tokens](https://platform.openai.com/docs/guides/reasoning).
+   */
+  max_completion_tokens?: number | null;
+
   /**
    * The maximum number of [tokens](/tokenizer) that can be generated in the chat
-   * completion.
+   * completion. This value can be used to control
+   * [costs](https://openai.com/api/pricing/) for text generated via API.
    *
-   * The total length of input tokens and generated tokens is limited by the model's
-   * context length.
-   * [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
-   * for counting tokens.
+   * This value is now deprecated in favor of `max_completion_tokens`, and is not
+   * compatible with
+   * [o1 series models](https://platform.openai.com/docs/guides/reasoning).
    */
   max_tokens?: number | null;

@@ -830,11 +837,11 @@ export interface ChatCompletionCreateParamsBase {
    * all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
    *
    * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-   * Outputs which guarantees the model will match your supplied JSON schema. Learn
-   * more in the
+   * Outputs which ensures the model will match your supplied JSON schema. Learn more
+   * in the
    * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
    *
-   * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+   * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
    * message the model generates is valid JSON.
    *
    * **Important:** when using JSON mode, you **must** also instruct the model to
@@ -863,8 +870,11 @@ export interface ChatCompletionCreateParamsBase {
    * Specifies the latency tier to use for processing the request. This parameter is
    * relevant for customers subscribed to the scale tier service:
    *
-   * - If set to 'auto', the system will utilize scale tier credits until they are
-   *   exhausted.
+   * - If set to 'auto', and the Project is Scale tier enabled, the system will
+   *   utilize scale tier credits until they are exhausted.
+   * - If set to 'auto', and the Project is not Scale tier enabled, the request will
+   *   be processed using the default service tier with a lower uptime SLA and no
+   *   latency guarentee.
    * - If set to 'default', the request will be processed using the default service
    *   tier with a lower uptime SLA and no latency guarentee.
    * - When not set, the default behavior is 'auto'.
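Since `max_tokens` is now deprecated and not compatible with o1-series models, callers capping output length should switch to `max_completion_tokens`. A sketch of request params under that rule, using a pared-down local stand-in for the SDK's `ChatCompletionCreateParamsBase` (only the fields relevant to this hunk):

```typescript
// Pared-down stand-in for ChatCompletionCreateParamsBase.
interface ChatCompletionCreateParams {
  model: string;
  messages: Array<{ role: 'system' | 'user' | 'assistant'; content: string }>;
  /** Upper bound on generated tokens, including reasoning tokens. */
  max_completion_tokens?: number | null;
  /** Deprecated; not compatible with o1 series models. */
  max_tokens?: number | null;
}

// For an o1 model, budget output with max_completion_tokens. Note the cap
// covers hidden reasoning tokens too, so a very tight budget can leave no
// room for visible output.
const params: ChatCompletionCreateParams = {
  model: 'o1-preview',
  messages: [{ role: 'user', content: 'Summarize the proof.' }],
  max_completion_tokens: 2048,
};
console.log(params.max_completion_tokens);
```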

Diff for: src/resources/completions.ts (+17)

@@ -120,6 +120,23 @@ export interface CompletionUsage {
    * Total number of tokens used in the request (prompt + completion).
    */
   total_tokens: number;
+
+  /**
+   * Breakdown of tokens used in a completion.
+   */
+  completion_tokens_details?: CompletionUsage.CompletionTokensDetails;
+}
+
+export namespace CompletionUsage {
+  /**
+   * Breakdown of tokens used in a completion.
+   */
+  export interface CompletionTokensDetails {
+    /**
+     * Tokens generated by the model for reasoning.
+     */
+    reasoning_tokens?: number;
+  }
 }

 export type CompletionCreateParams = CompletionCreateParamsNonStreaming | CompletionCreateParamsStreaming;
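The new `completion_tokens_details.reasoning_tokens` field lets callers separate visible output from hidden reasoning. A sketch with the interfaces copied from this diff and a hypothetical usage payload (the token counts are illustrative, not real API output):

```typescript
interface CompletionTokensDetails {
  /** Tokens generated by the model for reasoning. */
  reasoning_tokens?: number;
}

interface CompletionUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  /** Breakdown of tokens used in a completion. */
  completion_tokens_details?: CompletionTokensDetails;
}

// Hypothetical usage block from an o1 response. Reasoning tokens count as
// completion tokens, so visible output = completion_tokens - reasoning_tokens.
const usage: CompletionUsage = {
  prompt_tokens: 20,
  completion_tokens: 180,
  total_tokens: 200,
  completion_tokens_details: { reasoning_tokens: 120 },
};

const visibleTokens =
  usage.completion_tokens - (usage.completion_tokens_details?.reasoning_tokens ?? 0);
console.log(visibleTokens); // → 60
```

The optional chaining and `?? 0` fallback matter: both the details object and `reasoning_tokens` are optional, so non-reasoning models simply yield `visibleTokens === completion_tokens`.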

Diff for: src/resources/fine-tuning/jobs/jobs.ts (+1 -1)

@@ -340,7 +340,7 @@ export interface JobCreateParams {
   seed?: number | null;

   /**
-   * A string of up to 18 characters that will be added to your fine-tuned model
+   * A string of up to 64 characters that will be added to your fine-tuned model
    * name.
    *
    * For example, a `suffix` of "custom-model-name" would produce a model name like
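The documented fine-tuning `suffix` limit grows from 18 to 64 characters. A small client-side validation sketch (the 64-character cap comes from this diff; the helper itself is ours, not part of the SDK):

```typescript
// Validate a fine-tuned model name suffix against the documented 64-char cap.
function isValidSuffix(suffix: string): boolean {
  return suffix.length > 0 && suffix.length <= 64;
}

console.log(isValidSuffix('custom-model-name')); // 17 chars → true
console.log(isValidSuffix('a'.repeat(64)));      // exactly at the cap → true
console.log(isValidSuffix('x'.repeat(65)));      // one over → false
```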

Diff for: tests/api-resources/chat/completions.test.ts (+1)

@@ -32,6 +32,7 @@ describe('resource completions', () => {
       functions: [{ name: 'name', description: 'description', parameters: { foo: 'bar' } }],
       logit_bias: { foo: 0 },
       logprobs: true,
+      max_completion_tokens: 0,
       max_tokens: 0,
       n: 1,
       parallel_tool_calls: true,
