Commit dcb1bc6

stainless-app[bot] (Stainless Bot) authored and committed
feat(api): add o1 models (#1061)
See https://platform.openai.com/docs/guides/reasoning for details.
1 parent 8958d97 commit dcb1bc6

File tree

9 files changed: +77 -42 lines changed


Diff for: .stats.yml (+1 -1)

@@ -1,2 +1,2 @@
  configured_endpoints: 68
- openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-85a85e0c08de456441431c0ae4e9c078cc8f9748c29430b9a9058340db6389ee.yml
+ openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-501122aa32adaa2abb3d4487880ab9cdf2141addce2e6c3d1bd9bb6b44c318a8.yml

Diff for: src/resources/beta/assistants.ts (+19 -17)

@@ -151,11 +151,11 @@ export interface Assistant {
   * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to

@@ -665,7 +665,8 @@ export namespace FileSearchTool {
   max_num_results?: number;

   /**
-   * The ranking options for the file search.
+   * The ranking options for the file search. If not specified, the file search tool
+   * will use the `auto` ranker and a score_threshold of 0.
    *
    * See the
    * [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search/customizing-file-search-settings)

@@ -676,24 +677,25 @@ export namespace FileSearchTool {

  export namespace FileSearch {
    /**
-    * The ranking options for the file search.
+    * The ranking options for the file search. If not specified, the file search tool
+    * will use the `auto` ranker and a score_threshold of 0.
     *
     * See the
     * [file search tool documentation](https://platform.openai.com/docs/assistants/tools/file-search/customizing-file-search-settings)
     * for more information.
     */
    export interface RankingOptions {
      /**
-      * The ranker to use for the file search. If not specified will use the `auto`
-      * ranker.
+      * The score threshold for the file search. All values must be a floating point
+      * number between 0 and 1.
       */
-      ranker?: 'auto' | 'default_2024_08_21';
+      score_threshold: number;

      /**
-      * The score threshold for the file search. All values must be a floating point
-      * number between 0 and 1.
+      * The ranker to use for the file search. If not specified will use the `auto`
+      * ranker.
       */
-      score_threshold?: number;
+      ranker?: 'auto' | 'default_2024_08_21';
    }
  }
}

@@ -1125,11 +1127,11 @@ export interface AssistantCreateParams {
   * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to

@@ -1283,11 +1285,11 @@ export interface AssistantUpdateParams {
   * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to
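Beyond the doc-string rewording, the `RankingOptions` hunk above also swaps which field is optional: `score_threshold` loses its `?` and becomes required, while `ranker` keeps its `auto` default. A minimal sketch of the new shape, with the interface re-declared locally for illustration (the real type lives in the `openai` package):

```typescript
// Local re-declaration of the reordered RankingOptions shape from the
// hunk above, for illustration only.
interface RankingOptions {
  score_threshold: number;                 // now required; a float between 0 and 1
  ranker?: 'auto' | 'default_2024_08_21';  // omitted -> the `auto` ranker
}

// Omitting `ranker` relies on the documented `auto` default.
const options: RankingOptions = { score_threshold: 0.5 };

console.log(options.score_threshold, options.ranker ?? 'auto');
```

Code written against the old shape that omitted `score_threshold` will now fail to type-check, which is the practical effect of this hunk.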

Diff for: src/resources/beta/threads/runs/runs.ts (+6 -6)

@@ -429,11 +429,11 @@ export interface Run {
   * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to

@@ -709,11 +709,11 @@ export interface RunCreateParamsBase {
   * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to

Diff for: src/resources/beta/threads/threads.ts (+6 -6)

@@ -126,11 +126,11 @@ export class Threads extends APIResource {
   * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to

@@ -522,11 +522,11 @@ export interface ThreadCreateAndRunParamsBase {
   * and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to
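The "Important" caveat in these doc strings is easy to miss: setting `response_format: { "type": "json_object" }` is not sufficient on its own; the prompt must also ask for JSON explicitly. A hedged sketch of request params honoring that rule (plain object literals, not SDK types):

```typescript
// Sketch: JSON-mode request params per the docstring above. The
// `response_format` flag enables JSON mode, but the system message must
// still instruct the model to produce JSON, or the request can fail.
const params = {
  model: 'gpt-4o',
  response_format: { type: 'json_object' as const },
  messages: [
    // The explicit "produce JSON" instruction the docstring requires.
    { role: 'system' as const, content: 'Respond only with a JSON object.' },
    { role: 'user' as const, content: 'List three primary colors.' },
  ],
};

console.log(params.response_format.type);
```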

Diff for: src/resources/chat/chat.ts (+6 -1)

@@ -9,9 +9,14 @@ export class Chat extends APIResource {
 }

 export type ChatModel =
+  | 'o1-preview'
+  | 'o1-preview-2024-09-12'
+  | 'o1-mini'
+  | 'o1-mini-2024-09-12'
   | 'gpt-4o'
-  | 'gpt-4o-2024-05-13'
   | 'gpt-4o-2024-08-06'
+  | 'gpt-4o-2024-05-13'
+  | 'chatgpt-4o-latest'
   | 'gpt-4o-mini'
   | 'gpt-4o-mini-2024-07-18'
   | 'gpt-4-turbo'
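The hunk above is the headline change of the commit: four `o1` literals join the front of the `ChatModel` union (plus `chatgpt-4o-latest`, and a reordering of the `gpt-4o` dated snapshots). A sketch with a subset of the union re-typed locally, showing that the new literals type-check wherever a `ChatModel` is expected:

```typescript
// Local subset of the updated ChatModel union, re-declared here for
// illustration; the full union lives in src/resources/chat/chat.ts.
type ChatModel =
  | 'o1-preview'
  | 'o1-preview-2024-09-12'
  | 'o1-mini'
  | 'o1-mini-2024-09-12'
  | 'chatgpt-4o-latest'
  | 'gpt-4o';

// Any of the new literals is now a valid model name at compile time.
const model: ChatModel = 'o1-mini';

console.log(model);
```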

Diff for: src/resources/chat/completions.ts (+20 -10)

@@ -788,14 +788,21 @@ export interface ChatCompletionCreateParamsBase {
   */
  logprobs?: boolean | null;

+ /**
+  * An upper bound for the number of tokens that can be generated for a completion,
+  * including visible output tokens and
+  * [reasoning tokens](https://platform.openai.com/docs/guides/reasoning).
+  */
+ max_completion_tokens?: number | null;
+
  /**
   * The maximum number of [tokens](/tokenizer) that can be generated in the chat
-  * completion.
+  * completion. This value can be used to control
+  * [costs](https://openai.com/api/pricing/) for text generated via API.
   *
-  * The total length of input tokens and generated tokens is limited by the model's
-  * context length.
-  * [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
-  * for counting tokens.
+  * This value is now deprecated in favor of `max_completion_tokens`, and is not
+  * compatible with
+  * [o1 series models](https://platform.openai.com/docs/guides/reasoning).
   */
  max_tokens?: number | null;

@@ -830,11 +837,11 @@ export interface ChatCompletionCreateParamsBase {
   * all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
   *
   * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
-  * Outputs which guarantees the model will match your supplied JSON schema. Learn
-  * more in the
+  * Outputs which ensures the model will match your supplied JSON schema. Learn more
+  * in the
   * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
   *
-  * Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+  * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the
   * message the model generates is valid JSON.
   *
   * **Important:** when using JSON mode, you **must** also instruct the model to

@@ -863,8 +870,11 @@ export interface ChatCompletionCreateParamsBase {
   * Specifies the latency tier to use for processing the request. This parameter is
   * relevant for customers subscribed to the scale tier service:
   *
-  * - If set to 'auto', the system will utilize scale tier credits until they are
-  *   exhausted.
+  * - If set to 'auto', and the Project is Scale tier enabled, the system will
+  *   utilize scale tier credits until they are exhausted.
+  * - If set to 'auto', and the Project is not Scale tier enabled, the request will
+  *   be processed using the default service tier with a lower uptime SLA and no
+  *   latency guarentee.
   * - If set to 'default', the request will be processed using the default service
   *   tier with a lower uptime SLA and no latency guarentee.
   * - When not set, the default behavior is 'auto'.
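The first hunk in this file introduces the migration that accompanies the o1 models: `max_completion_tokens` is the new cap (covering visible output plus hidden reasoning tokens), while `max_tokens` is deprecated and rejected by o1 models. A sketch of request params using the new field, with the parameter shape written out inline rather than imported from the SDK:

```typescript
// Hedged sketch of the new token cap from the hunk above. The budget set
// by `max_completion_tokens` includes reasoning tokens, so an o1 response
// may show fewer visible tokens than the cap.
const params: {
  model: string;
  max_completion_tokens?: number | null;
  max_tokens?: number | null;
} = {
  model: 'o1-preview',
  max_completion_tokens: 1024, // bounds visible output + reasoning tokens
  // `max_tokens` deliberately omitted: deprecated, and incompatible with o1.
};

console.log(params.max_completion_tokens, params.max_tokens);
```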

Diff for: src/resources/completions.ts (+17)

@@ -120,6 +120,23 @@ export interface CompletionUsage {
   * Total number of tokens used in the request (prompt + completion).
   */
  total_tokens: number;
+
+ /**
+  * Breakdown of tokens used in a completion.
+  */
+ completion_tokens_details?: CompletionUsage.CompletionTokensDetails;
+}
+
+export namespace CompletionUsage {
+  /**
+   * Breakdown of tokens used in a completion.
+   */
+  export interface CompletionTokensDetails {
+    /**
+     * Tokens generated by the model for reasoning.
+     */
+    reasoning_tokens?: number;
+  }
 }

 export type CompletionCreateParams = CompletionCreateParamsNonStreaming | CompletionCreateParamsStreaming;
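The new `completion_tokens_details` field lets callers see how much of `completion_tokens` was spent on hidden reasoning. Both the container and `reasoning_tokens` are optional, so they need guarding. A sketch with the usage shape re-declared locally and a hypothetical usage payload (the numbers are illustrative, not from the source):

```typescript
// Local sketch of the extended CompletionUsage shape from the hunk above.
interface CompletionUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  completion_tokens_details?: { reasoning_tokens?: number };
}

// Hypothetical usage payload; both new fields are optional, so guard them.
const usage: CompletionUsage = {
  prompt_tokens: 10,
  completion_tokens: 40,
  total_tokens: 50,
  completion_tokens_details: { reasoning_tokens: 25 },
};

const reasoning = usage.completion_tokens_details?.reasoning_tokens ?? 0;
const visible = usage.completion_tokens - reasoning; // tokens the caller sees
console.log(reasoning, visible);
```

Note that `completion_tokens` already includes the reasoning tokens, so the visible output is the difference, not the sum.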

Diff for: src/resources/fine-tuning/jobs/jobs.ts (+1 -1)

@@ -340,7 +340,7 @@ export interface JobCreateParams {
  seed?: number | null;

  /**
-  * A string of up to 18 characters that will be added to your fine-tuned model
+  * A string of up to 64 characters that will be added to your fine-tuned model
   * name.
   *
   * For example, a `suffix` of "custom-model-name" would produce a model name like
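The doc-string change above reflects the `suffix` limit being raised from 18 to 64 characters. A sketch of a client-side guard for the new limit; `isValidSuffix` is a hypothetical helper for illustration, not part of the SDK:

```typescript
// Sketch of a client-side check for the raised fine-tuning suffix limit.
const MAX_SUFFIX_LENGTH = 64; // was 18 before this commit

// Hypothetical helper, not an SDK function.
function isValidSuffix(suffix: string): boolean {
  return suffix.length > 0 && suffix.length <= MAX_SUFFIX_LENGTH;
}

console.log(
  isValidSuffix('custom-model-name'), // 17 chars: fine under either limit
  isValidSuffix('x'.repeat(65)),      // one past the new limit
);
```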

Diff for: tests/api-resources/chat/completions.test.ts (+1)

@@ -32,6 +32,7 @@ describe('resource completions', () => {
    functions: [{ name: 'name', description: 'description', parameters: { foo: 'bar' } }],
    logit_bias: { foo: 0 },
    logprobs: true,
+   max_completion_tokens: 0,
    max_tokens: 0,
    n: 1,
    parallel_tool_calls: true,
