  * events</a> as they become available, with the stream terminated by a data: [DONE] message.
  */
- @JsonProperty("logit_bias")
- Map<String, Integer> logitBias;
+ Boolean stream;

+ /**
+  * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower
+  * values like 0.2 will make it more focused and deterministic.<br>
+  * We generally recommend altering this or top_p but not both.
+  */
+ Double temperature;

  /**
-  * A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.
+  * An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens
+  * with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br>
+  * We generally recommend altering this or temperature but not both.
  */
- String user;
+ @JsonProperty("top_p")
+ Double topP;

  /**
-  * A list of the available functions.
+  * A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.
  */
- List<?> functions;
+ String user;

  /**
   * Controls how the model responds to function calls, as specified in the <a href="https://platform.openai.com/docs/api-reference/chat/create#chat/create-function_call">OpenAI documentation</a>.
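For readers skimming the diff, the sketch below is a minimal, self-contained stand-in for the fields touched here; it is not the project's actual class. Public fields replace the Lombok-generated accessors so plain Jackson can serialize the object, and the example shows why topP needs @JsonProperty("top_p") while stream, temperature, and user map to the wire format unchanged. The class name and the values assigned in main are illustrative assumptions.

// Illustrative sketch only, mirroring the diffed fields; not the library's real request class.
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ChatCompletionRequestSketch {

    @JsonInclude(JsonInclude.Include.NON_NULL)
    public static class Request {
        // Stream partial results back as server-sent events, terminated by "data: [DONE]".
        public Boolean stream;

        // Sampling temperature in [0, 2]; tune this or topP, but not both.
        public Double temperature;

        // Nucleus-sampling cutoff; the camelCase field must be mapped to the snake_case wire key.
        @JsonProperty("top_p")
        public Double topP;

        // Stable end-user identifier that helps OpenAI monitor and detect abuse.
        public String user;
    }

    public static void main(String[] args) throws Exception {
        Request request = new Request();
        request.stream = true;
        request.temperature = 0.2;  // focused, mostly deterministic sampling
        request.user = "user-1234"; // hypothetical identifier, for illustration only

        // Prints something like: {"stream":true,"temperature":0.2,"user":"user-1234"}
        // (topP is omitted because it is null and the class is marked NON_NULL).
        System.out.println(new ObjectMapper().writeValueAsString(request));
    }
}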