
release: 1.4.0 #973


Merged · 2 commits · Dec 15, 2023
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.3.9"
".": "1.4.0"
}
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog

## 1.4.0 (2023-12-15)

Full Changelog: [v1.3.9...v1.4.0](https://github.com/openai/openai-python/compare/v1.3.9...v1.4.0)

### Features

* **api:** add optional `name` argument + improve docs ([#972](https://github.com/openai/openai-python/issues/972)) ([7972010](https://github.com/openai/openai-python/commit/7972010615820099f662c02821cfbd59e7d6ea44))

## 1.3.9 (2023-12-12)

Full Changelog: [v1.3.8...v1.3.9](https://github.com/openai/openai-python/compare/v1.3.8...v1.3.9)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "1.3.9"
version = "1.4.0"
description = "The official Python library for the openai API"
readme = "README.md"
license = "Apache-2.0"
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless.

__title__ = "openai"
__version__ = "1.3.9" # x-release-please-version
__version__ = "1.4.0" # x-release-please-version
8 changes: 6 additions & 2 deletions src/openai/resources/audio/speech.py
@@ -53,7 +53,9 @@ def create(
`tts-1` or `tts-1-hd`

voice: The voice to use when generating the audio. Supported voices are `alloy`,
`echo`, `fable`, `onyx`, `nova`, and `shimmer`.
`echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are
available in the
[Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).

response_format: The format to return the audio in. Supported formats are `mp3`, `opus`, `aac`, and `flac`.

@@ -120,7 +122,9 @@ async def create(
`tts-1` or `tts-1-hd`

voice: The voice to use when generating the audio. Supported voices are `alloy`,
`echo`, `fable`, `onyx`, `nova`, and `shimmer`.
`echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are
available in the
[Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).

response_format: The format to return the audio in. Supported formats are `mp3`, `opus`, `aac`, and `flac`.
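
For context, a minimal sketch of a speech request using the documented `voice` and `response_format` values; the `tts-1` model, the input text, and the output filename are illustrative assumptions, not part of this diff.

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate audio with one of the documented voices and formats;
# "alloy", the input text, and the output path are example choices.
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="The quick brown fox jumped over the lazy dog.",
    response_format="mp3",
)
response.stream_to_file(Path("speech.mp3"))
```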

112 changes: 62 additions & 50 deletions src/openai/resources/chat/completions.py

Large diffs are not rendered by default.

24 changes: 12 additions & 12 deletions src/openai/resources/completions.py
@@ -103,7 +103,7 @@ def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -143,7 +143,7 @@ def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -272,7 +272,7 @@ def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -312,7 +312,7 @@ def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -434,7 +434,7 @@ def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -474,7 +474,7 @@ def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -671,7 +671,7 @@ async def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -711,7 +711,7 @@ async def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -840,7 +840,7 @@ async def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -880,7 +880,7 @@ async def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
@@ -1002,7 +1002,7 @@ async def create(
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

logit_bias: Modify the likelihood of specified tokens appearing in the completion.

@@ -1042,7 +1042,7 @@ async def create(
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

seed: If specified, our system will make a best effort to sample deterministically,
such that repeated requests with the same `seed` and parameters should return
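
For context, a minimal sketch of a legacy completions call that sets the `frequency_penalty`, `presence_penalty`, and `seed` parameters documented above; the model name and prompt are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Penalties discourage verbatim repetition and encourage new topics;
# `seed` requests best-effort deterministic sampling.
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # assumed model name for illustration
    prompt="Write a one-line haiku about autumn.",
    max_tokens=32,
    frequency_penalty=0.5,
    presence_penalty=0.5,
    seed=1234,
)
print(completion.choices[0].text)
```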
6 changes: 4 additions & 2 deletions src/openai/resources/embeddings.py
@@ -51,7 +51,8 @@ def create(
input: Input text to embed, encoded as a string or array of tokens. To embed multiple
inputs in a single request, pass an array of strings or array of token arrays.
The input must not exceed the max input tokens for the model (8192 tokens for
`text-embedding-ada-002`) and cannot be an empty string.
`text-embedding-ada-002`), cannot be an empty string, and any array must be 2048
dimensions or less.
[Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.

@@ -144,7 +145,8 @@ async def create(
input: Input text to embed, encoded as a string or array of tokens. To embed multiple
inputs in a single request, pass an array of strings or array of token arrays.
The input must not exceed the max input tokens for the model (8192 tokens for
`text-embedding-ada-002`) and cannot be an empty string.
`text-embedding-ada-002`), cannot be an empty string, and any array must be 2048
dimensions or less.
[Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.
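
For context, a minimal sketch of a batched embeddings request under the constraints described above (non-empty strings, at most 2048 entries per array, within the model's token limit); the model name and inputs are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Each string must be non-empty and the list may hold at most 2048 entries.
texts = ["first example document", "second example document"]

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input=texts,
)
vectors = [item.embedding for item in response.data]
print(len(vectors), len(vectors[0]))
```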

16 changes: 8 additions & 8 deletions src/openai/resources/files.py
@@ -46,12 +46,12 @@ def create(
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject:
"""Upload a file that can be used across various endpoints/features.
"""Upload a file that can be used across various endpoints.

The size of
all the files uploaded by one organization can be up to 100 GB.
The size of all the
files uploaded by one organization can be up to 100 GB.

The size of individual files for can be a maximum of 512MB. See the
The size of individual files can be a maximum of 512 MB. See the
[Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to
learn more about the types of files supported. The Fine-tuning API only supports
`.jsonl` files.
@@ -309,12 +309,12 @@ async def create(
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject:
"""Upload a file that can be used across various endpoints/features.
"""Upload a file that can be used across various endpoints.

The size of
all the files uploaded by one organization can be up to 100 GB.
The size of all the
files uploaded by one organization can be up to 100 GB.

The size of individual files for can be a maximum of 512MB. See the
The size of individual files can be a maximum of 512 MB. See the
[Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) to
learn more about the types of files supported. The Fine-tuning API only supports
`.jsonl` files.
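
For context, a minimal sketch of the upload described in this docstring; the local filename is a hypothetical example, and `purpose="fine-tune"` matches the note that the Fine-tuning API only accepts `.jsonl` files.

```python
from openai import OpenAI

client = OpenAI()

# Individual files may be at most 512 MB; an organization may store up to 100 GB in total.
uploaded = client.files.create(
    file=open("training_data.jsonl", "rb"),  # hypothetical local file
    purpose="fine-tune",
)
print(uploaded.id, uploaded.status)
```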
2 changes: 2 additions & 0 deletions src/openai/types/audio/speech_create_params.py
@@ -22,6 +22,8 @@ class SpeechCreateParams(TypedDict, total=False):
"""The voice to use when generating the audio.

Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
Previews of the voices are available in the
[Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options).
"""

response_format: Literal["mp3", "opus", "aac", "flac"]
src/openai/types/chat/chat_completion_assistant_message_param.py
@@ -24,18 +24,28 @@ class FunctionCall(TypedDict, total=False):


class ChatCompletionAssistantMessageParam(TypedDict, total=False):
content: Required[Optional[str]]
"""The contents of the assistant message."""

role: Required[Literal["assistant"]]
"""The role of the messages author, in this case `assistant`."""

content: Optional[str]
"""The contents of the assistant message.

Required unless `tool_calls` or `function_call` is specified.
"""

function_call: FunctionCall
"""Deprecated and replaced by `tool_calls`.

The name and arguments of a function that should be called, as generated by the
model.
"""

name: str
"""An optional name for the participant.

Provides the model information to differentiate between participants of the same
role.
"""

tool_calls: List[ChatCompletionMessageToolCallParam]
"""The tool calls generated by the model, such as function calls."""
src/openai/types/chat/chat_completion_content_part_image_param.py
@@ -12,7 +12,11 @@ class ImageURL(TypedDict, total=False):
"""Either a URL of the image or the base64 encoded image data."""

detail: Literal["auto", "low", "high"]
"""Specifies the detail level of the image."""
"""Specifies the detail level of the image.

Learn more in the
[Vision guide](https://platform.openai.com/docs/guides/vision/low-or-high-fidelity-image-understanding).
"""


class ChatCompletionContentPartImageParam(TypedDict, total=False):
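
For context, a minimal sketch of an image content part that sets the `detail` level described above; the model name and image URL are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# `detail` selects low- or high-fidelity image understanding, as linked above.
completion = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed model name
    max_tokens=200,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg", "detail": "low"},
                },
            ],
        }
    ],
)
print(completion.choices[0].message.content)
```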
src/openai/types/chat/chat_completion_function_message_param.py
@@ -2,15 +2,14 @@

from __future__ import annotations

from typing import Optional
from typing_extensions import Literal, Required, TypedDict

__all__ = ["ChatCompletionFunctionMessageParam"]


class ChatCompletionFunctionMessageParam(TypedDict, total=False):
content: Required[Optional[str]]
"""The return value from the function call, to return to the model."""
content: Required[str]
"""The contents of the function message."""

name: Required[str]
"""The name of the function to call."""
src/openai/types/chat/chat_completion_named_tool_choice_param.py
@@ -13,7 +13,7 @@ class Function(TypedDict, total=False):


class ChatCompletionNamedToolChoiceParam(TypedDict, total=False):
function: Function
function: Required[Function]

type: Literal["function"]
type: Required[Literal["function"]]
"""The type of the tool. Currently, only `function` is supported."""
10 changes: 8 additions & 2 deletions src/openai/types/chat/chat_completion_system_message_param.py
@@ -2,15 +2,21 @@

from __future__ import annotations

from typing import Optional
from typing_extensions import Literal, Required, TypedDict

__all__ = ["ChatCompletionSystemMessageParam"]


class ChatCompletionSystemMessageParam(TypedDict, total=False):
content: Required[Optional[str]]
content: Required[str]
"""The contents of the system message."""

role: Required[Literal["system"]]
"""The role of the messages author, in this case `system`."""

name: str
"""An optional name for the participant.

Provides the model information to differentiate between participants of the same
role.
"""
src/openai/types/chat/chat_completion_tool_message_param.py
@@ -2,14 +2,13 @@

from __future__ import annotations

from typing import Optional
from typing_extensions import Literal, Required, TypedDict

__all__ = ["ChatCompletionToolMessageParam"]


class ChatCompletionToolMessageParam(TypedDict, total=False):
content: Required[Optional[str]]
content: Required[str]
"""The contents of the tool message."""

role: Required[Literal["tool"]]
src/openai/types/chat/chat_completion_user_message_param.py
@@ -11,8 +11,15 @@


class ChatCompletionUserMessageParam(TypedDict, total=False):
content: Required[Union[str, List[ChatCompletionContentPartParam], None]]
content: Required[Union[str, List[ChatCompletionContentPartParam]]]
"""The contents of the user message."""

role: Required[Literal["user"]]
"""The role of the messages author, in this case `user`."""

name: str
"""An optional name for the participant.

Provides the model information to differentiate between participants of the same
role.
"""