release: 1.2.0 #738


Merged (7 commits) on Nov 9, 2023
Changes from all commits
28 changes: 5 additions & 23 deletions .devcontainer/Dockerfile
@@ -1,27 +1,9 @@
 # syntax=docker/dockerfile:1
-FROM debian:bookworm-slim
+ARG VARIANT="3.9"
+FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT}
 
-RUN apt-get update && apt-get install -y \
-      libxkbcommon0 \
-      ca-certificates \
-      make \
-      curl \
-      git \
-      unzip \
-      libc++1 \
-      vim \
-      termcap \
-    && apt-get clean autoclean
+USER vscode
 
 RUN curl -sSf https://rye-up.com/get | RYE_VERSION="0.15.2" RYE_INSTALL_OPTION="--yes" bash
-ENV PATH=/root/.rye/shims:$PATH
-
-WORKDIR /workspace
-
-COPY README.md .python-version pyproject.toml requirements.lock requirements-dev.lock /workspace/
-
-RUN rye sync --all-features
-
-COPY . /workspace
-
-CMD ["rye", "shell"]
+ENV PATH=/home/vscode/.rye/shims:$PATH
+
+RUN echo "[[ -d .venv ]] && source .venv/bin/activate" >> /home/vscode/.bashrc
21 changes: 20 additions & 1 deletion .devcontainer/devcontainer.json
@@ -3,7 +3,26 @@
 {
   "name": "Debian",
   "build": {
-    "dockerfile": "Dockerfile"
+    "dockerfile": "Dockerfile",
+    "context": ".."
   },
 
+  "postStartCommand": "rye sync --all-features",
+
+  "customizations": {
+    "vscode": {
+      "extensions": [
+        "ms-python.python"
+      ],
+      "settings": {
+        "terminal.integrated.shell.linux": "/bin/bash",
+        "python.pythonPath": ".venv/bin/python",
+        "python.typeChecking": "basic",
+        "terminal.integrated.env.linux": {
+          "PATH": "/home/vscode/.rye/shims:${env:PATH}"
+        }
+      }
+    }
+  },
 
 // Features to add to the dev container. More info: https://containers.dev/features.
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "1.1.2"
+  ".": "1.2.0"
 }
25 changes: 25 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,30 @@
 # Changelog
 
+## 1.2.0 (2023-11-08)
+
+Full Changelog: [v1.1.2...v1.2.0](https://github.com/openai/openai-python/compare/v1.1.2...v1.2.0)
+
+### Features
+
+* **api:** unify function types ([#741](https://github.com/openai/openai-python/issues/741)) ([ed16c4d](https://github.com/openai/openai-python/commit/ed16c4d2fec6cf4e33235d82b05ed9a777752204))
+* **client:** support passing chunk size for binary responses ([#747](https://github.com/openai/openai-python/issues/747)) ([c0c89b7](https://github.com/openai/openai-python/commit/c0c89b77a69ef098900e3a194894efcf72085d36))
+
+
+### Bug Fixes
+
+* **api:** update embedding response object type ([#739](https://github.com/openai/openai-python/issues/739)) ([29182c4](https://github.com/openai/openai-python/commit/29182c4818e2c56f46e961dba33e31dc30c25519))
+* **client:** show a helpful error message if the v0 API is used ([#743](https://github.com/openai/openai-python/issues/743)) ([920567c](https://github.com/openai/openai-python/commit/920567cb04df48a7f6cd2a3402a0b1f172c6290e))
+
+
+### Chores
+
+* **internal:** improve github devcontainer setup ([#737](https://github.com/openai/openai-python/issues/737)) ([0ac1abb](https://github.com/openai/openai-python/commit/0ac1abb07ec687a4f7b1150be10054dbd6e7cfbc))
+
+
+### Refactors
+
+* **api:** rename FunctionObject to FunctionDefinition ([#746](https://github.com/openai/openai-python/issues/746)) ([1afd138](https://github.com/openai/openai-python/commit/1afd13856c0e586ecbde8b24fe4f4bad9beeefdf))
+
 ## 1.1.2 (2023-11-08)
 
 Full Changelog: [v1.1.1...v1.1.2](https://github.com/openai/openai-python/compare/v1.1.1...v1.1.2)
6 changes: 6 additions & 0 deletions api.md
@@ -1,3 +1,9 @@
+# Shared Types
+
+```python
+from openai.types import FunctionDefinition, FunctionParameters
+```
+
 # Completions
 
 Types:
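These shared types are plain typed dicts. As a hedged sketch of the shape a `FunctionDefinition` describes (field names assumed from this release's rename in #746, not spelled out in the diff): a function name, an optional description, and JSON-Schema `parameters`, which is what `FunctionParameters` aliases.

```python
# Hypothetical illustration of the assumed FunctionDefinition shape.
get_weather: dict = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {  # the nested JSON Schema corresponds to FunctionParameters
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

print(get_weather["name"])  # prints get_weather
```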
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.1.2"
+version = "1.2.0"
 description = "Client library for the openai API"
 readme = "README.md"
 license = "Apache-2.0"
1 change: 1 addition & 0 deletions src/openai/__init__.py
@@ -74,6 +74,7 @@
 from .version import VERSION as VERSION
 from .lib.azure import AzureOpenAI as AzureOpenAI
 from .lib.azure import AsyncAzureOpenAI as AsyncAzureOpenAI
+from .lib._old_api import *
 
 _setup_logging()
18 changes: 14 additions & 4 deletions src/openai/_base_client.py
@@ -1727,9 +1727,14 @@ def iter_raw(self, chunk_size: Optional[int] = None) -> Iterator[bytes]:
         return self.response.iter_raw(chunk_size)
 
     @override
-    def stream_to_file(self, file: str | os.PathLike[str]) -> None:
+    def stream_to_file(
+        self,
+        file: str | os.PathLike[str],
+        *,
+        chunk_size: int | None = None,
+    ) -> None:
         with open(file, mode="wb") as f:
-            for data in self.response.iter_bytes():
+            for data in self.response.iter_bytes(chunk_size):
                 f.write(data)
 
     @override
@@ -1757,10 +1762,15 @@ async def aiter_raw(self, chunk_size: Optional[int] = None) -> AsyncIterator[bytes]:
         return self.response.aiter_raw(chunk_size)
 
     @override
-    async def astream_to_file(self, file: str | os.PathLike[str]) -> None:
+    async def astream_to_file(
+        self,
+        file: str | os.PathLike[str],
+        *,
+        chunk_size: int | None = None,
+    ) -> None:
         path = anyio.Path(file)
         async with await path.open(mode="wb") as f:
-            async for data in self.response.aiter_bytes():
+            async for data in self.response.aiter_bytes(chunk_size):
                 await f.write(data)
 
     @override
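The change above simply threads `chunk_size` through to the underlying byte iterator so callers can trade memory for write frequency. A minimal self-contained sketch of the pattern, with a stand-in generator in place of the real `httpx` response (the helper names here are illustrative, not part of the SDK):

```python
import os
import tempfile
from typing import Iterator, Optional

def iter_bytes(payload: bytes, chunk_size: Optional[int] = None) -> Iterator[bytes]:
    # Mimics httpx's Response.iter_bytes(): yield the body in fixed-size
    # chunks, or all at once when chunk_size is None.
    size = chunk_size if chunk_size else (len(payload) or 1)
    for i in range(0, len(payload), size):
        yield payload[i : i + size]

def stream_to_file(payload: bytes, file: str, *, chunk_size: Optional[int] = None) -> None:
    # Same shape as the new stream_to_file(): open for writing and append
    # each chunk as it arrives instead of buffering the whole body.
    with open(file, mode="wb") as f:
        for data in iter_bytes(payload, chunk_size):
            f.write(data)

path = os.path.join(tempfile.mkdtemp(), "speech.mp3")
stream_to_file(b"x" * 10_000, path, chunk_size=4096)
print(os.path.getsize(path))  # prints 10000
```

Note the keyword-only `*` in the new signatures: `chunk_size` must be passed by name, which keeps the positional API unchanged for existing callers.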
15 changes: 13 additions & 2 deletions src/openai/_types.py
@@ -123,7 +123,12 @@ def iter_raw(self, chunk_size: Optional[int] = None) -> Iterator[bytes]:
         pass
 
     @abstractmethod
-    def stream_to_file(self, file: str | PathLike[str]) -> None:
+    def stream_to_file(
+        self,
+        file: str | PathLike[str],
+        *,
+        chunk_size: int | None = None,
+    ) -> None:
         """
         Stream the output to the given file.
         """
@@ -172,7 +177,13 @@ async def aiter_raw(self, chunk_size: Optional[int] = None) -> AsyncIterator[bytes]:
         """
         pass
 
-    async def astream_to_file(self, file: str | PathLike[str]) -> None:
+    @abstractmethod
+    async def astream_to_file(
+        self,
+        file: str | PathLike[str],
+        *,
+        chunk_size: int | None = None,
+    ) -> None:
         """
         Stream the output to the given file.
         """
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless.
 
 __title__ = "openai"
-__version__ = "1.1.2"  # x-release-please-version
+__version__ = "1.2.0"  # x-release-please-version
66 changes: 66 additions & 0 deletions src/openai/lib/_old_api.py
@@ -0,0 +1,66 @@
+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+from typing_extensions import override
+
+from .._utils import LazyProxy
+from .._exceptions import OpenAIError
+
+INSTRUCTIONS = """
+
+You tried to access openai.{symbol}, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
+
+You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
+
+Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
+
+A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
+"""
+
+
+class APIRemovedInV1(OpenAIError):
+    def __init__(self, *, symbol: str) -> None:
+        super().__init__(INSTRUCTIONS.format(symbol=symbol))
+
+
+class APIRemovedInV1Proxy(LazyProxy[None]):
+    def __init__(self, *, symbol: str) -> None:
+        super().__init__()
+        self._symbol = symbol
+
+    @override
+    def __load__(self) -> None:
+        raise APIRemovedInV1(symbol=self._symbol)
+
+
+SYMBOLS = [
+    "Edit",
+    "File",
+    "Audio",
+    "Image",
+    "Model",
+    "Engine",
+    "Customer",
+    "FineTune",
+    "Embedding",
+    "Completion",
+    "Deployment",
+    "Moderation",
+    "ErrorObject",
+    "FineTuningJob",
+    "ChatCompletion",
+]
+
+# we explicitly tell type checkers that nothing is exported
+# from this file so that when we re-export the old symbols
+# in `openai/__init__.py` they aren't added to the auto-complete
+# suggestions given by editors
+if TYPE_CHECKING:
+    __all__: list[str] = []
+else:
+    __all__ = SYMBOLS
+
+
+__locals = locals()
+for symbol in SYMBOLS:
+    __locals[symbol] = APIRemovedInV1Proxy(symbol=symbol)
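The proxy mechanism above defers the error until a removed symbol is actually used, so `import openai` itself keeps working. A simplified, self-contained sketch of the same idea (the real SDK builds on its internal `LazyProxy` helper; this standalone version is an assumption-laden approximation, not the shipped code):

```python
class APIRemovedInV1(Exception):
    def __init__(self, *, symbol: str) -> None:
        super().__init__(
            f"You tried to access openai.{symbol}, but this is no longer "
            "supported in openai>=1.0.0."
        )

class APIRemovedInV1Proxy:
    """Placeholder for a removed module attribute: merely importing it is
    harmless, but any attribute access or call raises the migration error."""

    def __init__(self, *, symbol: str) -> None:
        self._symbol = symbol

    def __getattr__(self, attr: str) -> None:
        # only reached for attributes that don't exist on the proxy itself
        raise APIRemovedInV1(symbol=self._symbol)

    def __call__(self, *args: object, **kwargs: object) -> None:
        raise APIRemovedInV1(symbol=self._symbol)

Completion = APIRemovedInV1Proxy(symbol="Completion")

try:
    Completion.create(model="davinci", prompt="hi")  # v0-style call
except APIRemovedInV1 as err:
    print(err)
```

Exposing the proxies via `__locals[symbol] = ...` while hiding them from type checkers means editors won't autocomplete the dead symbols, yet runtime access still produces the helpful message.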
84 changes: 72 additions & 12 deletions src/openai/resources/chat/completions.py
@@ -137,8 +137,18 @@ def create(
 
           [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          response_format: An object specifying the format that the model must output. Used to enable JSON
-              mode.
+          response_format: An object specifying the format that the model must output.
+
+              Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+              message the model generates is valid JSON.
+
+              **Important:** when using JSON mode, you **must** also instruct the model to
+              produce JSON yourself via a system or user message. Without this, the model may
+              generate an unending stream of whitespace until the generation reaches the token
+              limit, resulting in increased latency and appearance of a "stuck" request. Also
+              note that the message content may be partially cut off if
+              `finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+              or the conversation exceeded the max context length.
 
           seed: This feature is in Beta. If specified, our system will make a best effort to
               sample deterministically, such that repeated requests with the same `seed` and
@@ -304,8 +314,18 @@ def create(
 
           [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          response_format: An object specifying the format that the model must output. Used to enable JSON
-              mode.
+          response_format: An object specifying the format that the model must output.
+
+              Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+              message the model generates is valid JSON.
+
+              **Important:** when using JSON mode, you **must** also instruct the model to
+              produce JSON yourself via a system or user message. Without this, the model may
+              generate an unending stream of whitespace until the generation reaches the token
+              limit, resulting in increased latency and appearance of a "stuck" request. Also
+              note that the message content may be partially cut off if
+              `finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+              or the conversation exceeded the max context length.
 
           seed: This feature is in Beta. If specified, our system will make a best effort to
               sample deterministically, such that repeated requests with the same `seed` and
@@ -464,8 +484,18 @@ def create(
 
           [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          response_format: An object specifying the format that the model must output. Used to enable JSON
-              mode.
+          response_format: An object specifying the format that the model must output.
+
+              Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+              message the model generates is valid JSON.
+
+              **Important:** when using JSON mode, you **must** also instruct the model to
+              produce JSON yourself via a system or user message. Without this, the model may
+              generate an unending stream of whitespace until the generation reaches the token
+              limit, resulting in increased latency and appearance of a "stuck" request. Also
+              note that the message content may be partially cut off if
+              `finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+              or the conversation exceeded the max context length.
 
           seed: This feature is in Beta. If specified, our system will make a best effort to
               sample deterministically, such that repeated requests with the same `seed` and
@@ -704,8 +734,18 @@ async def create(
 
           [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          response_format: An object specifying the format that the model must output. Used to enable JSON
-              mode.
+          response_format: An object specifying the format that the model must output.
+
+              Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+              message the model generates is valid JSON.
+
+              **Important:** when using JSON mode, you **must** also instruct the model to
+              produce JSON yourself via a system or user message. Without this, the model may
+              generate an unending stream of whitespace until the generation reaches the token
+              limit, resulting in increased latency and appearance of a "stuck" request. Also
+              note that the message content may be partially cut off if
+              `finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+              or the conversation exceeded the max context length.
 
           seed: This feature is in Beta. If specified, our system will make a best effort to
               sample deterministically, such that repeated requests with the same `seed` and
@@ -871,8 +911,18 @@ async def create(
 
           [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          response_format: An object specifying the format that the model must output. Used to enable JSON
-              mode.
+          response_format: An object specifying the format that the model must output.
+
+              Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+              message the model generates is valid JSON.
+
+              **Important:** when using JSON mode, you **must** also instruct the model to
+              produce JSON yourself via a system or user message. Without this, the model may
+              generate an unending stream of whitespace until the generation reaches the token
+              limit, resulting in increased latency and appearance of a "stuck" request. Also
+              note that the message content may be partially cut off if
+              `finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+              or the conversation exceeded the max context length.
 
           seed: This feature is in Beta. If specified, our system will make a best effort to
               sample deterministically, such that repeated requests with the same `seed` and
@@ -1031,8 +1081,18 @@ async def create(
 
           [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          response_format: An object specifying the format that the model must output. Used to enable JSON
-              mode.
+          response_format: An object specifying the format that the model must output.
+
+              Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
+              message the model generates is valid JSON.
+
+              **Important:** when using JSON mode, you **must** also instruct the model to
+              produce JSON yourself via a system or user message. Without this, the model may
+              generate an unending stream of whitespace until the generation reaches the token
+              limit, resulting in increased latency and appearance of a "stuck" request. Also
+              note that the message content may be partially cut off if
+              `finish_reason="length"`, which indicates the generation exceeded `max_tokens`
+              or the conversation exceeded the max context length.
 
           seed: This feature is in Beta. If specified, our system will make a best effort to
               sample deterministically, such that repeated requests with the same `seed` and
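The docstring's truncation caveat is easy to mishandle on the client side. A hedged sketch of the handling it implies (the helper name is hypothetical, not part of the SDK): treat `finish_reason == "length"` as a truncated, possibly invalid document, and only then parse, since JSON mode guarantees validity for complete outputs.

```python
import json
from typing import Any

def parse_json_mode_content(content: str, finish_reason: str) -> Any:
    # Per the new docs: JSON mode guarantees valid JSON *unless* the output
    # was cut off at max_tokens, which is signalled by finish_reason == "length".
    if finish_reason == "length":
        raise ValueError("JSON-mode output truncated; raise max_tokens or shorten the prompt")
    return json.loads(content)

print(parse_json_mode_content('{"city": "Oslo"}', "stop"))  # prints {'city': 'Oslo'}
```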
2 changes: 2 additions & 0 deletions src/openai/types/__init__.py
@@ -5,6 +5,8 @@
 from .edit import Edit as Edit
 from .image import Image as Image
 from .model import Model as Model
+from .shared import FunctionDefinition as FunctionDefinition
+from .shared import FunctionParameters as FunctionParameters
 from .embedding import Embedding as Embedding
 from .fine_tune import FineTune as FineTune
 from .completion import Completion as Completion