
release: 1.40.2 #1613


Merged · 5 commits · Aug 8, 2024
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.40.1"
".": "1.40.2"
}
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 68
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-4097c2f86beb3f3bb021775cd1dfa240e960caf842aeefc2e08da4dc0851ea79.yml
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-97797a9363b9960b5f2fbdc84426a2b91e75533ecd409fe99e37c231180a4339.yml
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# Changelog

## 1.40.2 (2024-08-08)

Full Changelog: [v1.40.1...v1.40.2](https://github.com/openai/openai-python/compare/v1.40.1...v1.40.2)

### Bug Fixes

* **client:** raise helpful error message for response_format misuse ([18191da](https://github.com/openai/openai-python/commit/18191dac8e1437a0f708525d474b7ecfe459d966))
* **json schema:** support recursive BaseModels in Pydantic v1 ([#1623](https://github.com/openai/openai-python/issues/1623)) ([43e10c0](https://github.com/openai/openai-python/commit/43e10c0f251a42f1e6497f360c6c23d3058b3da3))


### Chores

* **internal:** format some docstrings ([d34a081](https://github.com/openai/openai-python/commit/d34a081c30f869646145919b2256ded115241eb5))
* **internal:** updates ([#1624](https://github.com/openai/openai-python/issues/1624)) ([598e7a2](https://github.com/openai/openai-python/commit/598e7a23768e7addbe1319ada2e87caee3cf0d14))

## 1.40.1 (2024-08-07)

Full Changelog: [v1.40.0...v1.40.1](https://github.com/openai/openai-python/compare/v1.40.0...v1.40.1)
5 changes: 2 additions & 3 deletions pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "1.40.1"
version = "1.40.2"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
@@ -202,7 +202,6 @@ unfixable = [
"T201",
"T203",
]
ignore-init-module-imports = true

[tool.ruff.lint.flake8-tidy-imports.banned-api]
"functools.lru_cache".msg = "This function does not retain type information for the wrapped function's arguments; The `lru_cache` function from `_utils` should be used instead"
@@ -214,7 +213,7 @@ combine-as-imports = true
extra-standard-library = ["typing_extensions"]
known-first-party = ["openai", "tests"]

[tool.ruff.per-file-ignores]
[tool.ruff.lint.per-file-ignores]
"bin/**.py" = ["T201", "T203"]
"scripts/**.py" = ["T201", "T203"]
"tests/**.py" = ["T201", "T203"]
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "1.40.1" # x-release-please-version
__version__ = "1.40.2" # x-release-please-version
5 changes: 5 additions & 0 deletions src/openai/lib/_pydantic.py
@@ -62,6 +62,11 @@ def _ensure_strict_json_schema(
for def_name, def_schema in defs.items():
_ensure_strict_json_schema(def_schema, path=(*path, "$defs", def_name))

definitions = json_schema.get("definitions")
if is_dict(definitions):
for definition_name, definition_schema in definitions.items():
_ensure_strict_json_schema(definition_schema, path=(*path, "definitions", definition_name))

return json_schema


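The new `definitions` loop above mirrors the existing `$defs` handling: Pydantic v2 nests submodel schemas under `$defs`, while Pydantic v1 uses the top-level `definitions` key, so recursive v1 models were previously skipped. A minimal standalone sketch of the idea, using a hand-written schema and a simplified walker (not the library's actual `_ensure_strict_json_schema`):

```python
def is_dict(obj: object) -> bool:
    return isinstance(obj, dict)


def ensure_strict(json_schema: dict) -> dict:
    """Simplified walker: forbid unknown keys on every object schema."""
    if json_schema.get("type") == "object":
        json_schema.setdefault("additionalProperties", False)
    # Recurse into both the v2 ("$defs") and v1 ("definitions") containers.
    for key in ("$defs", "definitions"):
        defs = json_schema.get(key)
        if is_dict(defs):
            for _name, def_schema in defs.items():
                ensure_strict(def_schema)
    return json_schema


# A recursive, Pydantic-v1-style schema (hand-written for illustration):
# Node refers to itself via "#/definitions/Node".
schema = {
    "type": "object",
    "properties": {"child": {"$ref": "#/definitions/Node"}},
    "definitions": {
        "Node": {
            "type": "object",
            "properties": {"child": {"$ref": "#/definitions/Node"}},
        }
    },
}

strict = ensure_strict(schema)
print(strict["definitions"]["Node"]["additionalProperties"])  # False
```

Note the walker only descends into the `definitions` mapping itself; it does not follow `$ref` pointers, which is what keeps recursion finite on self-referential models.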
18 changes: 12 additions & 6 deletions src/openai/resources/beta/chat/completions.py
@@ -78,14 +78,17 @@ def parse(
from pydantic import BaseModel
from openai import OpenAI


class Step(BaseModel):
explanation: str
output: str


class MathResponse(BaseModel):
steps: List[Step]
final_answer: str


client = OpenAI()
completion = client.beta.chat.completions.parse(
model="gpt-4o-2024-08-06",
@@ -184,12 +187,12 @@ def stream(

```py
with client.beta.chat.completions.stream(
model='gpt-4o-2024-08-06',
model="gpt-4o-2024-08-06",
messages=[...],
) as stream:
for event in stream:
if event.type == 'content.delta':
print(event.content, flush=True, end='')
if event.type == "content.delta":
print(event.content, flush=True, end="")
```

When the context manager is entered, a `ChatCompletionStream` instance is returned which, like `.create(stream=True)` is an iterator. The full list of events that are yielded by the iterator are outlined in [these docs](https://github.com/openai/openai-python/blob/main/helpers.md#chat-completions-events).
@@ -287,14 +290,17 @@ async def parse(
from pydantic import BaseModel
from openai import AsyncOpenAI


class Step(BaseModel):
explanation: str
output: str


class MathResponse(BaseModel):
steps: List[Step]
final_answer: str


client = AsyncOpenAI()
completion = await client.beta.chat.completions.parse(
model="gpt-4o-2024-08-06",
@@ -393,12 +399,12 @@ def stream(

```py
async with client.beta.chat.completions.stream(
model='gpt-4o-2024-08-06',
model="gpt-4o-2024-08-06",
messages=[...],
) as stream:
async for event in stream:
if event.type == 'content.delta':
print(event.content, flush=True, end='')
if event.type == "content.delta":
print(event.content, flush=True, end="")
```

When the context manager is entered, an `AsyncChatCompletionStream` instance is returned which, like `.create(stream=True)` is an async iterator. The full list of events that are yielded by the iterator are outlined in [these docs](https://github.com/openai/openai-python/blob/main/helpers.md#chat-completions-events).
41 changes: 41 additions & 0 deletions src/openai/resources/chat/completions.py
@@ -2,10 +2,12 @@

from __future__ import annotations

import inspect
from typing import Dict, List, Union, Iterable, Optional, overload
from typing_extensions import Literal

import httpx
import pydantic

from ... import _legacy_response
from ..._types import NOT_GIVEN, Body, Query, Headers, NotGiven
@@ -147,6 +149,11 @@ def create(
[GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
Outputs which guarantees the model will match your supplied JSON schema. Learn
more in the
[Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
message the model generates is valid JSON.

@@ -345,6 +352,11 @@ def create(
[GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
Outputs which guarantees the model will match your supplied JSON schema. Learn
more in the
[Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
message the model generates is valid JSON.

@@ -536,6 +548,11 @@ def create(
[GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
Outputs which guarantees the model will match your supplied JSON schema. Learn
more in the
[Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
message the model generates is valid JSON.

@@ -647,6 +664,7 @@ def create(
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ChatCompletion | Stream[ChatCompletionChunk]:
validate_response_format(response_format)
return self._post(
"/chat/completions",
body=maybe_transform(
@@ -802,6 +820,11 @@ async def create(
[GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
Outputs which guarantees the model will match your supplied JSON schema. Learn
more in the
[Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
message the model generates is valid JSON.

@@ -1000,6 +1023,11 @@ async def create(
[GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
Outputs which guarantees the model will match your supplied JSON schema. Learn
more in the
[Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
message the model generates is valid JSON.

@@ -1191,6 +1219,11 @@ async def create(
[GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
Outputs which guarantees the model will match your supplied JSON schema. Learn
more in the
[Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
message the model generates is valid JSON.

@@ -1302,6 +1335,7 @@ async def create(
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ChatCompletion | AsyncStream[ChatCompletionChunk]:
validate_response_format(response_format)
return await self._post(
"/chat/completions",
body=await async_maybe_transform(
@@ -1375,3 +1409,10 @@ def __init__(self, completions: AsyncCompletions) -> None:
self.create = async_to_streamed_response_wrapper(
completions.create,
)


def validate_response_format(response_format: object) -> None:
if inspect.isclass(response_format) and issubclass(response_format, pydantic.BaseModel):
raise TypeError(
"You tried to pass a `BaseModel` class to `chat.completions.create()`; You must use `beta.chat.completions.parse()` instead"
)
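The guard added above runs before every `chat.completions.create()` call. A dependency-free sketch of its behavior — here `BaseModel` is a stand-in class rather than the real `pydantic.BaseModel`, so the snippet runs without pydantic installed:

```python
import inspect


class BaseModel:
    """Stand-in for pydantic.BaseModel in this sketch (no dependencies)."""


def validate_response_format(response_format: object) -> None:
    # Reject BaseModel *classes* passed as response_format: only
    # `beta.chat.completions.parse()` knows how to turn a model class
    # into a strict JSON schema before the request is sent.
    if inspect.isclass(response_format) and issubclass(response_format, BaseModel):
        raise TypeError(
            "You tried to pass a `BaseModel` class to `chat.completions.create()`; "
            "You must use `beta.chat.completions.parse()` instead"
        )


class MyModel(BaseModel):
    pass


validate_response_format({"type": "json_object"})  # plain dicts pass through

try:
    validate_response_format(MyModel)
    raised = False
except TypeError:
    raised = True
print(raised)  # True
```

The `inspect.isclass` check matters: instances and dicts fall through untouched, so only the easy-to-make mistake of passing the class itself is caught.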
5 changes: 5 additions & 0 deletions src/openai/types/chat/completion_create_params.py
@@ -126,6 +126,11 @@ class CompletionCreateParamsBase(TypedDict, total=False):
[GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) and
all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured
Outputs which guarantees the model will match your supplied JSON schema. Learn
more in the
[Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the
message the model generates is valid JSON.

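The docstring added above describes two `response_format` shapes. As plain data they look like the following sketch — the field names follow the Structured Outputs guide, but the schema itself (`math_response`, `final_answer`) is a made-up illustration:

```python
# JSON mode: the model is constrained to emit valid JSON, any shape.
json_mode = {"type": "json_object"}

# Structured Outputs: the model is constrained to match a supplied schema.
structured_outputs = {
    "type": "json_schema",
    "json_schema": {
        "name": "math_response",  # hypothetical schema name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"final_answer": {"type": "string"}},
            "required": ["final_answer"],
            "additionalProperties": False,  # strict mode forbids extra keys
        },
    },
}
print(structured_outputs["type"])  # json_schema
```

This is also why the `_pydantic.py` change in this PR adds `additionalProperties: False` throughout: strict schemas require every object to close off unknown keys.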
2 changes: 1 addition & 1 deletion src/openai/types/chat_model.py
@@ -6,8 +6,8 @@

ChatModel: TypeAlias = Literal[
"gpt-4o",
"gpt-4o-2024-08-06",
"gpt-4o-2024-05-13",
"gpt-4o-2024-08-06",
"gpt-4o-mini",
"gpt-4o-mini-2024-07-18",
"gpt-4-turbo",
35 changes: 35 additions & 0 deletions tests/api_resources/chat/test_completions.py
@@ -6,6 +6,7 @@
from typing import Any, cast

import pytest
import pydantic

from openai import OpenAI, AsyncOpenAI
from tests.utils import assert_matches_type
@@ -257,6 +258,23 @@ def test_streaming_response_create_overload_2(self, client: OpenAI) -> None:

assert cast(Any, response.is_closed) is True

@parametrize
def test_method_create_disallows_pydantic(self, client: OpenAI) -> None:
class MyModel(pydantic.BaseModel):
a: str

with pytest.raises(TypeError, match=r"You tried to pass a `BaseModel` class"):
client.chat.completions.create(
messages=[
{
"content": "string",
"role": "system",
}
],
model="gpt-4o",
response_format=cast(Any, MyModel),
)


class TestAsyncCompletions:
parametrize = pytest.mark.parametrize("async_client", [False, True], indirect=True, ids=["loose", "strict"])
@@ -498,3 +516,20 @@ async def test_streaming_response_create_overload_2(self, async_client: AsyncOpe
await stream.close()

assert cast(Any, response.is_closed) is True

@parametrize
async def test_method_create_disallows_pydantic(self, async_client: AsyncOpenAI) -> None:
class MyModel(pydantic.BaseModel):
a: str

with pytest.raises(TypeError, match=r"You tried to pass a `BaseModel` class"):
await async_client.chat.completions.create(
messages=[
{
"content": "string",
"role": "system",
}
],
model="gpt-4o",
response_format=cast(Any, MyModel),
)
2 changes: 2 additions & 0 deletions tests/lib/test_pydantic.py
@@ -130,6 +130,7 @@ def test_most_types() -> None:
"type": "object",
"properties": {"column_name": {"title": "Column Name", "type": "string"}},
"required": ["column_name"],
"additionalProperties": False,
},
"Condition": {
"title": "Condition",
@@ -147,6 +148,7 @@
},
},
"required": ["column", "operator", "value"],
"additionalProperties": False,
},
"OrderBy": {
"title": "OrderBy",