Commit 0bf971b

stainless-bot authored and megamanics committed
feat(client): add support for streaming raw responses (openai#1072)
As an alternative to `with_raw_response` we now provide `with_streaming_response` as well. When using these methods you will have to use a context manager to ensure that the response is always cleaned up.
1 parent 2074a8b commit 0bf971b


61 files changed: +4273 −563 lines

Diff for: README.md (+35 −2)

````diff
@@ -414,7 +414,7 @@ if response.my_field is None:
 
 ### Accessing raw response data (e.g. headers)
 
-The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call.
+The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
 
 ```py
 from openai import OpenAI
@@ -433,7 +433,40 @@ completion = response.parse()  # get the object that `chat.completions.create()`
 print(completion)
 ```
 
-These methods return an [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.
+These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class, as we're changing it slightly in the next major version.
+
+For the sync client this will mostly be the same, with the exception
+that `content` & `text` will be methods instead of properties. In the
+async client, all methods will be async.
+
+A migration script will be provided & the migration in general should
+be smooth.
+
+#### `.with_streaming_response`
+
+The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
+
+To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
+
+As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.
+
+```python
+with client.chat.completions.with_streaming_response.create(
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test",
+        }
+    ],
+    model="gpt-3.5-turbo",
+) as response:
+    print(response.headers.get("X-My-Header"))
+
+    for line in response.iter_lines():
+        print(line)
+```
+
+The context manager is required so that the response will reliably be closed.
 
 ### Configuring the HTTP client
````
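The "only reads the body once you call a read method, always closed on exit" contract described in the new README section can be sketched with a toy stand-in. `FakeStreamingResponse` below is purely illustrative — it is not the library's actual `APIResponse` class — but it shows why the context manager matters: the body stays unread until iterated, and exiting the `with` block guarantees cleanup.

```python
# Hypothetical sketch of the lazy-read pattern behind `.with_streaming_response`;
# the class and attribute names here are illustrative, not the library's API.
class FakeStreamingResponse:
    def __init__(self, chunks):
        self._chunks = chunks  # pretend network body, not yet consumed
        self.closed = False
        self.body_read = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()  # the context manager guarantees cleanup, even on error

    def iter_lines(self):
        self.body_read = True  # the body is only consumed on demand
        yield from self._chunks

    def close(self):
        self.closed = True


with FakeStreamingResponse(["line 1", "line 2"]) as resp:
    assert not resp.body_read  # nothing has been read until we iterate
    lines = list(resp.iter_lines())

assert resp.closed  # leaving the `with` block closed the response
print(lines)
```

Without the `with` block, a response whose body was never read would keep its underlying connection open — which is exactly what the commit message warns about.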

Diff for: examples/audio.py (+10 −6)

```diff
@@ -12,14 +12,18 @@
 
 def main() -> None:
     # Create text-to-speech audio file
-    response = openai.audio.speech.create(
-        model="tts-1", voice="alloy", input="the quick brown fox jumped over the lazy dogs"
-    )
-
-    response.stream_to_file(speech_file_path)
+    with openai.audio.speech.with_streaming_response.create(
+        model="tts-1",
+        voice="alloy",
+        input="the quick brown fox jumped over the lazy dogs",
+    ) as response:
+        response.stream_to_file(speech_file_path)
 
     # Create transcription from audio file
-    transcription = openai.audio.transcriptions.create(model="whisper-1", file=speech_file_path)
+    transcription = openai.audio.transcriptions.create(
+        model="whisper-1",
+        file=speech_file_path,
+    )
     print(transcription.text)
 
     # Create translation from audio file
```
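The point of routing the audio example through the streaming interface is that the speech bytes are copied to disk in chunks rather than buffered whole. A minimal sketch of that pattern — the `stream_to_file` helper below is a hypothetical stand-in, not the library's implementation:

```python
import io
import os
import tempfile

# Hypothetical sketch: copy a response body to a file in fixed-size chunks
# so the whole payload is never held in memory at once.
def stream_to_file(body: io.BufferedIOBase, path: str, chunk_size: int = 4) -> int:
    written = 0
    with open(path, "wb") as f:
        while chunk := body.read(chunk_size):  # read at most chunk_size bytes
            f.write(chunk)
            written += len(chunk)
    return written


fd, path = tempfile.mkstemp()
os.close(fd)

# Stand-in for a streamed speech response body.
n = stream_to_file(io.BytesIO(b"fake audio bytes"), path)
print(n)  # 16

with open(path, "rb") as f:
    data = f.read()
```

A tiny `chunk_size` is used here only to make the chunking visible; a real client would use something on the order of kilobytes.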

Diff for: src/openai/__init__.py (+1)

```diff
@@ -10,6 +10,7 @@
 from ._utils import file_from_path
 from ._client import Client, OpenAI, Stream, Timeout, Transport, AsyncClient, AsyncOpenAI, AsyncStream, RequestOptions
 from ._version import __title__, __version__
+from ._response import APIResponse as APIResponse, AsyncAPIResponse as AsyncAPIResponse
 from ._exceptions import (
     APIError,
     OpenAIError,
```
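The seemingly redundant `APIResponse as APIResponse` alias in the added import is the PEP 484 convention for marking a name as an intentional public re-export, which type checkers such as mypy honor under `--no-implicit-reexport`. The sketch below builds a throwaway package on disk to show the idiom in action; `demo_pkg` and its contents are hypothetical names for illustration only:

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway two-module package to demonstrate the `X as X` idiom.
pkg = Path(tempfile.mkdtemp()) / "demo_pkg"
pkg.mkdir()
(pkg / "_response.py").write_text("class APIResponse:\n    status_code = 200\n")
# The redundant alias (`APIResponse as APIResponse`) marks the name as an
# intentional public re-export from the package root, per the PEP 484
# convention that strict type checkers rely on.
(pkg / "__init__.py").write_text(
    "from ._response import APIResponse as APIResponse\n"
)

sys.path.insert(0, str(pkg.parent))
import demo_pkg  # noqa: E402

# The name re-exported from the private module is importable at the top level.
resp = demo_pkg.APIResponse()
print(resp.status_code)
```

At runtime the alias changes nothing; its only effect is on static analysis of the package's public surface.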

0 commit comments