feat(client): add support for streaming raw responses #1072


Merged 1 commit on Jan 15, 2024
README.md: 35 additions & 2 deletions

### Accessing raw response data (e.g. headers)

The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call.
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,

```py
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.with_raw_response.create(
    messages=[{
        "role": "user",
        "content": "Say this is a test",
    }],
    model="gpt-3.5-turbo",
)
print(response.headers.get("X-My-Header"))

completion = response.parse()  # get the object that `chat.completions.create()` would have returned
print(completion)
```

These methods return a [`LegacyAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_legacy_response.py) object. This is a legacy class, as we're changing it slightly in the next major version.

For the sync client this will mostly be the same, with the exception that `content` & `text` will be methods instead of properties. In the async client, all methods will be async.

A migration script will be provided & the migration in general should
be smooth.
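
A rough sketch of that change, reusing the raw-response example above; the method-style accessors shown for the next major version are an assumption drawn from this description, not a confirmed API:

```python
# Today: on the LegacyAPIResponse returned by `.with_raw_response.`,
# `content` and `text` are properties.
response = client.chat.completions.with_raw_response.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-3.5-turbo",
)
raw_bytes = response.content  # property access (today)
raw_text = response.text  # property access (today)

# Assumed shape after the next major version: both become methods,
# and on the async client they must be awaited.
# raw_bytes = response.content()
# raw_text = response.text()
# raw_text = await response.text()  # async client
```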

#### `.with_streaming_response`

The above interface eagerly reads the full response body when you make the request, which may not always be what you want.

To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.

As such, `.with_streaming_response` methods return a different [`APIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object, and the async client returns an [`AsyncAPIResponse`](https://github.com/openai/openai-python/tree/main/src/openai/_response.py) object.

```python
with client.chat.completions.with_streaming_response.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-3.5-turbo",
) as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```

The context manager is required so that the response will reliably be closed.
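
For completeness, a sketch of the async variant, assuming an `AsyncOpenAI` client; as noted above, the iterator methods on the async client are async:

```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()


async def main() -> None:
    async with client.chat.completions.with_streaming_response.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-3.5-turbo",
    ) as response:
        print(response.headers.get("X-My-Header"))

        async for line in response.iter_lines():
            print(line)


asyncio.run(main())
```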

### Configuring the HTTP client

examples/audio.py: 10 additions & 6 deletions

```diff
@@ -12,14 +12,18 @@

 def main() -> None:
     # Create text-to-speech audio file
-    response = openai.audio.speech.create(
-        model="tts-1", voice="alloy", input="the quick brown fox jumped over the lazy dogs"
-    )
-
-    response.stream_to_file(speech_file_path)
+    with openai.audio.speech.with_streaming_response.create(
+        model="tts-1",
+        voice="alloy",
+        input="the quick brown fox jumped over the lazy dogs",
+    ) as response:
+        response.stream_to_file(speech_file_path)

     # Create transcription from audio file
-    transcription = openai.audio.transcriptions.create(model="whisper-1", file=speech_file_path)
+    transcription = openai.audio.transcriptions.create(
+        model="whisper-1",
+        file=speech_file_path,
+    )
     print(transcription.text)

     # Create translation from audio file
```
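
Since the streaming response also exposes the iterator methods listed above, the file could equally be written manually; a small sketch (the chunked write is illustrative, and `stream_to_file` remains the simpler choice):

```python
# Illustrative alternative to stream_to_file(): write the audio
# chunk by chunk using iter_bytes() on the streaming response.
with openai.audio.speech.with_streaming_response.create(
    model="tts-1",
    voice="alloy",
    input="the quick brown fox jumped over the lazy dogs",
) as response:
    with open(speech_file_path, "wb") as f:
        for chunk in response.iter_bytes():
            f.write(chunk)
```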
src/openai/__init__.py: 1 addition & 0 deletions

```diff
@@ -10,6 +10,7 @@
 from ._utils import file_from_path
 from ._client import Client, OpenAI, Stream, Timeout, Transport, AsyncClient, AsyncOpenAI, AsyncStream, RequestOptions
 from ._version import __title__, __version__
+from ._response import APIResponse as APIResponse, AsyncAPIResponse as AsyncAPIResponse
 from ._exceptions import (
     APIError,
     OpenAIError,
```
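
With this re-export, the response classes can be referenced from the package root, e.g. in type annotations; a minimal sketch with hypothetical helper functions:

```python
from openai import APIResponse, AsyncAPIResponse


def log_request_id(response: APIResponse) -> None:
    # Hypothetical helper: inspect a streaming raw response's headers.
    print(response.headers.get("x-request-id"))


async def log_request_id_async(response: AsyncAPIResponse) -> None:
    # Hypothetical async counterpart.
    print(response.headers.get("x-request-id"))
```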