
OpenAI error payloads in the body may not get parsed fully during chat streaming #479

Open
@atesgoral

Describe the bug

With OpenAI chat streaming, if OpenAI returns an HTTP error (e.g. 400), the error handler optimistically tries to parse the first chunk it receives as if it were the entire body. However, network conditions or an intermediary such as an OpenAI proxy may cause the response to be fragmented across multiple chunks.

The symptom is an error being raised with only the first chunk of the body as its payload. While this is not a very severe problem, since it happens during what is already an API call failure (which should ideally be rare), failing to collect and parse the full error body makes it hard for application developers to figure out exactly what went wrong.

The issue is here, introduced in #344:

raise_error.on_complete(env.merge(body: try_parse_json(chunk)))
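To make the failure mode concrete, here is a minimal, self-contained sketch. The try_parse_json below is an assumed stand-in that falls back to the raw string when parsing fails (which is what the truncated-string symptom suggests), and the chunk boundary is hypothetical:

```ruby
require "json"

# Hypothetical illustration: an OpenAI error body split across two chunks
# by a proxy or by TCP fragmentation.
chunks = [
  '{"error":{"message":"Test error","type":"',
  'test_error","param":null,"code":"test"}}'
]

# Assumed stand-in for the gem's try_parse_json: parse if possible,
# otherwise fall back to the raw string.
def try_parse_json(maybe_json)
  JSON.parse(maybe_json)
rescue JSON::ParserError
  maybe_json
end

p try_parse_json(chunks.first) # => "{\"error\":{\"message\":\"Test error\",\"type\":\"" (truncated string)
p try_parse_json(chunks.join)  # => {"error"=>{"message"=>"Test error", "type"=>"test_error", ...}} (full hash)
```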

To Reproduce

Hard to reproduce in vivo, unless you're running a special HTTP/TCP proxy that re-chunks the original TCP packets, or the network conditions are poor.

Minimal reproducible unit test: #480. The diff from the failing expectation shows the truncated first chunk (actual) where the parsed error hash is expected:

-"error" => {"code"=>"test", "message"=>"Test error", "param"=>nil, "type"=>"test_error"},
+"{\"error\":{\"message\":\"Test error\",\"type\":\""

Expected behavior

The full response body is collected (chunks concatenated) and parsed as JSON.
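One possible shape for the fix, sketched below under the assumption that the streaming handler can buffer chunks until the response completes (the class and method names are illustrative, not the gem's actual API): concatenate every error chunk and only then attempt to parse the whole body.

```ruby
require "json"

# Illustrative sketch only: buffer error chunks as they arrive and parse the
# concatenated body once the stream completes.
class ErrorBodyCollector
  def initialize
    @buffer = +""
  end

  def on_chunk(chunk)
    @buffer << chunk
  end

  def parsed_body
    JSON.parse(@buffer)
  rescue JSON::ParserError
    @buffer # still fall back to the raw string if the body isn't valid JSON
  end
end

collector = ErrorBodyCollector.new
collector.on_chunk('{"error":{"message":"Test error","type":"')
collector.on_chunk('test_error","param":null,"code":"test"}}')

p collector.parsed_body
# => {"error"=>{"message"=>"Test error", "type"=>"test_error", "param"=>nil, "code"=>"test"}}
```

The fully parsed body could then be handed to raise_error.on_complete in place of the single chunk.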
