
Assistant with gpt-4o and gpt-4o-mini may call unsupported tool 'browser' and throw exception #1574

Open
@kunerzzz

Description


Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

When calling the Assistants API with the gpt-4o or gpt-4o-mini model, the model may attempt to call an unsupported 'browser' tool. The Python SDK does not define a ToolCall type for it, so an exception is thrown.

File ""assistant.py"", line 62, in <module>
    stream.until_done()
  File ""/usr/local/lib/python3.7/site-packages/openai/lib/streaming/_assistants.py"", line 102, in until_done
    consume_sync_iterator(self)
  File ""/usr/local/lib/python3.7/site-packages/openai/_utils/_streams.py"", line 6, in consume_sync_iterator
    for _ in iterator:
  File ""/usr/local/lib/python3.7/site-packages/openai/lib/streaming/_assistants.py"", line 69, in __iter__
    for item in self._iterator:
  File ""/usr/local/lib/python3.7/site-packages/openai/lib/streaming/_assistants.py"", line 406, in __stream__
    self._emit_sse_event(event)
  File ""/usr/local/lib/python3.7/site-packages/openai/lib/streaming/_assistants.py"", line 267, in _emit_sse_event
    run_step_snapshots=self.__run_step_snapshots,
  File ""/usr/local/lib/python3.7/site-packages/openai/lib/streaming/_assistants.py"", line 913, in accumulate_run_step
    data.delta.model_dump(exclude_unset=True),
AttributeError: 'dict' object has no attribute 'model_dump'

The root cause of this issue is that the API returns unexpected content. However, the SDK should be able to handle this scenario gracefully.
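For illustration only, here is a minimal sketch (an assumption about one possible fix, not the library's actual code) of the kind of defensive handling that would avoid the crash in accumulate_run_step: when a run-step delta for an unmodeled tool type arrives as a plain dict, use it directly instead of calling model_dump on it.

# Hypothetical sketch only -- not the SDK's real implementation.
# accumulate_run_step currently assumes data.delta is a pydantic model and
# calls data.delta.model_dump(exclude_unset=True); when the tool type is
# unknown to the SDK (e.g. 'browser'), the delta can surface as a plain dict.
def delta_as_dict(delta):
    if isinstance(delta, dict):
        # Unmodeled payload: already a plain dict, use it as-is.
        return delta
    # Normal case: serialize the pydantic model, excluding unset fields.
    return delta.model_dump(exclude_unset=True)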

To Reproduce

  1. Ask the assistant to use the browser tool while giving it only the Code Interpreter tool, as in the snippet below.
import openai  # uses the module-level default client; assumes OPENAI_API_KEY is set

with openai.beta.threads.create_and_run_stream(
    # Placeholder ID: an assistant that only has Code Interpreter enabled.
    assistant_id='asst_...',
    model='gpt-4o-mini',
    instructions="Use browser tool first to answer the user's question",
    thread={
        'messages': [{'role': 'user', 'content': 'Who is Tom'}]
    },
    tool_choice='required',
) as stream:
    stream.until_done()

This issue can occur under normal usage scenarios, but it is consistently reproducible with the specified configuration.
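Until the SDK handles unknown tool calls itself, one possible application-level workaround (my own suggestion, not an official fix) is to guard the streaming loop so the AttributeError does not crash the caller:

import openai  # assumes OPENAI_API_KEY is set in the environment

try:
    with openai.beta.threads.create_and_run_stream(
        assistant_id='asst_...',  # placeholder: a Code Interpreter-only assistant
        model='gpt-4o-mini',
        instructions="Use browser tool first to answer the user's question",
        thread={'messages': [{'role': 'user', 'content': 'Who is Tom'}]},
        tool_choice='required',
    ) as stream:
        stream.until_done()
except AttributeError as exc:
    # Raised while the SDK accumulates the run step for the unsupported
    # 'browser' tool call ('dict' object has no attribute 'model_dump').
    print(f"Stream aborted by an unsupported tool call: {exc}")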

Code snippets

No response

OS

Linux

Python version

Python 3.7.4

Library version

openai v1.36.0

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests