
Commit 942ba98

Docs and tests for litellm (#561)

1 parent a0254b0 · commit 942ba98

5 files changed: +72, -18 lines

README.md (+1, -3)

```diff
@@ -1,6 +1,6 @@
 # OpenAI Agents SDK
 
-The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows.
+The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. It is provider-agnostic, supporting the OpenAI Responses and Chat Completions APIs, as well as 100+ other LLMs.
 
 <img src="https://cdn.openai.com/API/docs/images/orchestration.png" alt="Image of the Agents Tracing UI" style="max-height: 803px;">
 
@@ -13,8 +13,6 @@ The OpenAI Agents SDK is a lightweight yet powerful framework for building multi
 
 Explore the [examples](examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.
 
-Notably, our SDK [is compatible](https://openai.github.io/openai-agents-python/models/) with any model providers that support the OpenAI Chat Completions API format.
-
 ## Get started
 
 1. Set up your Python environment
```
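
For readers skimming this diff, here is a minimal sketch of the hello-world flow the README's quickstart builds toward; it assumes the `agents` package is installed and an `OPENAI_API_KEY` is set, and is illustrative rather than necessarily the README's exact snippet:

```python
from agents import Agent, Runner

# A minimal agent; with no provider prefix, the model resolves to an
# OpenAI Responses model by default.
agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

# Run synchronously and print the final text output.
result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
```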

docs/models/index.md (+40, -15)

````diff
@@ -5,11 +5,40 @@ The Agents SDK comes with out-of-the-box support for OpenAI models in two flavor
 - **Recommended**: the [`OpenAIResponsesModel`][agents.models.openai_responses.OpenAIResponsesModel], which calls OpenAI APIs using the new [Responses API](https://platform.openai.com/docs/api-reference/responses).
 - The [`OpenAIChatCompletionsModel`][agents.models.openai_chatcompletions.OpenAIChatCompletionsModel], which calls OpenAI APIs using the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat).
 
+## Non-OpenAI models
+
+You can use most other non-OpenAI models via the [LiteLLM integration](./litellm.md). First, install the litellm dependency group:
+
+```bash
+pip install "openai-agents[litellm]"
+```
+
+Then, use any of the [supported models](https://docs.litellm.ai/docs/providers) with the `litellm/` prefix:
+
+```python
+claude_agent = Agent(model="litellm/anthropic/claude-3-5-sonnet-20240620", ...)
+gemini_agent = Agent(model="litellm/gemini/gemini-2.5-flash-preview-04-17", ...)
+```
+
+### Other ways to use non-OpenAI models
+
+You can integrate other LLM providers in 3 more ways (examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/)):
+
+1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
+2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
+3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py). An easy way to use most available models is via the [LiteLLM integration](./litellm.md).
+
+In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](../tracing.md).
+
+!!! note
+
+    In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
+
 ## Mixing and matching models
 
 Within a single workflow, you may want to use different models for each agent. For example, you could use a smaller, faster model for triage, while using a larger, more capable model for complex tasks. When configuring an [`Agent`][agents.Agent], you can select a specific model by either:
 
-1. Passing the name of an OpenAI model.
+1. Passing the name of a model.
 2. Passing any model name + a [`ModelProvider`][agents.models.interface.ModelProvider] that can map that name to a Model instance.
 3. Directly providing a [`Model`][agents.models.interface.Model] implementation.
````
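To see what the added `litellm/` prefix does end to end, here is a runnable sketch under stated assumptions: the `litellm` extra is installed, an `ANTHROPIC_API_KEY` is in the environment, and tracing is disabled because no `platform.openai.com` key is assumed. The model name is the one used in the added docs:

```python
import asyncio

from agents import Agent, Runner, set_tracing_disabled

# Disable tracing, since this sketch assumes no platform.openai.com key.
set_tracing_disabled(True)

# The "litellm/" prefix routes this agent through the LiteLLM integration.
agent = Agent(
    name="Assistant",
    instructions="Reply concisely.",
    model="litellm/anthropic/claude-3-5-sonnet-20240620",
)

async def main() -> None:
    result = await Runner.run(agent, "What is the capital of France?")
    print(result.final_output)

asyncio.run(main())
```
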
````diff
@@ -64,20 +93,6 @@ english_agent = Agent(
 )
 ```
 
-## Using other LLM providers
-
-You can use other LLM providers in 3 ways (examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/)):
-
-1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
-2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
-3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py). An easy way to use most available models is via the [LiteLLM integration](./litellm.md).
-
-In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](../tracing.md).
-
-!!! note
-
-    In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
-
 ## Common issues with using other LLM providers
 
 ### Tracing client error 401
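This hunk only removes text: the "Using other LLM providers" section moved up, becoming "Other ways to use non-OpenAI models" in the first hunk. As an illustration of approach 3 from that list, here is a sketch of pointing a single agent at an OpenAI-compatible endpoint; the base URL, API key, and model name below are placeholders, not a real provider:

```python
from openai import AsyncOpenAI

from agents import Agent, OpenAIChatCompletionsModel, set_tracing_disabled

set_tracing_disabled(True)  # assuming no platform.openai.com key

# Hypothetical OpenAI-compatible endpoint and credentials.
client = AsyncOpenAI(base_url="https://example.com/v1", api_key="PROVIDER_API_KEY")

agent = Agent(
    name="Assistant",
    instructions="Reply concisely.",
    # A Chat Completions model, since most providers don't support Responses yet.
    model=OpenAIChatCompletionsModel(model="some-model-name", openai_client=client),
)
```
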
````diff
@@ -100,7 +115,17 @@ The SDK uses the Responses API by default, but most other LLM providers don't ye
 Some model providers don't have support for [structured outputs](https://platform.openai.com/docs/guides/structured-outputs). This sometimes results in an error that looks something like this:
 
 ```
+
 BadRequestError: Error code: 400 - {'error': {'message': "'response_format.type' : value is not one of the allowed values ['text','json_object']", 'type': 'invalid_request_error'}}
+
 ```
 
 This is a shortcoming of some model providers - they support JSON outputs, but don't allow you to specify the `json_schema` to use for the output. We are working on a fix for this, but we suggest relying on providers that do have support for JSON schema output, because otherwise your app will often break because of malformed JSON.
+
+## Mixing models across providers
+
+You need to be aware of feature differences between model providers, or you may run into errors. For example, OpenAI supports structured outputs, multimodal input, and hosted file search and web search, but many other providers don't support these features. Be aware of these limitations:
+
+- Don't send unsupported `tools` to providers that don't understand them
+- Filter out multimodal inputs before calling models that are text-only
+- Be aware that providers that don't support structured JSON outputs will occasionally produce invalid JSON.
````
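To ground the new "Mixing models across providers" bullets, here is a sketch of a mixed-provider workflow in the style of the doc's own triage example; the agent names and instructions are illustrative:

```python
from agents import Agent

# A non-OpenAI model via LiteLLM; give it plain-text work only, since its
# provider may not support OpenAI-style structured outputs or hosted tools.
spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
    model="litellm/anthropic/claude-3-5-sonnet-20240620",
)

# An OpenAI model handles triage and hands off to the specialist.
triage_agent = Agent(
    name="Triage agent",
    instructions="Hand off to the Spanish agent for Spanish requests.",
    model="gpt-4o",
    handoffs=[spanish_agent],
)
```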

tests/models/__init__.py

Whitespace-only changes.

tests/models/conftest.py (+11)

```diff
@@ -0,0 +1,11 @@
+import os
+import sys
+
+
+# Skip the model tests in this directory on Python 3.9
+def pytest_ignore_collect(collection_path, config):
+    if sys.version_info[:2] == (3, 9):
+        this_dir = os.path.dirname(__file__)
+
+        if str(collection_path).startswith(this_dir):
+            return True
```
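A note on the design choice: `pytest_ignore_collect` stops pytest from even importing modules under `tests/models` on 3.9, which matters if an import itself (such as the litellm extension) would fail there. The more common module-level marker, sketched below under that assumption, only skips after the module has been imported:

```python
# Sketch of the alternative: a module-level skip marker placed inside each
# test file. Note pytest still imports the module before skipping it.
import sys

import pytest

pytestmark = pytest.mark.skipif(
    sys.version_info[:2] == (3, 9),
    reason="model tests are skipped on Python 3.9",
)
```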

tests/models/test_map.py (+20)

```diff
@@ -0,0 +1,20 @@
+from agents import Agent, OpenAIResponsesModel, RunConfig, Runner
+from agents.extensions.models.litellm_model import LitellmModel
+
+
+def test_no_prefix_is_openai():
+    agent = Agent(model="gpt-4o", instructions="", name="test")
+    model = Runner._get_model(agent, RunConfig())
+    assert isinstance(model, OpenAIResponsesModel)
+
+
+def test_openai_prefix_is_openai():
+    agent = Agent(model="openai/gpt-4o", instructions="", name="test")
+    model = Runner._get_model(agent, RunConfig())
+    assert isinstance(model, OpenAIResponsesModel)
+
+
+def test_litellm_prefix_is_litellm():
+    agent = Agent(model="litellm/foo/bar", instructions="", name="test")
+    model = Runner._get_model(agent, RunConfig())
+    assert isinstance(model, LitellmModel)
```
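These tests pin down the model-string resolution contract the docs above rely on. Roughly, the behavior they assert looks like the following sketch, which is an illustration of the contract only, not the SDK's actual implementation of `Runner._get_model`:

```python
# Illustration only: the prefix contract the tests above assert.
def resolved_model_class(model_name: str) -> str:
    if model_name.startswith("litellm/"):
        return "LitellmModel"
    # Bare names ("gpt-4o") and "openai/"-prefixed names map to the default.
    return "OpenAIResponsesModel"

assert resolved_model_class("litellm/foo/bar") == "LitellmModel"
assert resolved_model_class("openai/gpt-4o") == "OpenAIResponsesModel"
```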
