Description
Confirm this is an issue with the Python library and not an underlying OpenAI API
- This is an issue with the Python library
Describe the bug
When using an AsyncOpenAI object with a proxy configured via the http_client parameter, the exception "TypeError: object Response can't be used in 'await' expression" is raised in openai/_base_client.py, and the request ultimately fails with openai.APIConnectionError after the retries are exhausted.
Full exception details:
Traceback (most recent call last):
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1441, in _request
    response = await self._client.send(
TypeError: object Response can't be used in 'await' expression

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1441, in _request
    response = await self._client.send(
TypeError: object Response can't be used in 'await' expression

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1441, in _request
    response = await self._client.send(
TypeError: object Response can't be used in 'await' expression

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/apple/project/myproject-aigc/test.py", line 87, in <module>
    asyncio.run(main())
  File "/Users/apple/opt/miniconda3/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/apple/opt/miniconda3/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/Users/apple/project/space-aigc/test.py", line 75, in main
    chat_completion = await client.chat.completions.create(
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/resources/chat/completions.py", line 1300, in create
    return await self._post(
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1705, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1407, in request
    return await self._request(
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1461, in _request
    return await self._retry_request(
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1530, in _retry_request
    return await self._request(
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1461, in _request
    return await self._retry_request(
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1530, in _retry_request
    return await self._request(
  File "/Users/apple/opt/miniconda3/lib/python3.9/site-packages/openai/_base_client.py", line 1471, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
To Reproduce
- Create an AsyncOpenAI object, passing an httpx.Client configured with a proxy address as the http_client parameter
- Initiate any asynchronous request
Code snippets
import asyncio

import httpx
from openai import AsyncOpenAI

client = AsyncOpenAI(
    # This is the default and can be omitted
    api_key=<api_key>,
    http_client=httpx.Client(proxies=<proxy_address>),
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="gpt-3.5-turbo",
    )
    print(chat_completion)


asyncio.run(main())
OS
macOS
Python version
Python v3.9.13
Library version
openai v1.9.0