### Confirm this is an issue with the Python library and not an underlying OpenAI API

- This is an issue with the Python library
### Describe the bug
All API errors raised during streaming carry the same generic message ("An error occurred during streaming").
See line 64 of `_streaming.py`: the `data["error"]` field is disregarded entirely for streaming errors, even though it could be read to raise the correct type of error with the server-provided message.
This is a major quality-of-life issue for custom inference servers that aim to follow the OpenAI API standard.
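For illustration, a spec-compliant server can end a stream with an error event whose body looks roughly like this (the field values below are made up):

```python
# Hypothetical mid-stream error payload from a custom inference server.
# Field values are illustrative only.
data = {
    "error": {
        "message": "The requested model is currently overloaded.",
        "type": "server_error",
        "param": None,
        "code": "model_overloaded",
    }
}
```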
### To Reproduce

- Spin up a custom inference server.
- Have users access it via the `openai` Python library.
- Introduce an arbitrary error in your inference server and handle it correctly by returning the error's code, param, and type inside of the streaming response.
- Notice that the users will always get the generic "An error occurred during streaming" message, regardless of what the server returned. A minimal client-side sketch is shown below.
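For example, a minimal client-side reproduction might look like this (the base URL, model name, and API key are placeholders for a custom OpenAI-compatible server):

```python
from openai import OpenAI, APIError

# Placeholder base URL, key, and model for a custom OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

try:
    stream = client.chat.completions.create(
        model="my-model",
        messages=[{"role": "user", "content": "Hello"}],
        stream=True,
    )
    for chunk in stream:
        pass
except APIError as exc:
    # Always prints the generic message, no matter which error body
    # the server placed in the stream.
    print(exc.message)
```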
### Code snippets

The relevant part of the codebase uses this code snippet:

```python
if is_mapping(data) and data.get("error"):
    raise APIError(
        message="An error occurred during streaming",
        request=self.response.request,
        body=data["error"],
    )
```
A different part of the codebase uses this code snippet:

```python
raise self._make_status_error_from_response(err.response) from None
```

This latter kind of error construction should also be used while iterating over a stream, so that the raised error's type and message reflect what the server actually returned.
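As a rough, purely illustrative sketch, the error body from a stream event could be turned into a typed exception along these lines (the helper name `error_from_stream_body` and the mapping from the error `type` field to exception classes are hypothetical, not part of the library):

```python
from typing import Any

import httpx

from openai import (
    APIError,
    APIStatusError,
    AuthenticationError,
    BadRequestError,
    RateLimitError,
)

# Hypothetical mapping from the error "type" field to typed exceptions.
_ERROR_TYPES: dict[str, type[APIStatusError]] = {
    "invalid_request_error": BadRequestError,
    "authentication_error": AuthenticationError,
    "rate_limit_error": RateLimitError,
}


def error_from_stream_body(error: dict[str, Any], response: httpx.Response) -> APIError:
    """Build an exception from the "error" object found in a stream event."""
    message = error.get("message") or "An error occurred during streaming"
    exc_cls = _ERROR_TYPES.get(error.get("type") or "")
    if exc_cls is not None:
        # Typed errors are APIStatusError subclasses and take the HTTP response;
        # during streaming only the (200) stream response is available.
        return exc_cls(message, response=response, body=error)
    # Fall back to the generic APIError, but keep the server's message.
    return APIError(message, request=response.request, body=error)
```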
### OS
Ubuntu 20.04.5 LTS
### Python version
Python v3.11.6
### Library version
openai v1.11.1