
Added RouteErrorHandler for server #481


Merged 1 commit into abetlen:main on Jul 20, 2023

Conversation

@c0sogi (Contributor) commented Jul 16, 2023

I've added a RouteErrorHandler that wraps any exception raised by the server into an OpenAI-like error response.
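For reference, this follows FastAPI's custom APIRoute pattern: override get_route_handler and wrap the original handler in a try/except. A minimal sketch of that pattern, not necessarily identical to the code in this diff:

from typing import Any, Callable, Coroutine

from fastapi import Request, Response
from fastapi.responses import JSONResponse
from fastapi.routing import APIRoute


class RouteErrorHandler(APIRoute):
    """Wrap uncaught route exceptions in an OpenAI-like error response."""

    def get_route_handler(self) -> Callable[[Request], Coroutine[Any, Any, Response]]:
        original_route_handler = super().get_route_handler()

        async def custom_route_handler(request: Request) -> Response:
            try:
                return await original_route_handler(request)
            except Exception as exc:
                # The real handler also maps known errors (such as the
                # context-length case described below) to status 400 with a
                # specific error code; this sketch only shows the generic 500 path.
                return JSONResponse(
                    status_code=500,
                    content={
                        "error": {
                            "message": str(exc),
                            "type": "internal_server_error",
                            "param": None,
                            "code": None,
                        }
                    },
                )

        return custom_route_handler

Routes opt in via the router, e.g. APIRouter(route_class=RouteErrorHandler).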

For example, with n_ctx=2048, a request with a 2049-token prompt and max_tokens=1 asks for 2050 tokens, exceeding the 2048-token limit, so you get a JSON error response with status code 400 like the one below.

{
  "error": {
    "message": "This model's maximum context length is 2048 tokens, however you requested 2050 tokens (2049 in your prompt; 1 for the completion). Please reduce your prompt; or completion length.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  }
}

This friendly message should help you determine how many prompt tokens to remove before resending the request.
I also removed the `le` constraint from the max_tokens field, since the fixed 2048-token limit no longer applies with the addition of the custom RoPE feature. An error is now raised only when the prompt tokens exceed n_ctx.
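Roughly, the validation behaves like the sketch below (function and variable names here are illustrative, not the exact code in the diff):

def validate_context_length(n_ctx: int, prompt_tokens: int, max_tokens: int) -> None:
    # Reject requests whose prompt plus requested completion cannot fit in the
    # model's context window; the message mirrors OpenAI's wording so clients
    # can read the token counts out of it.
    requested = prompt_tokens + max_tokens
    if requested > n_ctx:
        raise ValueError(
            f"This model's maximum context length is {n_ctx} tokens, "
            f"however you requested {requested} tokens "
            f"({prompt_tokens} in your prompt; {max_tokens} for the completion). "
            "Please reduce your prompt; or completion length."
        )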

Any other exception is wrapped as an internal_server_error with status code 500. For example:

{
  "error": {
    "message": "[{'type': 'int_parsing', 'loc': ('body', 'max_tokens'), 'msg': 'Input should be a valid integer, unable to parse string as an integer', 'input': 'foo', 'url': 'https://errors.pydantic.dev/2.0.3/v/int_parsing'}]",
    "type": "internal_server_error",
    "param": null,
    "code": null
  }
}
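On the client side this makes errors easy to branch on. A small illustrative example (the URL and payload are placeholders, not taken from the diff):

import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "a very long prompt ..."}],
        "max_tokens": 1,
    },
)
if resp.status_code != 200:
    error = resp.json()["error"]
    if error.get("code") == "context_length_exceeded":
        # Shrink the prompt using the token counts reported in the message.
        print("Prompt too long:", error["message"])
    else:
        print(f"{error['type']}: {error['message']}")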

However, in the event-stream case, if an exception is thrown inside the chunk iterator, the client may receive a response that is not valid JSON.
So the server should first check whether an exception is raised by pulling the first response from the iterator before streaming. That part can be done as follows:

    iterator_or_completion: Union[
        llama_cpp.ChatCompletion, Iterator[llama_cpp.ChatCompletionChunk]
    ] = await run_in_threadpool(llama.create_chat_completion, **kwargs)

    if isinstance(iterator_or_completion, Iterator):  # replaced with `body.stream`
        # EAFP: It's easier to ask for forgiveness than permission
        first_response = await run_in_threadpool(next, iterator_or_completion)

        # If no exception was raised from first_response, we can assume that
        # the iterator is valid and we can use it to stream the response.
        def iterator() -> Iterator[llama_cpp.ChatCompletionChunk]:
            yield first_response
            yield from iterator_or_completion

        send_chan, recv_chan = anyio.create_memory_object_stream(10)
        return EventSourceResponse(
            recv_chan, data_sender_callable=partial(  # type: ignore
                get_event_publisher,
                request=request,
                inner_send_chan=send_chan,
                iterator=iterator(),
            )
        )
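For context, get_event_publisher (referenced above) is the piece that drains the chunk iterator into the anyio send channel, which EventSourceResponse reads from the paired receive channel as server-sent events. An approximate sketch of what such a publisher could look like (assumed shape, not taken from the repository):

import json

import anyio
from starlette.concurrency import iterate_in_threadpool


async def get_event_publisher(request, inner_send_chan, iterator):
    # Forward each chunk from the (synchronous) iterator to the send channel
    # as a server-sent "data:" event, closing the channel when done.
    async with inner_send_chan:
        try:
            async for chunk in iterate_in_threadpool(iterator):
                await inner_send_chan.send(dict(data=json.dumps(chunk)))
                if await request.is_disconnected():
                    raise anyio.get_cancelled_exc_class()()
            await inner_send_chan.send(dict(data="[DONE]"))
        except anyio.get_cancelled_exc_class():
            # Client disconnected; give pending work a moment, then stop.
            with anyio.move_on_after(1, shield=True):
                pass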

@abetlen (Owner) commented Jul 20, 2023

@c0sogi thank you for the contribution! LGTM

@abetlen abetlen merged commit 365d9a4 into abetlen:main Jul 20, 2023