Description
When using custom models with Agents, they might return their reasoning between special tokens (e.g. `<think>` ... `</think>`) in the ChatCompletion response. Should this text be turned into a reasoning output item? (with the caveat that we still need it as an input message on the next step)
I think it would make working with custom LLMs simpler if the reasoning part were separated from the actual final answer.
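
As a rough illustration, a minimal sketch of what that separation could look like is below. The `<think>` tag convention, the `ReasoningItem`/`MessageItem` containers, and the `split_reasoning` helper are all hypothetical, not part of the Agents SDK or the API.

```python
import re
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical containers, not part of the Agents SDK.
@dataclass
class ReasoningItem:
    text: str  # the model's reasoning, kept so it can be fed back on the next step

@dataclass
class MessageItem:
    text: str  # the user-facing final answer

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(completion_text: str) -> Tuple[Optional[ReasoningItem], MessageItem]:
    """Split a raw ChatCompletion message into a reasoning item and the final answer.

    Assumes the custom model wraps its reasoning in <think>...</think> tags;
    other models may use different delimiters.
    """
    match = THINK_RE.search(completion_text)
    if match is None:
        return None, MessageItem(text=completion_text.strip())
    reasoning = ReasoningItem(text=match.group(1).strip())
    answer = THINK_RE.sub("", completion_text).strip()
    return reasoning, MessageItem(text=answer)

# Example: the reasoning is separated out, but both pieces can still be
# re-serialized into the next request as the assistant's prior output.
reasoning, message = split_reasoning(
    "<think>The user asked for 2+2, which is 4.</think>The answer is 4."
)
```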
The problem is that the API only defines a "summary" reasoning type. Would we maybe want to invent a "detail" type just for local use?
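
If such a local-only type were added, it might look something like the sketch below; the `"detail"` literal is purely illustrative and not something the API defines today.

```python
from dataclasses import dataclass
from typing import Literal

# Purely illustrative: a local-only reasoning content type alongside the
# API's existing "summary" type. Nothing here is defined by the API.
@dataclass
class ReasoningContent:
    type: Literal["summary", "detail"]
    text: str

detail = ReasoningContent(type="detail", text="full reasoning text kept locally")
```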