Add pipeline for tests #203


Closed
wants to merge 7 commits

Conversation

@thejaminator (Contributor) commented Jan 26, 2023:

Added a CI pipeline to run the tests.
It should also catch errors like #199.


An example run is on my fork: thejaminator#1

The maintainers of this repo will need to set the API key (OPENAI_API_KEY=<API-KEY>) as a GitHub secret, so that the tests requiring an API key can run as part of the pipeline.

Alternatively, tests that require authentication with the API key can be separated from those that don't, and disabled when no key is available. Or the API responses can be mocked.
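One way to separate the two groups is a skip decorator keyed on the environment variable, so the rest of the suite still runs in CI without the secret. A minimal sketch with `unittest` (the test names here are hypothetical, and the real suite may use pytest instead):

```python
import os
import unittest

# Skip live-API tests when the OPENAI_API_KEY secret is not configured.
requires_api_key = unittest.skipUnless(
    "OPENAI_API_KEY" in os.environ,
    "OPENAI_API_KEY not set; skipping tests that hit the live API",
)

class TestFiles(unittest.TestCase):
    @requires_api_key
    def test_file_create_roundtrip(self):
        # Would call the live API here.
        pass

    def test_local_only(self):
        # Runs everywhere; no credentials needed.
        self.assertTrue(True)
```

With pytest, the equivalent is a custom marker plus `pytest -m "not requires_api_key"` on runs without the secret.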

)
-assert result.purpose == "search"
+assert result.purpose == "fine-tune"
assert "id" in result

@thejaminator (Contributor, Author):

@Andrew-Chen-Wang this test was failing; I just copied the test case from the synchronous version to fix it.

@thejaminator (Contributor, Author):

@hallacy @ddeville would this PR be helpful?

@rattrayalex (Collaborator):

Thanks for this!

We've since rewritten the library entirely, so this change is no longer relevant. I'm sorry we didn't get to it sooner.

We currently run CI for this repo in a private mirror, but hope to add public CI with tests soon.

safa0 pushed a commit to safa0/openai-agents-python that referenced this pull request Apr 27, 2025
## Context
By default, the outputs of tools are sent to the LLM again. The LLM gets
to read the outputs, and produce a new response. There are cases where
this is not desired:
1. Every tool results in another round trip, and sometimes the output of
the tool is enough.
2. If you force tool use (via the model setting `tool_choice=required`),
the agent will loop forever.

This enables you to have different behavior, e.g. use the first tool
output as the final output, or write a custom function to process tool
results and potentially produce an output.
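The control flow described above can be sketched as a plain-Python loop. Note this is a simplified illustration with hypothetical names (`run_agent`, `call_llm`, the behavior strings), not the real SDK API:

```python
def run_agent(call_llm, tools, tool_use_behavior="run_llm_again", max_turns=5):
    """Toy agent loop illustrating tool-output handling.

    tool_use_behavior:
      - "run_llm_again": feed tool output back to the LLM (the default).
      - "stop_on_first_tool": the first tool's output is the final output.
      - a callable: receives the tool result; a non-None return ends the run.
    """
    messages = []
    for _ in range(max_turns):
        action = call_llm(messages)
        if action["type"] == "final":
            return action["content"]

        # The model requested a tool; run it.
        result = tools[action["tool"]](action["args"])

        if tool_use_behavior == "stop_on_first_tool":
            return result
        if callable(tool_use_behavior):
            maybe_final = tool_use_behavior(result)
            if maybe_final is not None:
                return maybe_final

        # Default: another round trip through the LLM.
        messages.append({"role": "tool", "content": result})

    raise RuntimeError("hit max_turns: the model kept requesting tools")
```

With a model stub that always requests a tool (simulating `tool_choice=required`), the default behavior exhausts `max_turns`, while `"stop_on_first_tool"` returns immediately.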

## Test plan
Added new tests and ran the existing tests; also added examples.


Closes openai#117
2 participants