
Inference provider for vllm #2886

Open
@jeffmaury

Description

Is your feature request related to a problem? Please describe

Allow running models with vLLM.
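
For context, vLLM exposes an OpenAI-compatible HTTP API (by default on port 8000), so a client in the extension could talk to it the same way it talks to other OpenAI-style backends. The sketch below is illustrative only; the port, endpoint path, and model name are assumptions about a locally running vLLM server, not part of this request.

```typescript
// Minimal sketch: querying a locally running vLLM server through its
// OpenAI-compatible HTTP API. Assumes vLLM is already serving a model on
// localhost:8000; the model name below is a placeholder.
async function queryVllm(prompt: string): Promise<string> {
  const response = await fetch('http://localhost:8000/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'placeholder-model-name', // whatever model the server was started with
      messages: [{ role: 'user', content: prompt }],
      max_tokens: 128,
    }),
  });
  if (!response.ok) {
    throw new Error(`vLLM server returned ${response.status}`);
  }
  const data = await response.json();
  return data.choices[0].message.content;
}
```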

Describe the solution you'd like

Add a new inference server provider for vLLM.
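
A rough sketch of the container configuration such a provider might assemble is below. The TypeScript interface and function name are hypothetical (not an existing AI Lab API); the vllm/vllm-openai image and its --model/--port flags come from upstream vLLM's OpenAI-compatible server.

```typescript
// Hedged sketch of what a vLLM inference provider could assemble when starting
// the server as a container. Interface and function name are hypothetical.
interface VllmContainerConfig {
  image: string;
  args: string[];
  portMapping: { containerPort: number; hostPort: number };
}

function buildVllmContainerConfig(modelId: string, hostPort: number): VllmContainerConfig {
  return {
    image: 'vllm/vllm-openai:latest',
    // Arguments are passed to vLLM's OpenAI-compatible API server entrypoint.
    args: ['--model', modelId, '--port', '8000'],
    portMapping: { containerPort: 8000, hostPort },
  };
}

// Example usage (model id is just an illustration):
// buildVllmContainerConfig('mistralai/Mistral-7B-Instruct-v0.2', 35000);
```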

Describe alternatives you've considered

No response

Additional context

No response

Metadata

Labels: kind/feature 💡 Issue for requesting a new feature
