
Design for sharing GPU across multiple APIs #1390

Open
@RobertLucian

Description

Currently, an API can only be given whole GPU units. Add support for fractional values for the GPU resource, expressed in Kubernetes-style milli-units (e.g. 300m for 0.3 of a GPU). Here's an example:

# cortex.yaml
- name: <string>  # API name (required)
  # ...
  compute:
    gpu: 300m

Motivation

Better GPU resource utilization across multiple APIs, which can reduce the overall costs for the user. If an API is expected to run only rarely and doesn't need much inference capacity, then allocating 100m of a GPU may be all it needs; the current alternative is to dedicate an entire GPU to the API, which can be expensive and wasteful.

Additional context

There are 2 ways to address this:

  1. At the driver level (i.e. via the device plugin for the GPU). This is the preferred method; see the sketch after this list.
  2. At the pod level, by running a single pod per instance that handles the prediction requests of all API replicas of all APIs residing on that instance. This may incur significant performance penalties, and the single pod becomes a single point of failure, so it may be undesirable.
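
As an illustration of the first approach, here is a minimal sketch of how a fractional GPU request could surface in a pod spec once a sharing-aware device plugin advertises it as a Kubernetes extended resource. The resource name example.com/gpu-fraction and the 1000-units-per-physical-GPU convention are hypothetical placeholders, not the API of any of the projects linked below; each of those uses its own resource names and units.

# pod spec (sketch) - hypothetical extended resource advertised by a sharing-aware device plugin
apiVersion: v1
kind: Pod
metadata:
  name: example-api
spec:
  containers:
    - name: api
      image: <api-image>
      resources:
        limits:
          # 300 of a hypothetical 1000 units per physical GPU, i.e. the equivalent of gpu: 300m;
          # extended resources must be whole integers and are set on limits
          example.com/gpu-fraction: 300

Under this model the scheduler can pack several such containers onto one physical GPU, while enforcement (or at least accounting) of the fraction is left to the device plugin and, where needed, a scheduler extender.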

Open questions

  • If a pod uses more GPU than requested, is there a way to evict it?

Useful links for the first approach (where the device plugin handles all of this); a sketch based on one of them follows this list:

  1. Is sharing GPU to multiple containers feasible? kubernetes/kubernetes#52757
  2. https://github.com/AliyunContainerService/gpushare-scheduler-extender
  3. https://github.com/sakjain92/Fractional-GPUs
  4. https://github.com/Deepomatic/shared-gpu-nvidia-k8s-device-plugin
  5. https://github.com/tkestack/gpu-manager
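
For reference, several of these projects share a GPU by memory rather than by compute fraction. The gpushare-scheduler-extender (link 2), for example, schedules against an extended resource named aliyun.com/gpu-mem (units depend on the device plugin's configuration, e.g. GiB), so a cortex value like gpu: 300m would have to be translated into a memory amount; the resource name and unit here are recalled from that project's documentation and should be double-checked. A sketch of the container resources under that model:

# container resources (sketch) - memory-based GPU sharing in the style of the gpushare device plugin
resources:
  limits:
    # the container's share of the card, expressed as GPU memory (e.g. 3 GiB of the device's total)
    aliyun.com/gpu-mem: 3

gpu-manager (link 5) appears to take a similar approach, but exposes separate compute and memory resources.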

Metadata

Labels

enhancement (New feature or request), research (Determine technical constraints)
