
Importing POT consumes 22G GPU memory. [ONLY FOR 0.9.1] #516


Description

@HowardZJU

Describe the bug

This seems quite strange to me: simply importing POT drives GPU memory usage to 23172MiB / 24220MiB on GPU:0 and 814MiB / 24220MiB on each of the other GPUs.

To Reproduce

Steps to reproduce the behavior:

import torch
import ot

then run nvidia-smi. The import itself prints the following warnings:

2023-08-28 00:06:28.565778: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/homeold/home/wangh/miniconda3/envs/mvi/lib/python3.10/site-packages/ot/backend.py:2998: UserWarning: To use TensorflowBackend, you need to activate the tensorflow numpy API. You can activate it by running:
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()
register_backend(TensorflowBackend())
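
For reference, the same check can be done from inside the Python process instead of through nvidia-smi. This is only a measurement sketch added here, not part of the original report; it relies on torch.cuda.mem_get_info(), which reports device-wide free/total memory and so should also reflect whatever the import of ot allocates:

# Measurement sketch (not from the original report): compare device-wide
# free memory on GPU 0 before and after `import ot`.
import torch

torch.cuda.init()  # create torch's CUDA context first, so only the import delta is measured
free_before, total = torch.cuda.mem_get_info(0)  # values in bytes

import ot  # the import under investigation

free_after, _ = torch.cuda.mem_get_info(0)
print(f"free before import ot: {free_before / 2**20:.0f} MiB of {total / 2**20:.0f} MiB")
print(f"free after  import ot: {free_after / 2**20:.0f} MiB")
print(f"allocated by import:   {(free_before - free_after) / 2**20:.0f} MiB")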

Screenshots

Three attached screenshots of the nvidia-smi output showing the memory usage above: 截图_20230828001502, 截图_20230828001510, 截图_20230828001524.

Environment (please complete the following information):

  • OS (e.g. MacOS, Windows, Linux): Linux
  • Python version: 3.10 or 3.9
  • How was POT installed (source, pip, conda): pip
  • POT: 0.9.1
  • Build command you used (if compiling from source): pip install POT
  • Only for GPU related bugs:
    • CUDA version: 11.4
    • GPU: RTX TITAN
    • Any other relevant information:

When I reinstall POT 0.9.0, the issue is resolved.
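
A minimal way to pin the previous release and confirm the installed version (the exact pin spec shown here is only an example, not the precise command from the report):

# Example downgrade and check; the pin spec is illustrative.
# pip install "POT==0.9.0"
import ot
print(ot.__version__)  # expected: 0.9.0 after the downgrade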
