[BLD] Add support for NumPy 2.0 wheels #629


Merged
1 change: 1 addition & 0 deletions RELEASES.md
@@ -3,6 +3,7 @@
## 0.9.4dev

#### New features
+ NumPy 2.0 support is added (PR #629)
+ New quantized FGW solvers `ot.gromov.quantized_fused_gromov_wasserstein`, `ot.gromov.quantized_fused_gromov_wasserstein_samples` and `ot.gromov.quantized_fused_gromov_wasserstein_partitioned` (PR #603)
+ `ot.gromov._gw.solve_gromov_linesearch` now has an argument to specify if the matrices are symmetric in which case the computation can be done faster (PR #607).
+ Continuous entropic mapping (PR #613)
12 changes: 6 additions & 6 deletions ot/da.py
@@ -497,7 +497,7 @@ class label

if (ys is not None) and (yt is not None):

- if self.limit_max != np.infty:
+ if self.limit_max != np.inf:
self.limit_max = self.limit_max * nx.max(self.cost_)

# missing_labels is a (ns, nt) matrix of {0, 1} such that
@@ -519,7 +519,7 @@ class label
cost_correction = label_match * missing_labels * self.limit_max
# this operation is necessary because 0 * Inf = NAN
# thus is irrelevant when limit_max is finite
- cost_correction = nx.nan_to_num(cost_correction, -np.infty)
+ cost_correction = nx.nan_to_num(cost_correction, -np.inf)
self.cost_ = nx.maximum(self.cost_, cost_correction)

# distribution estimation
@@ -1067,7 +1067,7 @@ class SinkhornTransport(BaseTransport):
method from :ref:`[66]
<references-sinkhorntransport>` and :ref:`[19]
<references-sinkhorntransport>`.
- limit_max: float, optional (default=np.infty)
+ limit_max: float, optional (default=np.inf)
Controls the semi-supervised mode. Transport between labeled source
and target samples of different classes will exhibit a cost defined
by this variable
@@ -1109,7 +1109,7 @@ def __init__(self, reg_e=1., method="sinkhorn_log", max_iter=1000,
tol=10e-9, verbose=False, log=False,
metric="sqeuclidean", norm=None,
distribution_estimation=distribution_estimation_uniform,
- out_of_sample_map='continuous', limit_max=np.infty):
+ out_of_sample_map='continuous', limit_max=np.inf):

if out_of_sample_map not in ['ferradans', 'continuous']:
raise ValueError('Unknown out_of_sample_map method')
@@ -1417,7 +1417,7 @@ class SinkhornLpl1Transport(BaseTransport):
The kind of out of sample mapping to apply to transport samples
from a domain into another one. Currently the only possible option is
"ferradans" which uses the method proposed in :ref:`[6] <references-sinkhornlpl1transport>`.
- limit_max: float, optional (default=np.infty)
+ limit_max: float, optional (default=np.inf)
Controls the semi supervised mode. Transport between labeled source
and target samples of different classes will exhibit a cost defined by
limit_max.
@@ -1450,7 +1450,7 @@ def __init__(self, reg_e=1., reg_cl=0.1,
tol=10e-9, verbose=False,
metric="sqeuclidean", norm=None,
distribution_estimation=distribution_estimation_uniform,
- out_of_sample_map='ferradans', limit_max=np.infty):
+ out_of_sample_map='ferradans', limit_max=np.inf):
self.reg_e = reg_e
self.reg_cl = reg_cl
self.max_iter = max_iter
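The code comment in the hunk above is the key subtlety: with an infinite `limit_max`, the zero entries of the label mask multiply inf and produce NaN, which `nan_to_num` then maps to -inf so those entries never win the elementwise maximum. A minimal plain-NumPy sketch of that behavior (the library itself goes through the `nx` backend wrapper instead):

```python
import numpy as np

# IEEE 754: multiplying a masked-out zero by an infinite limit gives nan,
# not zero, so the correction matrix has to be cleaned up first.
print(0.0 * np.inf)  # nan

# Plain-NumPy stand-in for nx.nan_to_num(cost_correction, -np.inf):
# spurious nans become -inf, so nx.maximum(cost, correction) ignores them.
cost_correction = np.array([[np.nan, 3.0], [1.0, np.nan]])
print(np.nan_to_num(cost_correction, nan=-np.inf))
```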
2 changes: 1 addition & 1 deletion ot/regpath.py
@@ -762,7 +762,7 @@ def semi_relaxed_path(a: np.array, b: np.array, C: np.array, reg=1e-4,
active_index.append(i * m + j)
gamma_list = []
t_list = []
- current_gamma = np.Inf
+ current_gamma = np.inf
augmented_H0 = construct_augmented_H(active_index, m, Hc, HrHr)
add_col = np.array([])
id_pop = -1
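Every change above is the same one-character fix: NumPy 2.0's Python API cleanup (NEP 52) removed the long-deprecated aliases np.infty, np.Inf, and np.NaN, while np.inf and np.nan are the spellings that work on every NumPy version. A small sketch illustrating the difference, assuming a NumPy 2.x runtime for the removed-alias branch:

```python
import numpy as np

# np.inf works identically on NumPy 1.x and 2.x.
current_gamma = np.inf

# The removed aliases raise AttributeError on NumPy 2.x,
# while still resolving on 1.x releases.
for alias in ("infty", "Inf", "NaN"):
    try:
        getattr(np, alias)
        print(f"np.{alias} still exists (NumPy 1.x)")
    except AttributeError:
        print(f"np.{alias} was removed (NumPy 2.x)")
```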
9 changes: 7 additions & 2 deletions pyproject.toml
@@ -1,3 +1,8 @@
[build-system]
- requires = ["setuptools", "wheel", "oldest-supported-numpy", "cython>=0.23"]
- build-backend = "setuptools.build_meta"
+ requires = [
+     "setuptools>=42",
+     "oldest-supported-numpy; python_version < '3.9'",
+     "numpy>=2.0.0; python_version >= '3.9'",
+     "cython>=0.23"
+ ]
+ build-backend = "setuptools.build_meta"
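The two environment markers split the build dependency by interpreter: oldest-supported-numpy pins the oldest ABI-compatible NumPy on Pythons that predate NumPy 2.0 wheels, while numpy>=2.0.0 is used everywhere else, since extensions compiled against NumPy 2.0 also run against 1.x at runtime. A short sketch of how such markers evaluate, assuming the third-party `packaging` library:

```python
from packaging.markers import Marker

# The markers from the requires list above, checked for one interpreter.
old = Marker("python_version < '3.9'")
new = Marker("python_version >= '3.9'")

env = {"python_version": "3.11"}
print(old.evaluate(env))  # False -> oldest-supported-numpy is skipped
print(new.evaluate(env))  # True  -> the build installs numpy>=2.0.0
```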
1 change: 0 additions & 1 deletion setup.py
@@ -69,7 +69,6 @@
license='MIT',
scripts=[],
data_files=[],
- setup_requires=["oldest-supported-numpy", "cython>=0.23"],
@matthewfeickert (Contributor, Author) commented on Jun 18, 2024:
> Could you also update the setup.py? I know the toml file is enough but it is important to remain synchronized.

`setup_requires` is deprecated, and trying to use it actually breaks a NumPy 2.0 build, so it needs to be removed. The `[build-system]` information in `pyproject.toml` supersedes it.

cf. https://learn.scientific-python.org/development/guides/packaging-classic/#pep-517518-support-high-priority
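As a quick sanity check that a PEP 517 front end now sees the intended build requirements with `setup_requires` gone, the `[build-system]` table can be read directly; a minimal sketch using the stdlib `tomllib` (Python 3.11+):

```python
import tomllib  # stdlib TOML parser since Python 3.11

# Read the same [build-system] table that pip / python -m build consume.
with open("pyproject.toml", "rb") as f:
    build_system = tomllib.load(f)["build-system"]

print(build_system["requires"])       # build deps, replacing setup_requires
print(build_system["build-backend"])  # "setuptools.build_meta"
```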

install_requires=["numpy>=1.16", "scipy>=1.6"],
python_requires=">=3.6",
@matthewfeickert (Contributor, Author) commented on Jun 18, 2024:

The requires-python metadata here is incorrect compared to reality: pot v0.9.3 only provides wheels for Python 3.7+, and the tests only run on Python 3.8+:

python-version: ["3.8", "3.9", "3.10", "3.11"]

As @henryiii has covered elsewhere (I can't remember which blog post), the purpose of requires-python (python_requires in setuptools) is to guard older CPython versions from installing releases that could contain unrunnable code, and to avoid unhelpful backtracking during dependency resolution.

This should get updated, but it is a separate (adjacent) issue from the NumPy 2.0 scope, so I'll open PR #630 for it.
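A minimal sketch of what the corrected guard could look like, assuming Python 3.8 as the actual floor (the real bump is deferred to PR #630 and may differ):

```python
# Hypothetical excerpt of setup.py, reduced to the fields discussed here.
from setuptools import setup

setup(
    name="POT",
    # Guard: pip on CPython < 3.8 will never resolve to this release.
    python_requires=">=3.8",
    install_requires=["numpy>=1.16", "scipy>=1.6"],
)
```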

A Collaborator commented:
thanks, we need to clean up stuff

classifiers=[