RELEASES.md (15 additions & 6 deletions)
@@ -4,9 +4,9 @@
*December 2023*

This new release contains several new features and bug fixes. Among the new features
- we have a new solver for estimation of nearest Brenier potentials (SSNB) that can be used for O mapping estimation (on small problems), new Bregman Alternated Projected Gradient solvers for GW and FGW, and new solvers for Bures-Wasserstein barycenters. Finally we have a new exact line-search for (F)GW solvers with KL loss that can be used to improve the convergence of the solvers.
+ we have a new solver for estimation of nearest Brenier potentials (SSNB) that can be used for OT mapping estimation (on small problems), new Bregman Alternated Projected Gradient solvers for GW and FGW, and new solvers for Bures-Wasserstein barycenters. We also provide a first solver for Low Rank Sinkhorn that will be used to provide low-rank OT extensions in the next releases. Finally we have a new exact line-search for (F)GW solvers with KL loss that can be used to improve the convergence of the solvers.

- We also have a new `LazyTensor` class that can be used to model OT plans and low rank tensors in large scale OT. This class is used y the new wrapper for `geomloss` Sinkhorn solver on empirical samples that can lead to x10/x100 speedups on CPU or GPU and have a lazy implementation that allows solving very large problems of a few millions samples.
+ We also have a new `LazyTensor` class that can be used to model OT plans and low rank tensors in large scale OT. This class is used to return the plan for the new wrapper for the `geomloss` Sinkhorn solver on empirical samples, which can lead to x10/x100 speedups on CPU or GPU and has a lazy implementation that allows solving very large problems of a few million samples.

We also have a new API for solving OT problems from empirical samples with `ot.solve_sample`. Finally we have a new API for Gromov-Wasserstein solvers with the `ot.solve_gromov` function that centralizes most of the (F)GW methods with unified notation. Some examples of how to use the new API are shown below:

@@ -22,6 +22,12 @@ sol = ot.solve_sample(xs, xt, reg= 1, metric='euclidean') # sinkhorn with euclid

sol = ot.solve_sample(xs, xt, reg=1, method='geomloss') # faster sinkhorn solver on CPU/GPU

+ sol = ot.solve_sample(xs, xt, method='factored', rank=10) # compute factored OT
+
+ sol = ot.solve_sample(xs, xt, method='lowrank', rank=10) # compute low-rank Sinkhorn OT
@@ -30,21 +36,24 @@ sol = ot.solve_gromov(Cs,Ct) # Exact GW between samples with uniform weights
M = ot.dist(xs, xt) # compute cost matrix

# Exact FGW between samples with uniform weights
- sol = ot.solve_gromov(Cs, Ct, M, loss='KL') # FGW with KL data fitting
+ sol = ot.solve_gromov(Cs, Ct, M, loss='KL', alpha=0.7) # FGW with KL data fitting


# recover solutions objects
P = sol.plan # OT plan
u, v = sol.potentials # dual variables
value = sol.value # OT value

+ # for GW and FGW
+ value_linear = sol.value_linear # linear part of the loss
+ value_quad = sol.value_quad # quadratic part of the loss

```

- Users are encouraged to use the new API but it might still be subjects to small changes before the release of POT 1.0 .
- This class is used in the new BAPG solvers for GW and FGW that can be used to solve large scale OT problems with a small memory footprint.
+ Users are encouraged to use the new API (it is much simpler), but it might still be subject to small changes before the release of POT 1.0.
+

- We also fixed a number of issues, the most pressing being a problem of GPU memory allocation when pytorch is installed that will not happen now thanks to Lazy initialization. We now also have the possibility to deactivate some backends which prvent python from importing them using environment variables.
+ We also fixed a number of issues, the most pressing being a problem of GPU memory allocation when PyTorch is installed, which will no longer happen thanks to lazy initialization of the backends. We now also have the possibility to deactivate some backends using environment variables, which prevents POT from importing them and can lead to a large import speedup.
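As a hedged illustration of the backend deactivation mentioned in the last paragraph: a backend is skipped by setting an environment variable before POT is imported. The variable name used below (`POT_BACKEND_DISABLE_PYTORCH`) is an assumption to verify against `ot/backend.py` in your POT version; this is a minimal sketch, not the definitive interface.

```python
# Minimal sketch: skip the torch backend so `import ot` does not touch the GPU.
# The environment-variable name is an assumption; check ot/backend.py for the
# names recognized by your POT version.
import os

os.environ["POT_BACKEND_DISABLE_PYTORCH"] = "1"  # must be set before the first `import ot`

import ot  # noqa: E402  (import deliberately placed after the env setup)
from ot.backend import get_backend_list  # noqa: E402

# list the backends POT actually instantiated; the torch backend should be absent
print([type(nx).__name__ for nx in get_backend_list()])
```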
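And a companion sketch for the `LazyTensor` / `geomloss` wrapper paragraph above: the `lazy=True` flag and the `lazy_plan` attribute used here are assumptions about the new `ot.solve_sample` API and should be checked against the POT documentation; the `geomloss` path additionally requires the optional `geomloss` and `torch` dependencies.

```python
# Minimal sketch of lazy entropic OT through the geomloss wrapper.
# Assumptions: ot.solve_sample accepts lazy=True, the result exposes a
# `lazy_plan` LazyTensor whose blocks can be sliced on demand, and the
# optional geomloss/torch dependencies are installed.
import numpy as np
import ot

rng = np.random.RandomState(0)
xs = rng.randn(5000, 2)        # source samples
xt = rng.randn(5000, 2) + 1.0  # target samples

sol = ot.solve_sample(xs, xt, reg=1, method="geomloss", lazy=True)

print(sol.value)               # OT value returned by the geomloss solver
lazy_P = sol.lazy_plan         # plan as a LazyTensor, never fully materialized
block = lazy_P[:100, :100]     # only this block of the plan is computed
print(block.shape)
```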