Recovery Bounds on Class-Based Optimal Transport: A Sum-of-Norms
Regularization Framework
- URL: http://arxiv.org/abs/1903.03850v3
- Date: Mon, 22 May 2023 16:00:53 GMT
- Title: Recovery Bounds on Class-Based Optimal Transport: A Sum-of-Norms
Regularization Framework
- Authors: Arman Rahbar, Ashkan Panahi, Morteza Haghir Chehreghani, Devdatt
Dubhashi, Hamid Krim
- Abstract summary: We propose a convex OT program with a sum-of-norms regularization term, which provably recovers the underlying class structure under geometric assumptions.
We provide a novel argument for the uniqueness of the optimum even in the absence of strong convexity.
Our experiments show that the new regularizer not only results in a better preservation of the class structure in the data but also yields additional robustness to the data geometry.
- Score: 21.037720934987483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a novel theoretical framework for understanding OT schemes
respecting a class structure. For this purpose, we propose a convex OT program
with a sum-of-norms regularization term, which provably recovers the underlying
class structure under geometric assumptions. Furthermore, we derive an
accelerated proximal algorithm with a closed-form projection and proximal
operator scheme, thereby affording a more scalable algorithm for computing
optimal transport plans. We provide a novel argument for the uniqueness of the
optimum even in the absence of strong convexity. Our experiments show that the
new regularizer not only results in a better preservation of the class
structure in the data but also yields additional robustness to the data
geometry, compared to previous regularizers.
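The abstract highlights a closed-form proximal operator as the key to scalability. A minimal sketch of what such an operator looks like for a sum-of-norms penalty is block soft-thresholding; the grouping below (columns of the transport plan) is an assumption for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def prox_sum_of_norms(V, tau):
    """Closed-form prox of tau * sum_j ||v_j||_2, applied column-wise.

    Each column is shrunk toward zero by tau in Euclidean norm and
    zeroed out entirely when its norm falls below tau (block
    soft-thresholding). The column grouping here is a hypothetical
    choice for illustration.
    """
    norms = np.linalg.norm(V, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return V * scale
```

Because the operator is elementwise per group, each proximal step costs only O(nm) for an n-by-m plan, which is what makes an accelerated proximal scheme practical here.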
Related papers
- e-COP : Episodic Constrained Optimization of Policies [12.854752753529151]
We present the first policy optimization algorithm for constrained Reinforcement Learning (RL) in episodic (finite horizon) settings.
We show that our algorithm has similar or better performance than SoTA (non-episodic) algorithms adapted for the episodic setting.
arXiv Detail & Related papers (2024-06-13T20:12:09Z) - Submodular Framework for Structured-Sparse Optimal Transport [7.030105924295838]
Unbalanced optimal transport (UOT) has recently gained much attention due to its flexible framework for handling unnormalized measures and its robustness.
In this work, we explore learning (structured) sparse transport plans in the UOT setting, i.e., transport plans with an upper bound on the number of non-zero entries in each column.
We propose novel sparsity-constrained UOT formulations building on the recently explored mean discrepancy based UOT.
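A column-wise sparsity constraint of this kind can be illustrated by a simple hard-thresholding projection that keeps the k largest entries per column; this is only a sketch of the constraint, not the paper's submodular formulation:

```python
import numpy as np

def project_column_sparse(P, k):
    """Keep the k largest entries in each column of P, zeroing the rest.

    This is a naive hard-thresholding illustration of a per-column
    sparsity budget; the referenced paper optimizes under this
    constraint with a more involved submodular framework.
    """
    out = np.zeros_like(P)
    idx = np.argsort(P, axis=0)[-k:, :]  # row indices of k largest per column
    cols = np.arange(P.shape[1])
    out[idx, cols] = P[idx, cols]
    return out
```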
arXiv Detail & Related papers (2024-06-07T13:11:04Z) - ENOT: Expectile Regularization for Fast and Accurate Training of Neural Optimal Transport [3.0237149871998095]
We present a new approach to accurately and efficiently estimating optimal transportation plan.
It is called Expectile-Regularised Neural Optimal Transport (ENOT).
ENOT enforces binding conditions on the learning process of dual potentials.
arXiv Detail & Related papers (2024-03-06T15:15:42Z) - Generative Models for Anomaly Detection and Design-Space Dimensionality
Reduction in Shape Optimization [0.0]
Our work presents a novel approach to shape optimization, with the twofold objective to improve the efficiency of global algorithms and to promote the generation of high-quality designs.
This is accomplished by reducing the number of the original design variables defining a new reduced subspace where the geometrical variance is maximized.
From the numerical results, the new framework improves the convergence of global optimization algorithms, while only designs with high-quality geometrical features are generated.
arXiv Detail & Related papers (2023-08-08T04:57:58Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Adapting to Misspecification in Contextual Bandits [82.55565343668246]
We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown misspecification level.
arXiv Detail & Related papers (2021-07-12T21:30:41Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which converges to a stationary point.
The algorithm applies not only to general $p$-normed spaces, but also to general $p$-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Structure Adaptive Algorithms for Stochastic Bandits [22.871155520200773]
We study reward maximisation in a class of structured multi-armed bandit problems.
The mean rewards of arms satisfy some given structural constraints.
We develop algorithms from instance-dependent lower-bounds using iterative saddle-point solvers.
arXiv Detail & Related papers (2020-07-02T08:59:54Z) - A General Framework for Consistent Structured Prediction with Implicit
Loss Embeddings [113.15416137912399]
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
arXiv Detail & Related papers (2020-02-13T10:30:04Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.