Submodular Framework for Structured-Sparse Optimal Transport
- URL: http://arxiv.org/abs/2406.04914v1
- Date: Fri, 7 Jun 2024 13:11:04 GMT
- Title: Submodular Framework for Structured-Sparse Optimal Transport
- Authors: Piyushi Manupriya, Pratik Jawanpuria, Karthik S. Gurumoorthy, Saketha Nath Jagarlapudi, Bamdev Mishra
- Abstract summary: Unbalanced optimal transport (UOT) has recently gained much attention due to its flexible framework for handling unnormalized measures and its robustness properties.
In this work, we explore learning (structured) sparse transport plans in the UOT setting, i.e., transport plans that have an upper bound on the number of non-sparse entries in each column.
We propose novel sparsity-constrained UOT formulations building on the recently explored maximum mean discrepancy (MMD) based UOT.
- Score: 7.030105924295838
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unbalanced optimal transport (UOT) has recently gained much attention due to its flexible framework for handling un-normalized measures and its robustness properties. In this work, we explore learning (structured) sparse transport plans in the UOT setting, i.e., transport plans have an upper bound on the number of non-sparse entries in each column (structured sparse pattern) or in the whole plan (general sparse pattern). We propose novel sparsity-constrained UOT formulations building on the recently explored maximum mean discrepancy based UOT. We show that the proposed optimization problem is equivalent to the maximization of a weakly submodular function over a uniform matroid or a partition matroid. We develop efficient gradient-based discrete greedy algorithms and provide the corresponding theoretical guarantees. Empirically, we observe that our proposed greedy algorithms select a diverse support set and we illustrate the efficacy of the proposed approach in various applications.
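As a concrete illustration of the discrete greedy scheme the abstract describes, below is a minimal sketch of greedy selection under a per-column cardinality budget (a partition matroid); a single global budget would give the uniform-matroid case instead. The `gain` callback, the toy cost matrix, and all names are illustrative stand-ins, not the authors' implementation, whose marginal gains come from an MMD-regularized UOT objective.
```python
import numpy as np

def greedy_sparse_support(gain, n_rows, n_cols, k):
    """Greedily pick up to k entries per column of an n_rows x n_cols
    transport plan (a partition-matroid constraint).  `gain` maps a
    candidate entry (i, j) and the current support set to its marginal
    gain; for a (weakly) submodular objective this greedy enjoys an
    approximation guarantee."""
    support = set()
    for j in range(n_cols):                  # partition: one budget per column
        for _ in range(k):
            candidates = [(i, j) for i in range(n_rows) if (i, j) not in support]
            gains = [gain(e, support) for e in candidates]
            best = int(np.argmax(gains))
            if gains[best] <= 0:             # no positive marginal gain left
                break
            support.add(candidates[best])
    return support

# Toy stand-in objective: reward minus cost of each selected entry (modular,
# hence trivially submodular); the paper's objective is an MMD-regularized
# UOT fit rather than this hypothetical cost matrix.
rng = np.random.default_rng(0)
C = rng.random((5, 4))
gain = lambda e, S: 1.0 - C[e]
print(sorted(greedy_sparse_support(gain, 5, 4, k=2)))
```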
Related papers
- Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z)
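For context on the rate-distortion-perception entry above, here is a minimal sketch of the classical Blahut-Arimoto alternating minimization for the plain rate-distortion function of a discrete memoryless source; the paper's schemes additionally handle an $f$-divergence perception constraint, which this sketch omits entirely.
```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Alternate between the output marginal q(x_hat) and the test
    channel Q(x_hat | x) at inverse temperature beta; returns the
    resulting (rate, distortion) pair in nats."""
    n, m = d.shape
    q = np.full(m, 1.0 / m)                  # output marginal
    for _ in range(n_iter):
        # channel update: Q(x_hat|x) proportional to q(x_hat) exp(-beta d)
        Q = q[None, :] * np.exp(-beta * d)
        Q /= Q.sum(axis=1, keepdims=True)
        # marginal update: q(x_hat) = sum_x p(x) Q(x_hat|x)
        q = p_x @ Q
    rate = np.sum(p_x[:, None] * Q * np.log(Q / q[None, :]))
    dist = np.sum(p_x[:, None] * Q * d)
    return rate, dist

p_x = np.array([0.5, 0.5])                   # binary uniform source
d = 1.0 - np.eye(2)                          # Hamming distortion
print(blahut_arimoto(p_x, d, beta=2.0))
```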
- Floorplanning of VLSI by Mixed-Variable Optimization [42.82770651937298]
This paper proposes memetic algorithms to solve mixed-variable floorplanning problems.
The proposed algorithms outperform several celebrated B*-tree based floorplanning algorithms.
arXiv Detail & Related papers (2024-01-27T06:34:16Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Distributed Learning and Democratic Embeddings: Polynomial-Time Source Coding Schemes Can Achieve Minimax Lower Bounds for Distributed Gradient Descent under Communication Constraints [46.17631511884969]
We consider the problem of compressing a vector in n-dimensional Euclidean space, subject to a bit budget of R bits per dimension.
We show that Democratic and Near-Democratic source-coding schemes are (near) optimal in the sense that the covering efficiency of the resulting quantizer is either dimension independent, or has a very weak logarithmic dependence.
We propose a distributed optimization algorithm: DGD-DEF, which employs our proposed coding strategy, and achieves the minimax optimal convergence rate to within (near) constant factors.
arXiv Detail & Related papers (2021-03-13T00:04:11Z)
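As a point of reference for the bit-budget setting in the entry above, here is a minimal sketch of the baseline uniform scalar quantizer that spends R bits on each coordinate independently; it is not the democratic coding scheme itself, whose key step is a rotation that spreads the vector's energy evenly before quantizing. The value range and all names are illustrative assumptions.
```python
import numpy as np

def uniform_quantize(v, R, lo=-1.0, hi=1.0):
    """Baseline scalar quantizer with a budget of R bits per dimension:
    each coordinate of v (assumed to lie in [lo, hi]) is mapped to the
    midpoint of one of 2**R uniformly spaced cells."""
    levels = 2 ** R
    step = (hi - lo) / levels
    idx = np.clip(np.floor((v - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

v = np.array([0.9, -0.3, 0.05, -0.99])
print(uniform_quantize(v, R=3))              # 8 levels per dimension
```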
- Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
- Planning with Submodular Objective Functions [118.0376288522372]
We study planning with submodular objective functions, where instead of maximizing cumulative reward, the goal is to maximize the value induced by a submodular function.
Our framework subsumes standard planning and submodular objective with cardinality constraints as special cases.
arXiv Detail & Related papers (2020-10-22T16:55:12Z)
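To make the "submodular value instead of cumulative reward" distinction in the entry above concrete, here is a minimal sketch using a toy coverage function, the textbook example of submodularity; the universe and action sets are hypothetical.
```python
# A coverage objective values a plan by the number of distinct items its
# actions cover, so marginal gains shrink as the selected set grows
# (diminishing returns), unlike an additive cumulative reward.
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}

def f(S):
    return len(set().union(*(cover[x] for x in S))) if S else 0

A, B = {"a"}, {"a", "b"}           # A is a subset of B
gain_A = f(A | {"c"}) - f(A)       # adding "c" to the smaller set: 2
gain_B = f(B | {"c"}) - f(B)       # adding "c" to the larger set: 1
assert gain_A >= gain_B            # submodularity: gains diminish
print(gain_A, gain_B)
```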
- Efficient Robust Optimal Transport with Application to Multi-Label Classification [12.521494095948068]
We model the feature-feature relationship via a symmetric positive semi-definite Mahalanobis metric in the OT cost function.
We view the resulting optimization problem as a non-linear OT problem, which we solve using the Frank-Wolfe algorithm.
Empirical results in discriminative learning settings, such as tag prediction and multi-class classification, illustrate the strong performance of our approach.
arXiv Detail & Related papers (2020-10-22T16:43:52Z)
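Since the entry above solves its non-linear OT problem with Frank-Wolfe, here is a minimal generic sketch of the Frank-Wolfe template (linear minimization oracle plus convex averaging), shown on the probability simplex rather than the paper's actual feasible set; the toy objective and all names are illustrative assumptions.
```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=100):
    """Generic Frank-Wolfe: at each step, minimize the linearized
    objective over the feasible set (for the simplex, the linear
    minimization oracle picks the coordinate with the smallest
    gradient) and move toward that vertex."""
    x = x0.copy()
    for t in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0                # LMO over the simplex
        gamma = 2.0 / (t + 2.0)              # standard step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy problem: project y onto the simplex, i.e. minimize ||x - y||^2.
y = np.array([0.2, 0.5, 0.9])
grad = lambda x: 2.0 * (x - y)
x0 = np.full(3, 1.0 / 3.0)
print(frank_wolfe_simplex(grad, x0))
```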
- Jump Operator Planning: Goal-Conditioned Policy Ensembles and Zero-Shot Transfer [71.44215606325005]
We propose a novel framework called Jump-Operator Dynamic Programming for quickly computing solutions within a super-exponential space of sequential sub-goal tasks.
This approach involves controlling an ensemble of reusable goal-conditioned policies functioning as temporally extended actions.
We then identify classes of objective functions on this subspace whose solutions are invariant to the grounding, resulting in optimal zero-shot transfer.
arXiv Detail & Related papers (2020-07-06T05:13:20Z)
- Distributionally Robust Bayesian Optimization [121.71766171427433]
We present a novel distributionally robust Bayesian optimization algorithm (DRBO) for zeroth-order, noisy optimization.
Our algorithm provably obtains sub-linear robust regret in various settings.
We demonstrate the robust performance of our method on both synthetic and real-world benchmarks.
arXiv Detail & Related papers (2020-02-20T22:04:30Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm provably achieves the best-available convergence rate for objectives that do not satisfy the Polyak-Łojasiewicz (PL) condition, while simultaneously outperforming existing algorithms on PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
- Recovery Bounds on Class-Based Optimal Transport: A Sum-of-Norms Regularization Framework [21.037720934987483]
We propose a convex OT program with a sum-of-norms regularization term, which provably recovers the underlying class structure under geometric assumptions.
We provide a novel argument for the uniqueness of the optimum even in the absence of strong convexity.
Our experiments show that the new regularizer not only results in a better preservation of the class structure in the data but also yields additional robustness to the data geometry.
arXiv Detail & Related papers (2019-03-09T18:54:21Z)
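Here is a minimal sketch of the standard building block behind sum-of-norms penalties like the one in the entry above: the proximal operator of the penalty, which is column-wise (block) soft-thresholding. This is a generic illustration, not the paper's solver; the paper groups transport-plan columns by class, which is only indicated in a comment, and the toy matrix is hypothetical.
```python
import numpy as np

def block_soft_threshold(V, lam):
    """Proximal operator of lam * (sum of column norms): each column is
    shrunk toward zero and set exactly to zero once its norm falls
    below lam.  Proximal methods for sum-of-norms regularized programs
    apply this per group (e.g., per class-grouped block of columns)."""
    norms = np.linalg.norm(V, axis=0, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * V

T = np.array([[0.8, 0.05],
              [0.6, 0.02]])                  # hypothetical 2-column plan
print(block_soft_threshold(T, lam=0.1))     # the weak column is zeroed out
```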