Constrained Sliced Wasserstein Embedding
- URL: http://arxiv.org/abs/2506.02203v1
- Date: Mon, 02 Jun 2025 19:43:40 GMT
- Title: Constrained Sliced Wasserstein Embedding
- Authors: Navid NaderiAlizadeh, Darian Salehi, Xinran Liu, Soheil Kolouri
- Abstract summary: We introduce a constrained learning approach to optimize the slicing directions for SW distances. We demonstrate how this constrained slicing approach can be applied to pool high-dimensional embeddings into fixed-length permutation-invariant representations.
- Score: 15.569545184712942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sliced Wasserstein (SW) distances offer an efficient method for comparing high-dimensional probability measures by projecting them onto multiple 1-dimensional probability distributions. However, identifying informative slicing directions has proven challenging, often necessitating a large number of slices to achieve desirable performance and thereby increasing computational complexity. We introduce a constrained learning approach to optimize the slicing directions for SW distances. Specifically, we constrain the 1D transport plans to approximate the optimal plan in the original space, ensuring meaningful slicing directions. By leveraging continuous relaxations of these transport plans, we enable a gradient-based primal-dual approach to train the slicer parameters, alongside the remaining model parameters. We demonstrate how this constrained slicing approach can be applied to pool high-dimensional embeddings into fixed-length permutation-invariant representations. Numerical results on foundation models trained on images, point clouds, and protein sequences showcase the efficacy of the proposed constrained learning approach in learning more informative slicing directions. Our implementation code can be found at https://github.com/Stranja572/constrainedswe.
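For concreteness, below is a minimal NumPy sketch (illustrative names only, not the repository's API) of the two ingredients the paper builds on: a Monte Carlo sliced Wasserstein distance with random slicing directions, and sliced pooling of a variable-size set into a fixed-length, permutation-invariant vector. The paper's contribution replaces the random directions with learned ones, trained via a primal-dual method under the constraint that the induced 1D transport plans approximate the optimal plan in the original space.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_slices=64, p=2, seed=None):
    """Monte Carlo estimate of the p-sliced Wasserstein distance between two
    equal-size point clouds X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random unit slicing directions (the paper learns these under constraints instead).
    theta = rng.normal(size=(n_slices, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both clouds onto each direction; sorting solves each 1D transport problem.
    Xp = np.sort(X @ theta.T, axis=0)  # (n, n_slices)
    Yp = np.sort(Y @ theta.T, axis=0)
    return float(np.mean(np.abs(Xp - Yp) ** p) ** (1.0 / p))

def swe_pooling(X, theta, n_levels=32):
    """Pool a variable-size set X of shape (n, d) into a fixed-length,
    permutation-invariant vector: project onto the slicing directions theta,
    sort, and read each sorted profile (an empirical quantile function) at
    fixed quantile levels so the output size no longer depends on n."""
    proj = np.sort(X @ theta.T, axis=0)                # (n, n_slices)
    levels = (np.arange(n_levels) + 0.5) / n_levels    # fixed evaluation grid
    src = (np.arange(X.shape[0]) + 0.5) / X.shape[0]
    pooled = np.stack([np.interp(levels, src, proj[:, k])
                       for k in range(proj.shape[1])])
    return pooled.ravel()                              # length n_slices * n_levels

# Tiny usage example with random 128-D point clouds.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 128))
Y = rng.normal(loc=0.5, size=(100, 128))
theta = rng.normal(size=(16, 128))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)
print(sliced_wasserstein(X, Y, seed=0))
print(swe_pooling(rng.normal(size=(37, 128)), theta).shape)  # set of 37 points -> fixed length 16*32
```

Sorting the projections is what makes the pooled vector permutation-invariant, and evaluating the sorted profiles at a fixed grid of quantile levels is what makes its length independent of the input set size.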
Related papers
- Differentiable Generalized Sliced Wasserstein Plans [10.764247782316984]
Optimal Transport (OT) has attracted significant interest in the machine learning community. A novel slicing scheme, dubbed min-SWGG, lifts a single one-dimensional plan back to the original multidimensional space. We show that min-SWGG inherits typical limitations of slicing methods. We propose a differentiable approximation scheme to efficiently identify the optimal slice, even in high-dimensional settings.
arXiv Detail & Related papers (2025-05-28T07:18:08Z) - Enhancing Path Planning Performance through Image Representation Learning of High-Dimensional Configuration Spaces [0.4143603294943439]
We present a novel method for accelerating path-planning tasks in unknown scenes with obstacles. We approximate the distribution of waypoints for a collision-free path using the Rapidly-exploring Random Tree algorithm. Our experiments demonstrate promising results in accelerating path-planning tasks under critical time constraints.
arXiv Detail & Related papers (2025-01-11T21:14:52Z) - Expected Sliced Transport Plans [9.33181953215826]
We propose a "lifting" operation to extend one-dimensional optimal transport plans back to the original space of the measures.
We prove that using the EST plan to weight the sum of the individual Euclidean costs for moving from one point to another results in a valid metric between the input discrete probability measures.
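As a rough illustration of this lifting idea (a hedged sketch based only on the snippet above, not the paper's exact construction, assuming uniform weights and equal-size clouds), each slicing direction induces a monotone 1D matching; averaging the lifted matchings gives an expected plan that can weight pairwise Euclidean costs in the original space:

```python
import numpy as np

def expected_sliced_plan_cost(X, Y, n_slices=128, seed=0):
    """Average the couplings obtained by lifting each direction's 1D optimal
    matching back to the original space, then use that 'expected' plan to weight
    the pairwise Euclidean costs between X and Y (both of shape (n, d))."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = rng.normal(size=(n_slices, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    plan = np.zeros((n, n))
    for t in theta:
        ix, iy = np.argsort(X @ t), np.argsort(Y @ t)  # 1D OT = monotone matching
        plan[ix, iy] += 1.0 / (n * n_slices)           # lift the matching to a coupling
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return float(np.sum(plan * cost))                  # plan-weighted Euclidean cost
```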
arXiv Detail & Related papers (2024-10-16T02:44:36Z) - Sliced Wasserstein with Random-Path Projecting Directions [49.802024788196434]
We propose an optimization-free slicing distribution that provides a fast sampling for the Monte Carlo estimation of expectation.
We derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein, namely the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW).
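A hedged sketch of one plausible reading of "random-path" directions (illustrative only): instead of sampling directions uniformly on the sphere, draw each direction along the segment between a random point of one measure and a random point of the other, which biases slices toward where mass actually has to move.

```python
import numpy as np

def random_path_directions(X, Y, n_slices=64, seed=0):
    """Sample slicing directions as normalized differences between randomly paired
    points of X and Y; these can replace the Gaussian directions in a standard
    Monte Carlo sliced Wasserstein estimator."""
    rng = np.random.default_rng(seed)
    i = rng.integers(len(X), size=n_slices)
    j = rng.integers(len(Y), size=n_slices)
    diff = X[i] - Y[j]
    return diff / (np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12)
```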
arXiv Detail & Related papers (2024-01-29T04:59:30Z) - Measure transfer via stochastic slicing and matching [2.8851756275902476]
This paper studies iterative schemes for measure transfer and approximation problems defined through a slicing-and-matching procedure. The main contribution of this paper is an almost sure convergence proof for slicing-and-matching schemes.
arXiv Detail & Related papers (2023-07-11T18:12:30Z) - Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
arXiv Detail & Related papers (2023-02-14T22:12:16Z) - Amortized Projection Optimization for Sliced Wasserstein Generative Models [17.196369579631074]
We propose to utilize the learning-to-optimize technique, also known as amortized optimization, to predict the informative direction for any given pair of mini-batch probability measures.
To the best of our knowledge, this is the first work that bridges amortized optimization and sliced Wasserstein generative models.
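A hedged PyTorch-style sketch of the amortization idea (illustrative class and layer sizes, not the paper's architecture): a small network maps a pair of mini-batches to a slicing direction, so the direction is predicted in one forward pass rather than optimized from scratch for every pair of measures.

```python
import torch
import torch.nn as nn

class AmortizedSlicer(nn.Module):
    """Predict an informative unit slicing direction from two mini-batches,
    summarized here (for simplicity) by their means."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, X, Y):  # X, Y: (n, dim) mini-batches
        h = torch.cat([X.mean(dim=0), Y.mean(dim=0)], dim=-1)
        theta = self.net(h)
        return theta / theta.norm().clamp_min(1e-12)  # unit direction
```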
arXiv Detail & Related papers (2022-03-25T02:08:51Z) - Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
arXiv Detail & Related papers (2022-03-18T05:49:13Z) - Near-optimal estimation of smooth transport maps with kernel
sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z) - Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z) - Dynamic Scale Training for Object Detection [111.33112051962514]
We propose a Dynamic Scale Training paradigm (abbreviated as DST) to mitigate scale variation challenge in object detection.
Experimental results demonstrate the efficacy of our proposed DST towards scale variation handling.
It does not introduce inference overhead and could serve as a free lunch for general detection configurations.
arXiv Detail & Related papers (2020-04-26T16:48:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.