Families of Optimal Transport Kernels for Cell Complexes
- URL: http://arxiv.org/abs/2507.16569v1
- Date: Tue, 22 Jul 2025 13:21:28 GMT
- Title: Families of Optimal Transport Kernels for Cell Complexes
- Authors: Rahul Khorana
- Abstract summary: We derive an explicit expression for the Wasserstein distance between cell complex signal distributions in terms of a Hodge-Laplacian matrix. This leads to a structurally meaningful measure to compare CW complexes and define the optimal transportation map. In order to simultaneously include both feature and structure information, we extend the Fused Gromov-Wasserstein distance to CW complexes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances have discussed cell complexes as ideal learning representations. However, there is a lack of available machine learning methods suitable for learning on CW complexes. In this paper, we derive an explicit expression for the Wasserstein distance between cell complex signal distributions in terms of a Hodge-Laplacian matrix. This leads to a structurally meaningful measure to compare CW complexes and define the optimal transportation map. In order to simultaneously include both feature and structure information, we extend the Fused Gromov-Wasserstein distance to CW complexes. Finally, we introduce novel kernels over the space of probability measures on CW complexes based on the dual formulation of optimal transport.
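The listing does not reproduce the paper's closed-form expression, but the ingredients can be sketched: build the k-th Hodge Laplacian from boundary matrices, model signals as zero-mean Gaussian distributions whose covariance is the Laplacian's pseudoinverse, and compare them with the standard Bures-Wasserstein formula. The sketch below rests on those assumptions; `hodge_laplacian`, `bures_wasserstein`, and the toy complex are illustrative, not the paper's construction.

```python
import numpy as np
from scipy.linalg import sqrtm

def hodge_laplacian(Bk, Bk1):
    """L_k = B_k^T B_k + B_{k+1} B_{k+1}^T from boundary matrices."""
    return Bk.T @ Bk + Bk1 @ Bk1.T

def bures_wasserstein(S1, S2):
    """2-Wasserstein distance between N(0, S1) and N(0, S2)."""
    root = sqrtm(S1)
    cross = sqrtm(root @ S2 @ root)
    return np.sqrt(max(np.trace(S1 + S2 - 2 * cross).real, 0.0))

# Toy CW complex: a filled triangle (3 vertices, 3 edges, one 2-cell).
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)    # vertex-edge incidence
B2 = np.array([[1], [-1], [1]], dtype=float)  # edge-triangle incidence

L1 = hodge_laplacian(B1, B2)                  # Hodge Laplacian on edge signals
cov_a = np.linalg.pinv(L1)                    # first signal model
cov_b = np.linalg.pinv(L1 + 0.5 * np.eye(3))  # stand-in for a second complex
print(bures_wasserstein(cov_a, cov_b))
```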
Related papers
- Loss-Complexity Landscape and Model Structure Functions [56.01537787608726]
We develop a framework for dualizing the Kolmogorov structure function $h_x(\alpha)$. We establish a mathematical analogy between information-theoretic constructs and statistical mechanics. We explicitly prove the Legendre-Fenchel duality between the structure function and free energy.
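For reference, the Legendre-Fenchel transform behind the claimed duality has the standard form below; the paper's precise statement for $h_x(\alpha)$ and the free energy is not reproduced in this listing.

```latex
% Standard Legendre-Fenchel pair; biconjugation recovers f when f is
% convex and lower semicontinuous.
f^{*}(\beta) = \sup_{\alpha}\bigl(\beta\alpha - f(\alpha)\bigr),
\qquad
f(\alpha) = \sup_{\beta}\bigl(\beta\alpha - f^{*}(\beta)\bigr).
```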
arXiv Detail & Related papers (2025-07-17T21:31:45Z) - Operator K-complexity in DSSYK: Krylov complexity equals bulk length [0.0]
We study the notion of complexity under time evolution in chaotic quantum systems with holographic duals. We find that Krylov complexity is given by the expectation value of a length operator acting on the Hilbert space of the theory. We conclude that evolution on the Krylov chain can equivalently be understood as a particle moving in a Morse potential.
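As background, the standard Lanczos-based definition of Krylov complexity reads as follows; conventions may differ from the paper's.

```latex
% Heisenberg-evolved operator expanded in the orthonormal Krylov basis
% O_n produced by the Lanczos algorithm; K(t) is the mean position on
% the Krylov chain.
\mathcal{O}(t) = \sum_{n} i^{\,n}\,\varphi_n(t)\,\mathcal{O}_n,
\qquad
K(t) = \sum_{n} n\,\lvert\varphi_n(t)\rvert^{2}.
```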
arXiv Detail & Related papers (2024-12-19T18:54:30Z) - Fast and scalable Wasserstein-1 neural optimal transport solver for single-cell perturbation prediction [55.89763969583124]
Optimal transport (OT) theory provides a principled framework for constructing such mappings. We propose a novel solver based on the Wasserstein-1 ($W_1$) dual formulation. Our experiments demonstrate that the proposed $W_1$ neural optimal transport solver can mimic the $W_2$ OT in finding a unique and monotonic map.
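For context, the dual formulation this solver is built on is the standard Kantorovich-Rubinstein form of $W_1$, the same kind of dual the main paper uses to define kernels:

```latex
% Kantorovich-Rubinstein duality: W_1 as a supremum over 1-Lipschitz
% potentials, the objective a neural W_1 solver parametrizes.
W_1(\mu,\nu) = \sup_{\|f\|_{\mathrm{Lip}} \le 1}
\;\mathbb{E}_{x\sim\mu}[f(x)] - \mathbb{E}_{y\sim\nu}[f(y)].
```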
arXiv Detail & Related papers (2024-11-01T14:23:19Z) - CW-CNN & CW-AN: Convolutional Networks and Attention Networks for CW-Complexes [0.0]
We present a novel framework for learning on CW-complex structured data points.
We develop notions of convolution and attention that are well defined for CW-complexes.
We create the first Hodge informed neural network that can receive a CW-complex as input.
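As a rough illustration of what a Hodge-informed convolution can look like, the sketch below applies a polynomial filter in the k-th Hodge Laplacian to signals on k-cells. It is an assumed stand-in in the spirit of spectral graph convolutions, not the paper's actual CW-CNN layer.

```python
import numpy as np

def hodge_conv(L_k, X, weights):
    """Apply the filter sum_p w_p * L_k^p to cell signals X (cells x features)."""
    out = np.zeros_like(X)
    power = np.eye(L_k.shape[0])
    for w in weights:           # weights = [w_0, w_1, ..., w_P]
        out += w * (power @ X)
        power = power @ L_k     # next power of the Hodge Laplacian
    return np.tanh(out)         # pointwise nonlinearity
```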
arXiv Detail & Related papers (2024-08-29T16:32:24Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Optimal Transport for Kernel Gaussian Mixture Models [1.631115063641726]
The Wasserstein distance from optimal mass transport is a powerful mathematical tool.
We propose a Wasserstein-type metric to compute the distance between two Gaussian mixtures in a kernel Hilbert space.
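The building block that such mixture metrics lift to the mixture level is the closed-form $W_2$ distance between two Gaussians (a standard result; the paper's kernel-space version is not reproduced here):

```latex
% 2-Wasserstein distance between Gaussian measures.
W_2^2\bigl(\mathcal{N}(m_1,\Sigma_1),\,\mathcal{N}(m_2,\Sigma_2)\bigr)
= \|m_1 - m_2\|^2
+ \operatorname{tr}\Bigl(\Sigma_1 + \Sigma_2
- 2\bigl(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2}\bigr)^{1/2}\Bigr).
```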
arXiv Detail & Related papers (2023-10-28T04:31:49Z) - CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose Common randOm REconstruction (CORE), which can be used to compress information transmitted between machines.
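A generic sketch of the shared-randomness idea the acronym suggests: sender and receiver regenerate the same random projection from a common seed, so only a low-dimensional sketch is transmitted. Everything below is an assumption for illustration; CORE's actual scheme may differ.

```python
import numpy as np

def compress(vec, seed, m):
    """Project vec to m numbers using a seed-shared Gaussian matrix."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, vec.size)) / np.sqrt(m)
    return A @ vec                        # only this crosses the wire

def reconstruct(sketch, seed, n):
    """Regenerate the same matrix from the seed; A^T A is identity in expectation."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((sketch.size, n)) / np.sqrt(sketch.size)
    return A.T @ sketch

g = np.random.randn(1000)                 # e.g. a gradient
g_hat = reconstruct(compress(g, seed=7, m=100), seed=7, n=1000)
```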
arXiv Detail & Related papers (2023-09-23T08:45:27Z) - Linear Self-Attention Approximation via Trainable Feedforward Kernel [77.34726150561087]
In pursuit of faster computation, Efficient Transformers demonstrate an impressive variety of approaches.
We aim to expand the idea of trainable kernel methods to approximate the self-attention mechanism of the Transformer architecture.
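The standard linearization trick behind such methods replaces softmax(QK^T)V with phi(Q)(phi(K)^T V) for a feature map phi, cutting the cost from quadratic to linear in sequence length. The sketch below uses a fixed phi (elu + 1); the paper's contribution is to make the kernel trainable, which is not shown here.

```python
import numpy as np

def phi(x):
    """Fixed positive feature map (elu(x) + 1); a trainable kernel would replace this."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    Qf, Kf = phi(Q), phi(K)        # feature-mapped queries and keys
    KV = Kf.T @ V                  # (d, d_v): keys and values summarized once
    Z = Qf @ Kf.sum(axis=0)        # per-query normalization
    return (Qf @ KV) / (Z[:, None] + eps)

n, d = 8, 4
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = linear_attention(Q, K, V)    # (n, d), computed without an n x n matrix
```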
arXiv Detail & Related papers (2022-11-08T08:14:11Z) - Signal Processing on Cell Complexes [7.0471949371778795]
We give an introduction to signal processing on (abstract) regular cell complexes.
We discuss how appropriate Hodge Laplacians for these cell complexes can be derived.
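The construction referred to here is standard: the k-th Hodge Laplacian is assembled from the complex's boundary matrices, with the graph Laplacian as the k = 0 case.

```latex
% k-th Hodge Laplacian from boundary matrices B_k.
L_k = B_k^{\top} B_k + B_{k+1} B_{k+1}^{\top},
\qquad
L_0 = B_1 B_1^{\top} \quad \text{(the graph Laplacian)}.
```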
arXiv Detail & Related papers (2021-10-11T21:11:59Z) - Switch Spaces: Learning Product Spaces with Sparse Gating [48.591045282317424]
We propose Switch Spaces, a data-driven approach for learning representations in product space.
We introduce sparse gating mechanisms that learn to choose, combine and switch spaces.
Experiments on knowledge graph completion and item recommendations show that the proposed switch space achieves new state-of-the-art performances.
arXiv Detail & Related papers (2021-02-17T11:06:59Z) - On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Cell Complex Neural Networks [0.8121462458089141]
We propose a general, unifying construction for performing neural network-type computations on cell complexes.
We show how our cell complex autoencoder construction can give, in the special case \textbf{cell2vec}, a generalization of node2vec.
arXiv Detail & Related papers (2020-10-02T01:38:12Z) - CWY Parametrization: a Solution for Parallelized Optimization of Orthogonal and Stiefel Matrices [41.57234424773276]
We introduce an efficient approach for optimization over orthogonal groups on highly parallel computation units such as GPUs or TPUs.
We further develop a novel Truncated CWY (or T-CWY) approach for Stiefel manifold parametrization.
We apply our methods to train recurrent neural network architectures in the tasks of neural machine translation and video prediction.
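The representation behind CWY-style parametrizations can be sketched through the compact WY identity: a product of Householder reflections I - 2 v_i v_i^T / ||v_i||^2 collapses to I - V T^{-1} V^T with T triangular, which suits GPUs and TPUs because it avoids a long sequential product. The code below only checks this identity; the T-CWY Stiefel variant is not shown.

```python
import numpy as np

def compact_wy(V):
    """Product of Householder reflections for columns of V, in compact WY form."""
    n, k = V.shape
    S = V.T @ V                                    # Gram matrix of the vectors
    T = np.triu(S, 1) + np.diag(np.diag(S) / 2.0)  # strict upper part + ||v_i||^2 / 2
    return np.eye(n) - V @ np.linalg.solve(T, V.T)

V = np.random.randn(6, 3)
Q = compact_wy(V)
print(np.allclose(Q @ Q.T, np.eye(6)))             # orthogonal up to fp error
```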
arXiv Detail & Related papers (2020-04-18T17:58:43Z) - Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis [44.89716235936401]
We develop a novel Multiresolution Learning (MRTL) algorithm for efficiently learning interpretable spatial patterns.
When applied to two real-world datasets, MRTL demonstrates a 4-5x speedup compared to a fixed resolution approach.
arXiv Detail & Related papers (2020-02-13T15:50:48Z)