Learning Reconstructive Embeddings in Reproducing Kernel Hilbert Spaces via the Representer Theorem
- URL: http://arxiv.org/abs/2601.05811v1
- Date: Fri, 09 Jan 2026 14:35:19 GMT
- Title: Learning Reconstructive Embeddings in Reproducing Kernel Hilbert Spaces via the Representer Theorem
- Authors: Enrique Feito-Casares, Francisco M. Melgarejo-Meseguer, José-Luis Rojo-Álvarez,
- Abstract summary: This work proposes new algorithms for reconstruction-based manifold learning within Reproducing Kernel Hilbert Spaces (RKHS). A separable operator-valued kernel extends the formulation to vector-valued data while retaining the simplicity of a single scalar similarity function. A subsequent kernel-alignment task projects the data into a lower-dimensional latent space whose Gram matrix aims to match the high-dimensional reconstruction kernel.
- Score: 2.0573301822495553
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Motivated by the growing interest in representation learning approaches that uncover the latent structure of high-dimensional data, this work proposes new algorithms for reconstruction-based manifold learning within Reproducing Kernel Hilbert Spaces (RKHS). Each observation is first reconstructed as a linear combination of the other samples in the RKHS, by optimizing a vector-valued form of the Representer Theorem that exploits their autorepresentation property. A separable operator-valued kernel extends the formulation to vector-valued data while retaining the simplicity of a single scalar similarity function. A subsequent kernel-alignment task projects the data into a lower-dimensional latent space whose Gram matrix aims to match the high-dimensional reconstruction kernel, thus transferring the auto-reconstruction geometry of the RKHS to the embedding. The proposed algorithms therefore extend the autorepresentation property, exhibited by many natural datasets, by adapting well-known results from kernel learning theory. Numerical experiments on both simulated (concentric circles and Swiss roll) and real (cancer molecular activity and IoT network intrusion) datasets provide empirical evidence of the practical effectiveness of the proposed approach.
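The abstract outlines a two-stage pipeline: (i) reconstruct each mapped sample from the others in the RKHS, where the Representer Theorem reduces the problem to a ridge system in the Gram matrix, and (ii) find low-dimensional coordinates whose Gram matrix aligns with a reconstruction kernel. The sketch below is a minimal scalar-kernel reading of that pipeline in NumPy, not the authors' implementation: the RBF kernel, the ridge penalty lam, and the stand-in reconstruction kernel K_rec = W @ K @ W.T (the Gram matrix of the reconstructed features) are assumptions, and the separable operator-valued extension for vector-valued data is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def autorepresentation_weights(K, lam=1e-2):
    # Reconstruct phi(x_i) ~ sum_{j != i} w_ij phi(x_j) in the RKHS.
    # The regularized objective depends on the data only through K,
    # so each row solves the ridge system (K_{-i,-i} + lam*I) w = k_{-i,i}.
    n = K.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        A = K[np.ix_(idx, idx)] + lam * np.eye(n - 1)
        W[i, idx] = np.linalg.solve(A, K[idx, i])
    return W

def kernel_alignment_embedding(K_target, d=2):
    # Best rank-d Gram-matrix match Y @ Y.T ~ K_target in Frobenius norm:
    # the top-d eigenvectors scaled by the square roots of their eigenvalues.
    Ks = 0.5 * (K_target + K_target.T)
    vals, vecs = np.linalg.eigh(Ks)
    top = np.argsort(vals)[::-1][:d]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Demo on concentric circles, one of the paper's synthetic settings.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.standard_normal((200, 2))

K = rbf_kernel(X, gamma=0.5)
W = autorepresentation_weights(K, lam=1e-2)
K_rec = W @ K @ W.T                 # assumed reconstruction kernel
Y = kernel_alignment_embedding(K_rec, d=2)
print(Y.shape)                      # (200, 2) latent coordinates
```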
Related papers
- Multi-Dimensional Visual Data Recovery: Scale-Aware Tensor Modeling and Accelerated Randomized Computation [51.65236537605077]
We propose a new tensor compression optimization technique based on the fully-connected tensor network (FCTN). FCTN has significant advantages in correlation characterization and transposition invariance, and has shown notable results in multi-dimensional data processing and analysis. We derive efficient algorithms with guarantees to solve the formulated models.
arXiv Detail & Related papers (2026-02-13T14:56:37Z)
- A joint optimization approach to identifying sparse dynamics using least squares kernel collocation [70.13783231186183]
We develop an all-at-once modeling framework for learning systems of ordinary differential equations (ODEs) from scarce, partial, and noisy observations of the states. The proposed methodology combines sparse recovery strategies for the ODE over a function library with techniques from reproducing kernel Hilbert space (RKHS) theory for estimating the state and discretizing the ODE (a generic sparse-regression sketch appears after this list).
arXiv Detail & Related papers (2025-11-23T18:04:15Z)
- Iso-Riemannian Optimization on Learned Data Manifolds [6.345340156849189]
We introduce a principled framework for optimization on learned data manifolds using iso-Riemannian geometry. We show that our approach yields interpretable barycentres, improved clustering, and provably efficient solutions to inverse problems. These results establish that optimization under iso-Riemannian geometry can overcome distortions inherent to learned manifold mappings.
arXiv Detail & Related papers (2025-10-23T22:34:55Z)
- IIKL: Isometric Immersion Kernel Learning with Riemannian Manifold for Geometric Preservation [15.82760919569542]
Previous research generally mapped non-Euclidean data into Euclidean space during representation learning. In this paper, we propose a novel Isometric Immersion Kernel Learning (IIKL) method. We show that our method can reduce the inner product invariant loss by more than 90% compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-05-07T12:08:33Z)
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets. In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem. This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem (a minimal Gromov-Wasserstein sketch appears after this list).
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator (a generic eigenspace sketch appears after this list).
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- On Hypothesis Transfer Learning of Functional Linear Models [7.243632426715939]
We study transfer learning (TL) for functional linear regression (FLR) under the Reproducing Kernel Hilbert Space (RKHS) framework. We measure the similarity across tasks using the RKHS distance, allowing the type of information being transferred to be tied to the properties of the imposed RKHS.
arXiv Detail & Related papers (2022-06-09T04:50:16Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Hierarchical regularization networks for sparsification based learning on noisy datasets [0.0]
The hierarchy follows from approximation spaces identified at successively finer scales.
For promoting model generalization at each scale, we also introduce a novel, projection-based penalty operator across multiple dimensions.
Results show the performance of the approach as a data reduction and modeling strategy on both synthetic and real datasets.
arXiv Detail & Related papers (2020-06-09T18:32:24Z)
- Two-Dimensional Semi-Nonnegative Matrix Factorization for Clustering [50.43424130281065]
We propose a new Semi-Nonnegative Matrix Factorization method for 2-dimensional (2D) data, named TS-NMF.
It overcomes a drawback of existing methods, which seriously damage the spatial information of the data by converting 2D data to vectors in a preprocessing step (a baseline Semi-NMF sketch appears after this list).
arXiv Detail & Related papers (2020-05-19T05:54:14Z)
- Kernel Bi-Linear Modeling for Reconstructing Data on Manifolds: The Dynamic-MRI Case [12.925252330672246]
A kernel-based framework is developed for the dynamic-(d)MRI data-recovery problem.
The proposed methodology uses no training data and employs no graph Laplacian matrix to penalize the optimization task.
The framework is validated on synthetically generated dMRI data, where comparisons against state-of-the-art schemes highlight the rich potential of the proposed approach in data-recovery problems.
arXiv Detail & Related papers (2020-02-27T02:42:08Z)
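For the sparse-dynamics entry above: the joint kernel-collocation method itself is not reproduced here. As a hedged stand-in, the classic sequentially thresholded least squares (STLSQ) routine below illustrates the "sparse recovery over a function library" ingredient in plain NumPy; the library, the threshold lam, and the toy system are illustrative assumptions.

```python
import numpy as np

def stlsq(Theta, dX, lam=0.1, n_iter=10):
    # Sequentially thresholded least squares: alternately fit by least
    # squares and zero out coefficients below lam, yielding a sparse
    # dynamics model dX ~ Theta @ Xi over the candidate library Theta.
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < lam
        Xi[small] = 0.0
        for k in range(dX.shape[1]):
            keep = ~small[:, k]
            if keep.any():
                Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], dX[:, k],
                                              rcond=None)[0]
    return Xi

# Toy recovery of dx/dt = -2x from noisy data with library [1, x, x^2].
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-2.0 * t)
dx = -2.0 * x + 0.01 * np.random.default_rng(1).standard_normal(t.size)
Theta = np.c_[np.ones_like(x), x, x ** 2]
print(stlsq(Theta, dx[:, None], lam=0.2))   # approx [[0], [-2], [0]]
```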
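For the distributional-reduction entry above: the full framework is not sketched, but its Gromov-Wasserstein building block can be exercised directly. The snippet assumes the POT library (pip install pot) and its ot.gromov.gromov_wasserstein solver as found in recent POT versions; the point clouds and uniform marginals are illustrative.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# Align two point clouds of different dimensionality through their internal
# distance structure alone; no shared ambient space is needed.
rng = np.random.default_rng(0)
Xs = rng.standard_normal((30, 2))
Xt = rng.standard_normal((40, 5))
C1 = ot.dist(Xs, Xs)                 # intra-source squared distances
C2 = ot.dist(Xt, Xt)                 # intra-target squared distances
C1, C2 = C1 / C1.max(), C2 / C2.max()
p, q = ot.unif(30), ot.unif(40)      # uniform sample weights
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
print(T.shape)                       # (30, 40) soft correspondence matrix
```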
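For the augmentation-based self-supervised learning entry above: the statistical analysis is out of scope here, but the object it revolves around, the top eigenspace of a graph Laplacian, is straightforward to compute. A minimal NumPy sketch, assuming an RBF k-nearest-neighbour affinity graph; all parameter choices are illustrative.

```python
import numpy as np

def top_laplacian_eigenspace(X, k=5, sigma=1.0, d=3):
    # Build an RBF k-NN affinity graph, form the symmetrically normalized
    # Laplacian L = I - D^{-1/2} A D^{-1/2}, and return the eigenvectors of
    # its d smallest eigenvalues, i.e. the top eigenspace of the normalized
    # adjacency operator.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    A = np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    kth = np.sort(A, axis=1)[:, -k][:, None]    # k-th largest per row
    A = np.where(A >= kth, A, 0.0)
    A = np.maximum(A, A.T)                      # symmetrize
    dinv = 1.0 / np.sqrt(A.sum(axis=1) + 1e-12)
    L = np.eye(len(X)) - dinv[:, None] * A * dinv[None, :]
    vals, vecs = np.linalg.eigh(L)              # ascending eigenvalues
    return vecs[:, :d]

Y = top_laplacian_eigenspace(np.random.default_rng(0).standard_normal((100, 3)))
print(Y.shape)  # (100, 3)
```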
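For the TS-NMF entry above: the two-dimensional variant is not reproduced. The sketch below is the classic vectorized Semi-NMF baseline, with multiplicative updates in the style of Ding, Li & Jordan (2010), which TS-NMF is designed to improve upon by avoiding vectorization; the initialization and iteration count are illustrative assumptions.

```python
import numpy as np

def semi_nmf(X, r, n_iter=200, eps=1e-9):
    # Vectorized Semi-NMF: X ~ F @ G.T with F unconstrained and G >= 0,
    # via multiplicative updates that split matrices into positive and
    # negative parts.
    rng = np.random.default_rng(0)
    G = np.abs(rng.standard_normal((X.shape[1], r)))
    pos = lambda M: (np.abs(M) + M) / 2.0
    neg = lambda M: (np.abs(M) - M) / 2.0
    for _ in range(n_iter):
        F = X @ G @ np.linalg.pinv(G.T @ G)     # closed-form F update
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + eps))
    return F, G

X = np.random.default_rng(1).standard_normal((50, 80))
F, G = semi_nmf(X, r=5)
print(np.linalg.norm(X - F @ G.T) / np.linalg.norm(X))  # relative error
```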
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.