Learning in Feature Spaces via Coupled Covariances: Asymmetric Kernel SVD and Nyström method
- URL: http://arxiv.org/abs/2406.08748v1
- Date: Thu, 13 Jun 2024 02:12:18 GMT
- Title: Learning in Feature Spaces via Coupled Covariances: Asymmetric Kernel SVD and Nyström method
- Authors: Qinghua Tao, Francesco Tonin, Alex Lambert, Yingyi Chen, Panagiotis Patrinos, Johan A. K. Suykens
- Abstract summary: We introduce a new asymmetric learning paradigm based on the coupled covariance eigenproblem (CCE). We formalize the asymmetric Nyström method through a finite-sample approximation to speed up training.
- Score: 21.16129116282759
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In contrast with Mercer kernel-based approaches, as used, e.g., in Kernel Principal Component Analysis (KPCA), it was previously shown that Singular Value Decomposition (SVD) inherently relates to asymmetric kernels, and Asymmetric Kernel Singular Value Decomposition (KSVD) has been proposed. However, the existing formulation of KSVD cannot work with infinite-dimensional feature mappings, its variational objective can be unbounded, and the method needs further numerical evaluation and exploration towards machine learning. In this work, i) we introduce a new asymmetric learning paradigm based on the coupled covariance eigenproblem (CCE) through covariance operators, allowing infinite-dimensional feature maps. The solution to CCE is ultimately obtained from the SVD of the induced asymmetric kernel matrix, providing links to KSVD. ii) Starting from the integral equations corresponding to a pair of coupled adjoint eigenfunctions, we formalize the asymmetric Nyström method through a finite-sample approximation to speed up training. iii) We provide the first empirical evaluations verifying the practical utility and benefits of KSVD, and compare with methods resorting to symmetrization or linear SVD across multiple tasks.
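For context on items i) and ii): for an asymmetric kernel $\kappa$, a pair of coupled adjoint eigenfunctions is classically characterized by shifted integral equations. A hedged rendering (the notation is illustrative, not taken from the paper):

$$\int \kappa(x,z)\, v_s(z)\, \mathrm{d}\rho_z(z) = \sigma_s\, u_s(x), \qquad \int \kappa(x,z)\, u_s(x)\, \mathrm{d}\rho_x(x) = \sigma_s\, v_s(z),$$

so $(u_s, v_s, \sigma_s)$ act as left/right singular functions with singular value $\sigma_s$, and a Nyström-type method replaces the integrals with finite-sample averages. The numpy sketch below is a minimal illustration on synthetic data, not the authors' implementation; the kernel `kappa`, the subsample size `m`, and all variable names are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa(X, Z, W):
    # Illustrative asymmetric kernel: kappa(x, z) = tanh(x^T W z) with a
    # non-symmetric W, so G[i, j] = kappa(x_i, z_j) gives G != G^T.
    return np.tanh(X @ W @ Z.T)

n, p, r, m = 500, 8, 10, 50
X = rng.standard_normal((n, p))           # "row" samples
Z = rng.standard_normal((n, p))           # "column" samples
W = rng.standard_normal((p, p))           # deliberately non-symmetric

# (i) KSVD: per the abstract, the CCE solution is read off from the SVD
# of the induced asymmetric kernel matrix G.
G = kappa(X, Z, W)
U, s, Vt = np.linalg.svd(G, full_matrices=False)
U_r, s_r, V_r = U[:, :r], s[:r], Vt[:r].T

# (ii) Asymmetric Nystrom: approximate the leading singular pairs from an
# m-point subsample, then extend to all n points via the coupled relations
# u = G v / sigma and v = G^T u / sigma, with the integrals replaced by
# sums over the subsample (directions only; consistent sample-size
# rescaling factors are omitted in this sketch).
idx = rng.choice(n, size=m, replace=False)
Um, sm, Vmt = np.linalg.svd(kappa(X[idx], Z[idx], W), full_matrices=False)
Um_r, sm_r, Vm_r = Um[:, :r], sm[:r], Vmt[:r].T
U_tilde = kappa(X, Z[idx], W) @ Vm_r / sm_r      # approx. left vectors
V_tilde = kappa(X[idx], Z, W).T @ Um_r / sm_r    # approx. right vectors

rel_err = np.linalg.norm(G - (U_r * s_r) @ V_r.T) / np.linalg.norm(G)
print(f"rank-{r} KSVD reconstruction error: {rel_err:.3f}")
```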
Related papers
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with that of a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes [20.023544206079304]
We propose Kernel-Eigen Pair Sparse Variational Gaussian Processes (KEP-SVGP) for building uncertainty-aware self-attention.
Experiments verify the method's strong performance and efficiency on in-distribution, distribution-shift, and out-of-distribution benchmarks.
arXiv Detail & Related papers (2024-02-02T15:05:13Z) - CFASL: Composite Factor-Aligned Symmetry Learning for Disentanglement in Variational AutoEncoder [2.048226951354646]
We propose a novel method, Composite Factor-Aligned Symmetry Learning (CFASL), which is integrated into VAEs for learning symmetry-based disentanglement.
CFASL incorporates three novel features for learning symmetry-based disentanglement.
CFASL demonstrates a significant improvement in disentanglement under both single-factor and multi-factor change conditions.
arXiv Detail & Related papers (2024-01-17T00:46:24Z) - Nonlinear SVD with Asymmetric Kernels: feature learning and asymmetric Nyström method [14.470859959783995]
Asymmetric data, such as directed graphs, arise naturally in real life.
This paper tackles the asymmetric kernel-based learning problem.
Experiments show that asymmetric KSVD learns features that outperform those from Mercer kernels.
arXiv Detail & Related papers (2023-06-12T11:39:34Z) - Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea is to fit an RKHS-based approximation of the corresponding Fokker-Planck equation to observations of the process.
arXiv Detail & Related papers (2023-05-24T20:43:47Z) - Measuring dissimilarity with diffeomorphism invariance [94.02751799024684]
We introduce DID, a pairwise dissimilarity measure applicable to a wide range of data spaces.
We prove that DID enjoys properties which make it relevant for theoretical study and practical use.
arXiv Detail & Related papers (2022-02-11T13:51:30Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high-fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Semi-orthogonal Embedding for Efficient Unsupervised Anomaly Segmentation [6.135577623169028]
We generalize an ad-hoc method, random feature selection, into a semi-orthogonal embedding for robust approximation (a minimal construction is sketched after this list).
Supported by ablation studies, the proposed method achieves a new state of the art by significant margins on the MVTec AD, KolektorSDD, KolektorSDD2, and mSTC datasets.
arXiv Detail & Related papers (2021-05-31T07:02:20Z) - Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z) - FREDE: Linear-Space Anytime Graph Embeddings [12.53022591889574]
Low-dimensional representations, or embeddings, of a graph's nodes facilitate data mining tasks.
FREDE (FREquent Directions Embedding) is a sketching-based method that processes rows of the similarity matrix individually while iteratively improving embedding quality (a minimal Frequent Directions routine is sketched after this list).
Our evaluation on variably sized networks shows that FREDE performs as well as SVD and competitively against current state-of-the-art methods in diverse data mining tasks.
arXiv Detail & Related papers (2020-06-08T16:51:24Z) - Inverse Learning of Symmetries [71.62109774068064]
We learn the symmetry transformation with a model consisting of two latent subspaces.
Our approach is based on the deep information bottleneck in combination with a continuous mutual information regulariser.
Our model outperforms state-of-the-art methods on artificial and molecular datasets.
arXiv Detail & Related papers (2020-02-07T13:48:52Z)
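For the Semi-orthogonal Embedding entry above, a minimal sketch of the core construction under assumptions: a semi-orthogonal projection (orthonormal columns via QR of a Gaussian matrix) generalizes keeping a random subset of feature coordinates. Dimensions and names are illustrative; the paper's full anomaly-segmentation pipeline is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_orthogonal_embedding(d_in, d_out, rng):
    # W has orthonormal columns (W^T W = I_{d_out}): a semi-orthogonal
    # matrix obtained by QR-factorizing a Gaussian random matrix.
    Q, _ = np.linalg.qr(rng.standard_normal((d_in, d_out)))
    return Q

F = rng.standard_normal((1000, 448))      # stand-in for stacked CNN features
W = semi_orthogonal_embedding(448, 100, rng)
F_proj = F @ W                            # semi-orthogonal embedding

# The ad-hoc baseline it generalizes: keep 100 random coordinates.
sel = rng.choice(448, size=100, replace=False)
F_sel = F[:, sel]
```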
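For the FREDE entry above, a minimal sketch of the Frequent Directions routine it builds on (Liberty, 2013). This is the generic streaming sketch, not FREDE itself; the buffer handling shown is one common variant.

```python
import numpy as np

def frequent_directions(A, ell):
    # Stream the rows of the n x d matrix A into an ell x d sketch B so
    # that B^T B approximates A^T A (spectral error about 2*||A||_F^2/ell).
    n, d = A.shape
    assert ell % 2 == 0 and ell <= d
    B = np.zeros((ell, d))
    free = 0                              # index of the next zero row of B
    for row in A:
        B[free] = row
        free += 1
        if free == ell:                   # no zero rows left: shrink
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2      # squared mid-spectrum value
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt           # zeroes the bottom half of B
            free = ell // 2
    return B

# Usage: sketch a 2000 x 256 matrix into 16 directions.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 256))
B = frequent_directions(A, ell=16)
err = np.linalg.norm(A.T @ A - B.T @ B, 2)  # spectral-norm sketch error
```

FREDE applies this row-streaming idea to a node-similarity matrix, so the embedding runs in linear space and can be stopped anytime, per the summary above.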
This list is automatically generated from the titles and abstracts of the papers on this site.