Invertible Kernel PCA with Random Fourier Features
- URL: http://arxiv.org/abs/2303.05043v1
- Date: Thu, 9 Mar 2023 05:42:10 GMT
- Title: Invertible Kernel PCA with Random Fourier Features
- Authors: Daniel Gedon, Antônio H. Ribeiro, Niklas Wahlström, Thomas B. Schön
- Abstract summary: Kernel principal component analysis (kPCA) is a widely studied method to construct a low-dimensional data representation after a nonlinear transformation.
We present an alternative method where the reconstruction follows naturally from the compression step.
We show that ikPCA performs similarly to kPCA with supervised reconstruction on denoising tasks, making it a strong alternative.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel principal component analysis (kPCA) is a widely studied method to
construct a low-dimensional data representation after a nonlinear
transformation. The prevailing method to reconstruct the original input signal
from kPCA -- an important task for denoising -- requires us to solve a
supervised learning problem. In this paper, we present an alternative method
where the reconstruction follows naturally from the compression step. We first
approximate the kernel with random Fourier features. Then, we exploit the fact
that the nonlinear transformation is invertible in a certain subdomain. Hence,
the name invertible kernel PCA (ikPCA). We experiment with different
data modalities and show that ikPCA performs similarly to kPCA with supervised
reconstruction on denoising tasks, making it a strong alternative.
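The pipeline the abstract describes (approximate the kernel with random Fourier features, run linear PCA in feature space, then invert the feature map on its invertible subdomain) can be sketched in a few lines of NumPy. The sketch below is illustrative only: it uses the paired sine-cosine form of the random Fourier feature map so that the angle w^T x can be recovered with atan2, whereas the paper works with a phase-shifted cosine map and characterizes its invertible subdomain in detail; the function names and the least-squares inversion step are our own assumptions, not the authors' code.

```python
import numpy as np

def fit_ikpca(X, n_features=500, sigma=1.0, n_components=10, seed=0):
    """Sketch: approximate an RBF kernel with random Fourier features,
    then run ordinary linear PCA in the feature space."""
    rng = np.random.default_rng(seed)
    # Spectral samples of the RBF kernel exp(-||x - y||^2 / (2 sigma^2)).
    W = rng.normal(0.0, 1.0 / sigma, size=(n_features, X.shape[1]))
    A = X @ W.T
    Z = np.hstack([np.cos(A), np.sin(A)]) / np.sqrt(n_features)  # RFF map
    mu = Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z - mu, full_matrices=False)
    return {"W": W, "mu": mu, "V": Vt[:n_components].T, "D": n_features}

def reconstruct(model, X):
    """Compress to the principal subspace, then invert the feature map:
    recover the angles with atan2 and solve W x = theta by least squares.
    Valid only while each w_i^T x stays inside (-pi, pi)."""
    W, mu, V, D = model["W"], model["mu"], model["V"], model["D"]
    A = X @ W.T
    Z = np.hstack([np.cos(A), np.sin(A)]) / np.sqrt(D)
    Z_hat = (Z - mu) @ V @ V.T + mu                 # PCA reconstruction in feature space
    theta = np.arctan2(Z_hat[:, D:], Z_hat[:, :D])  # angle of each (cos, sin) pair
    X_hat, *_ = np.linalg.lstsq(W, theta.T, rcond=None)
    return X_hat.T
```

For denoising, one would fit on training data and call reconstruct on noisy inputs: a larger n_features tightens the kernel approximation, while a smaller n_components discards more of the noise. The invertibility condition suggests scaling the data (or choosing sigma) so that the arguments w_i^T x rarely leave (-pi, pi).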
Related papers
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap, where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z) - Contrastive Learning Can Find An Optimal Basis For Approximately
View-Invariant Functions [18.440569330385323]
We show that multiple contrastive learning methods can be reinterpreted as learning kernel functions that approximate a fixed positive-pair kernel.
We prove that a simple representation obtained by combining this kernel with PCA provably minimizes the worst-case approximation error of linear predictors.
arXiv Detail & Related papers (2022-10-04T20:02:52Z) - PCA-Boosted Autoencoders for Nonlinear Dimensionality Reduction in Low
Data Regimes [0.2925461470287228]
We propose a technique that harnesses the best of both worlds: an autoencoder that leverages PCA to perform well on scarce nonlinear data.
A synthetic example is presented first to study the effects of data nonlinearity and size on the performance of the proposed method.
arXiv Detail & Related papers (2022-05-23T23:46:52Z) - Stochastic and Private Nonconvex Outlier-Robust PCA [11.688030627514532]
Outlier-robust PCA seeks an underlying low-dimensional linear subspace from a dataset corrupted with outliers.
Our methods involve geodesic gradient descent and come with a novel convergence analysis.
The main application is an effectively private algorithm for outlier-robust PCA.
arXiv Detail & Related papers (2022-03-17T12:00:47Z) - Neural Fields as Learnable Kernels for 3D Reconstruction [101.54431372685018]
We present a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points.
arXiv Detail & Related papers (2021-11-26T18:59:04Z) - Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Sparse Spectrum Warped Input Measures for Nonstationary Kernel Learning [29.221457769884648]
We propose a general form of explicit, input-dependent, measure-valued warpings for learning nonstationary kernels.
The proposed learning algorithm warps inputs as conditional Gaussian measures that control the smoothness of a standard stationary kernel.
We demonstrate a remarkable efficiency in the number of parameters of the warping functions in learning problems with both small and large data regimes.
arXiv Detail & Related papers (2020-10-09T01:10:08Z) - Improved guarantees and a multiple-descent curve for Column Subset
Selection and the Nyström method [76.73096213472897]
We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees.
Our approach leads to significantly better bounds for datasets with known rates of singular value decay.
We show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
arXiv Detail & Related papers (2020-02-21T00:43:06Z) - Nonparametric Bayesian volatility learning under microstructure noise [2.812395851874055]
We study the problem of learning the volatility under market microstructure noise.
Specifically, we consider noisy, discrete-time observations from a stochastic differential equation.
We develop a novel computational method to learn the diffusion coefficient of the equation.
arXiv Detail & Related papers (2018-05-15T07:32:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.