Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast
Algorithms
- URL: http://arxiv.org/abs/2306.05815v1
- Date: Fri, 9 Jun 2023 11:27:35 GMT
- Title: Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast
Algorithms
- Authors: Francesco Tonin, Alex Lambert, Panagiotis Patrinos, Johan A. K.
Suykens
- Abstract summary: This paper revisits Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions.
This naturally extends KPCA to multiple objective functions and leads to efficient gradient-based algorithms that avoid the expensive SVD of the Gram matrix.
- Score: 14.964073009670194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of this paper is to revisit Kernel Principal Component Analysis
(KPCA) through dualization of a difference of convex functions. This allows us
to naturally extend KPCA to multiple objective functions and leads to efficient
gradient-based algorithms that avoid the expensive SVD of the Gram matrix.
In particular, we consider objective functions that can be written as Moreau
envelopes, demonstrating how to promote robustness and sparsity within the same
framework. The proposed method is evaluated on synthetic and real-world
benchmarks, showing a significant speedup in KPCA training time as well as
highlighting the benefits in terms of robustness and sparsity.
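To make the computational claim concrete, the following is a minimal sketch of classical KPCA as the paper describes it being done conventionally: an eigendecomposition of the centered Gram matrix, whose cubic cost in the number of samples is exactly what the proposed gradient-based dual algorithms avoid. All function and variable names here are illustrative and not taken from the paper's code.

```python
import numpy as np

def classical_kpca(X, n_components=2, gamma=1.0):
    """Baseline KPCA via eigendecomposition of the centered Gram matrix.

    The eigendecomposition is the O(n^3) step that the paper's
    gradient-based dual formulation sidesteps. Illustrative sketch only.
    """
    n = X.shape[0]
    # RBF Gram matrix: K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the Gram matrix in feature space
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition of the symmetric centered Gram matrix (expensive step)
    vals, vecs = np.linalg.eigh(Kc)
    # Keep the top n_components eigenpairs (eigh returns ascending order)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so projected components have unit feature-space norm
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    # Project the training data onto the principal components
    return Kc @ alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = classical_kpca(X, n_components=2)
print(Z.shape)  # (50, 2)
```

The paper replaces this dense eigensolve with gradient iterations on a dual objective, which also opens the door to robust and sparse variants via Moreau envelopes.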
Related papers
- Efficient Duple Perturbation Robustness in Low-rank MDPs [14.53555781866821]
We introduce duple robustness, i.e., robustness to perturbations on both the feature and factor vectors, for low-rank Markov decision processes (MDPs).
The novel robust MDP formulation is compatible with the function representation view, and therefore, is naturally applicable to practical RL problems with large or even continuous state-action spaces.
It also gives rise to a provably efficient and practical algorithm with theoretical convergence rate guarantee.
arXiv Detail & Related papers (2024-04-11T19:07:15Z)
- Optimizing Two-way Partial AUC with an End-to-end Framework [154.47590401735323]
Area Under the ROC Curve (AUC) is a crucial metric for machine learning.
Recent work shows that the Two-way Partial AUC (TPAUC) is essentially inconsistent with the existing partial AUC metrics.
We present the first trial in this paper to optimize this new metric.
arXiv Detail & Related papers (2022-06-23T12:21:30Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee [51.527543027813344]
We propose systematic and efficient gradient-based methods for both one-way and two-way partial AUC (pAUC)
For both one-way and two-way pAUC, we propose two algorithms and prove their convergence for optimizing their two formulations, respectively.
arXiv Detail & Related papers (2022-03-01T01:59:53Z)
- Learnable Faster Kernel-PCA for Nonlinear Fault Detection: Deep Autoencoder-Based Realization [7.057302509355857]
Kernel principal component analysis (KPCA) is a well-recognized nonlinear dimensionality reduction method.
In this work, a learnable faster realization of the conventional KPCA is proposed.
The proposed DAE-PCA method is proven to be equivalent to KPCA, with the additional advantage of automatically searching for the most suitable nonlinear high-dimensional space according to the inputs.
arXiv Detail & Related papers (2021-12-08T09:41:46Z)
- Boosting RANSAC via Dual Principal Component Pursuit [24.942079487458624]
We introduce Dual Principal Component Pursuit (DPCP) as a robust subspace learning method with strong theoretical support and efficient algorithms.
Experiments on estimating two-view homographies, fundamental and essential matrices, and three-view homographic tensors show that our approach consistently has higher accuracy than state-of-the-art alternatives.
arXiv Detail & Related papers (2021-10-06T17:04:45Z)
- Doubly Robust Off-Policy Actor-Critic: Convergence and Optimality [131.45028999325797]
We develop a doubly robust off-policy AC (DR-Off-PAC) for discounted MDP.
DR-Off-PAC adopts a single timescale structure, in which both actor and critics are updated simultaneously with constant stepsize.
We study the finite-time convergence rate and characterize the sample complexity for DR-Off-PAC to attain an $\epsilon$-accurate optimal policy.
arXiv Detail & Related papers (2021-02-23T18:56:13Z)
- Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
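The thresholding idea mentioned in the last entry can be sketched in a few lines: take the leading principal direction and keep only its largest-magnitude loadings. This is an illustrative simplification of the general approach, not the paper's exact algorithm, and all names below are hypothetical.

```python
import numpy as np

def thresholded_spca(X, k):
    """Sparse PCA by simple thresholding (illustrative sketch):
    keep only the k largest-magnitude loadings of the leading
    principal direction, then renormalize to unit norm."""
    Xc = X - X.mean(axis=0)
    # Leading right singular vector of the centered data
    # is the top principal direction
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]
    # Zero out all but the k largest-magnitude entries
    keep = np.argsort(np.abs(v))[::-1][:k]
    v_sparse = np.zeros_like(v)
    v_sparse[keep] = v[keep]
    # Renormalize so the sparse direction has unit norm
    return v_sparse / np.linalg.norm(v_sparse)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
v = thresholded_spca(X, k=3)
print(np.count_nonzero(v))  # 3
```

The appeal of thresholding is that it costs little beyond one (truncated) SVD while trading a controlled amount of explained variance for sparsity.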
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.