Deep-RLS: A Model-Inspired Deep Learning Approach to Nonlinear PCA
- URL: http://arxiv.org/abs/2011.07458v2
- Date: Wed, 18 Nov 2020 04:45:01 GMT
- Title: Deep-RLS: A Model-Inspired Deep Learning Approach to Nonlinear PCA
- Authors: Zahra Esmaeilbeig, Shahin Khobahi, Mojtaba Soltanalian
- Abstract summary: We propose a task-based deep learning approach, referred to as Deep-RLS, to perform nonlinear PCA.
In particular, we formulate the nonlinear PCA for the blind source separation (BSS) problem and show through numerical analysis that Deep-RLS results in a significant improvement in the accuracy of recovering the source signals.
- Score: 12.629088975832797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we consider the application of model-based deep learning in
nonlinear principal component analysis (PCA). Inspired by the deep unfolding
methodology, we propose a task-based deep learning approach, referred to as
Deep-RLS, that unfolds the iterations of the well-known recursive least squares
(RLS) algorithm into the layers of a deep neural network in order to perform
nonlinear PCA. In particular, we formulate the nonlinear PCA for the blind
source separation (BSS) problem and show through numerical analysis that
Deep-RLS results in a significant improvement in the accuracy of recovering the
source signals in BSS when compared to the traditional RLS algorithm.
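To make the unfolding idea concrete, here is a minimal sketch (in PyTorch) that stacks iterations of the classical RLS-based nonlinear PCA rule into layers with one learnable forgetting factor per layer. The recursion follows the standard nonlinear-PCA/RLS update for whitened BSS observations; which quantities are made trainable, and the tanh nonlinearity, are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DeepRLSSketch(nn.Module):
    """Unrolls T iterations of an RLS-based nonlinear PCA rule into T layers.

    Illustrative sketch only: the per-layer learnable forgetting factor is an
    assumption about what deep unfolding would train, not the paper's exact design.
    """

    def __init__(self, n_sources: int, n_sensors: int, n_layers: int):
        super().__init__()
        self.n_sources, self.n_sensors, self.n_layers = n_sources, n_sensors, n_layers
        # One learnable forgetting factor per layer (passed through a sigmoid below).
        self.beta_logits = nn.Parameter(torch.zeros(n_layers))

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        """x_seq: (n_layers, n_sensors) sequence of whitened observations."""
        W = torch.eye(self.n_sources, self.n_sensors)  # separating matrix estimate
        P = torch.eye(self.n_sources)                  # inverse correlation estimate
        estimates = []
        for t in range(self.n_layers):
            beta = torch.sigmoid(self.beta_logits[t])  # forgetting factor in (0, 1)
            x = x_seq[t]
            y = W @ x                      # current source estimate
            z = torch.tanh(y)              # nonlinearity g(.)
            h = P @ z
            gain = h / (beta + z @ h)      # RLS gain vector
            P = (P - torch.outer(gain, h)) / beta
            e = x - W.T @ z                # reconstruction error
            W = W + torch.outer(gain, e)   # RLS weight update
            estimates.append(y)
        return torch.stack(estimates)      # per-layer source estimates
```

Training would then minimize a recovery loss (e.g., MSE against known sources on synthetic mixtures) through all layers, which is how the unrolled network can learn better forgetting factors than a hand-tuned RLS.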
Related papers
- Component-based Sketching for Deep ReLU Nets [55.404661149594375]
We develop a sketching scheme based on deep net components for various tasks.
We transform deep net training into a linear empirical risk minimization problem.
We show that the proposed component-based sketching provides almost optimal rates in approximating saturated functions.
arXiv Detail & Related papers (2024-09-21T15:30:43Z)
- Nonlinear model reduction for operator learning [1.0364028373854508]
We propose an efficient framework that combines neural networks with kernel principal component analysis (KPCA) for operator learning.
Our results demonstrate the superior performance of KPCA-DeepONet over POD-DeepONet.
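As a minimal illustration of the kernel-PCA reduction step on its own (the DeepONet coupling is omitted, and the snapshot matrix below is synthetic stand-in data), a nonlinear reduced basis can be obtained with scikit-learn as follows:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical snapshot matrix: 200 output functions sampled on a 64-point grid.
rng = np.random.default_rng(0)
snapshots = np.sin(np.outer(rng.uniform(1.0, 3.0, 200), np.linspace(0.0, np.pi, 64)))

# Nonlinear model reduction: project snapshots onto a low-dimensional
# kernel principal subspace instead of a linear POD basis.
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=1e-2, fit_inverse_transform=True)
latent = kpca.fit_transform(snapshots)          # (200, 8) reduced coordinates
reconstructed = kpca.inverse_transform(latent)  # lift back to the 64-point grid

print("relative reconstruction error:",
      np.linalg.norm(reconstructed - snapshots) / np.linalg.norm(snapshots))
```

In the operator-learning setting, a branch/trunk network would then be trained to map input functions to these reduced coordinates.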
arXiv Detail & Related papers (2024-03-27T16:24:26Z)
- On Sample-Efficient Offline Reinforcement Learning: Data Diversity, Posterior Sampling, and Beyond [29.449446595110643]
We propose a notion of data diversity that subsumes the previous notions of coverage measures in offline RL.
Our proposed model-free, posterior-sampling-based (PS-based) algorithm for offline RL is novel, with sub-optimality bounds that are frequentist (i.e., worst-case) in nature.
arXiv Detail & Related papers (2024-01-06T20:52:04Z)
- Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint [56.74058752955209]
This paper studies the alignment process of generative models with Reinforcement Learning from Human Feedback (RLHF).
We first identify the primary challenge of existing popular methods such as offline PPO and offline DPO: a lack of strategic exploration of the environment.
We propose efficient algorithms with finite-sample theoretical guarantees.
arXiv Detail & Related papers (2023-12-18T18:58:42Z)
- Deep Learning Meets Adaptive Filtering: A Stein's Unbiased Risk Estimator Approach [13.887632153924512]
We introduce task-based deep learning frameworks, denoted as Deep RLS and Deep EASI.
These architectures transform the iterations of the original algorithms into layers of a deep neural network, enabling efficient source signal estimation.
To further enhance performance, we propose training these deep unrolled networks using a surrogate loss function grounded in Stein's unbiased risk estimator (SURE).
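For reference, a generic Monte-Carlo SURE estimator for a Gaussian observation model y = x + n (known noise level sigma) looks as follows; this sketches the idea of training an unrolled network without clean targets, and is not necessarily the exact surrogate loss used for Deep RLS or Deep EASI.

```python
import torch

def monte_carlo_sure(f, y: torch.Tensor, sigma: float, eps: float = 1e-3) -> torch.Tensor:
    """Stein's unbiased risk estimate of E[||f(y) - x||^2] / n for y = x + N(0, sigma^2 I).

    The divergence term is approximated with the standard Monte-Carlo trick,
    so the estimator needs only the noisy observation y and the network f.
    """
    n = y.numel()
    fy = f(y)
    b = torch.randn_like(y)                        # random probe direction
    div = (b * (f(y + eps * b) - fy)).sum() / eps  # ~ divergence of f at y
    return (fy - y).pow(2).sum() / n - sigma**2 + (2 * sigma**2 / n) * div
```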
arXiv Detail & Related papers (2023-07-31T14:26:41Z)
- The Power of Learned Locally Linear Models for Nonlinear Policy Optimization [26.45568696453259]
This paper conducts a rigorous analysis of a simplified variant of this strategy, i.e., policy optimization with learned locally linear models, for general nonlinear systems.
We analyze an algorithm that iterates between estimating local linear models of the nonlinear system dynamics and performing $\mathtt{iLQR}$-like policy updates.
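The two alternating steps can be sketched as follows: a least-squares fit of a local linear model from rollout data, then a finite-horizon LQR gain computation. The function names and the quadratic cost are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_local_linear_model(X, U, X_next):
    """Least-squares fit of x' ~ A x + B u from rollout triples (one linearization)."""
    Z = np.hstack([X, U])                                # (N, n+m) regressors
    Theta, *_ = np.linalg.lstsq(Z, X_next, rcond=None)   # (n+m, n) coefficients
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T                      # A is (n, n), B is (n, m)

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon Riccati recursion; returns time-ordered feedback gains K_t."""
    P, gains = Q, []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]
```

An iLQR-like outer loop would alternate these two steps around the current trajectory, re-estimating (A, B) after each policy update.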
arXiv Detail & Related papers (2023-05-16T17:13:00Z)
- A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning [50.910152564914405]
Existing posterior sampling methods for reinforcement learning are limited by being model-based or by lacking worst-case theoretical guarantees beyond linear MDPs.
This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees.
arXiv Detail & Related papers (2022-08-23T12:21:01Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recover [87.28082715343896]
We consider deep neural networks for solving inverse problems, designed to be robust to forward-model mis-specification.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
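As background on the unfolding technique itself, below is a generic LISTA-style sketch: ISTA iterations for sparse recovery from y ≈ Ax unrolled into layers with per-layer learnable step sizes and thresholds. REST's robustness to forward-model mismatch is not modeled here, and the class and parameter choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """K unrolled ISTA iterations with per-layer learnable step sizes and thresholds."""

    def __init__(self, A: torch.Tensor, n_layers: int = 10):
        super().__init__()
        self.register_buffer("A", A)
        step0 = 1.0 / torch.linalg.matrix_norm(A, ord=2) ** 2  # 1 / ||A||_2^2
        self.steps = nn.Parameter(step0.repeat(n_layers))
        self.thresholds = nn.Parameter(0.1 * step0.repeat(n_layers))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        A = self.A
        x = torch.zeros(A.shape[1], device=y.device)
        for mu, lam in zip(self.steps, self.thresholds):
            grad = A.T @ (A @ x - y)   # gradient of 0.5 * ||A x - y||^2
            x = x - mu * grad          # gradient step
            x = torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)  # soft-threshold
        return x
```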
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Learned Robust PCA: A Scalable Deep Unfolding Approach for High-Dimensional Outlier Detection [23.687598836093333]
Robust principal component analysis (RPCA) is a critical tool in machine learning that detects outliers in the task of low-rank reconstruction.
In this paper, we propose a scalable and learnable approach for high-dimensional RPCA problems which we call LRPCA.
We show that LRPCA outperforms state-of-the-art RPCA algorithms, such as AltProj, on both synthetic datasets and real-world applications.
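The generic alternating structure behind such solvers can be sketched as one iteration of "threshold the outliers, then re-fit a low-rank term"; in a learned variant the threshold would become a per-layer trainable parameter. This is an illustrative AltProj-style step under those assumptions, not LRPCA's exact update.

```python
import numpy as np

def rpca_alternating_step(Y, L, zeta, rank):
    """One robust-PCA iteration on data Y with current low-rank estimate L.

    Hard-thresholds the residual to update the sparse outlier term S, then
    re-fits a rank-`rank` term to the outlier-corrected data.
    """
    R = Y - L
    S = R * (np.abs(R) > zeta)                    # keep only large-magnitude outliers
    U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # best rank-r approximation
    return L, S
```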
arXiv Detail & Related papers (2021-10-11T23:37:55Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also well suited to ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.