Conjugate Gradient Adaptive Learning with Tukey's Biweight M-Estimate
- URL: http://arxiv.org/abs/2203.10205v1
- Date: Sat, 19 Mar 2022 01:02:43 GMT
- Title: Conjugate Gradient Adaptive Learning with Tukey's Biweight M-Estimate
- Authors: Lu Lu, Yi Yu, Rodrigo C. de Lamare and Xiaomin Yang
- Abstract summary: We propose a novel M-estimate conjugate gradient (CG) algorithm, termed Tukey's biweight M-estimate CG (TbMCG)
In particular, the TbMCG algorithm achieves faster convergence while retaining reduced computational complexity.
Simulation results confirm the excellent performance of the proposed TbMCG algorithm for system identification and active noise control applications.
- Score: 35.60818658948953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel M-estimate conjugate gradient (CG) algorithm, termed
Tukey's biweight M-estimate CG (TbMCG), for system identification in impulsive
noise environments. In particular, the TbMCG algorithm achieves faster
convergence while retaining lower computational complexity than the
recursive least-squares (RLS) algorithm. Specifically, Tukey's biweight
M-estimate incorporates a constraint into the CG filter to tackle impulsive
noise environments. Moreover, the convergence behavior of the TbMCG algorithm
is analyzed. Simulation results confirm the excellent performance of the
proposed TbMCG algorithm for system identification and active noise control
applications.
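A minimal sketch of the robustness mechanism the abstract describes: Tukey's biweight weight function down-weights moderate errors and zeroes out large (impulsive) ones. The paper's CG filter is not reproduced here; the toy loop below applies the weight to a simple LMS-style update instead, and all constants (c = 4.685, step size, channel taps, outlier statistics) are illustrative assumptions.

```python
import numpy as np

def tukey_biweight_weight(e, c=4.685):
    """Tukey's biweight weight: w(e) = (1 - (e/c)^2)^2 for |e| <= c, else 0.
    Errors beyond the threshold c receive zero weight, which is what makes
    the M-estimate robust to impulsive noise."""
    e = np.asarray(e, dtype=float)
    return np.where(np.abs(e) <= c, (1.0 - (e / c) ** 2) ** 2, 0.0)

# Toy system identification in impulsive noise: a weighted LMS-style update
# (the paper uses a CG filter; this loop only illustrates the weighting).
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2])   # unknown system (illustrative)
w_hat = np.zeros(3)
mu = 0.05                             # step size (illustrative)
for _ in range(2000):
    x = rng.standard_normal(3)
    noise = 0.01 * rng.standard_normal()
    if rng.random() < 0.05:           # 5% impulsive outliers
        noise += 10.0 * rng.standard_normal()
    d = h_true @ x + noise            # desired signal
    e = d - w_hat @ x                 # a priori error
    w_hat += mu * tukey_biweight_weight(e) * e * x
```

Without the weight, the occasional outliers of magnitude ~10 would dominate the updates; with it, those samples are simply ignored and the filter converges to the true taps.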
Related papers
- Provably Efficient Information-Directed Sampling Algorithms for Multi-Agent Reinforcement Learning [50.92957910121088]
This work designs and analyzes a novel set of algorithms for multi-agent reinforcement learning (MARL) based on the principle of information-directed sampling (IDS)
For episodic two-player zero-sum MGs, we present three sample-efficient algorithms for learning Nash equilibrium.
We extend Reg-MAIDS to multi-player general-sum MGs and prove that it can learn either the Nash equilibrium or coarse correlated equilibrium in a sample efficient manner.
arXiv Detail & Related papers (2024-04-30T06:48:56Z) - Multi-kernel Correntropy-based Orientation Estimation of IMUs: Gradient Descent Methods [3.8286082196845466]
Correntropy-based gradient descent (CGD) and correntropy-based decoupled orientation estimation (CDOE) are proposed.
Traditional methods rely on the mean squared error (MSE) criterion, making them vulnerable to external acceleration and magnetic interference.
New algorithms demonstrate significantly lower computational complexity than Kalman filter-based approaches.
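The entry above contrasts the correntropy criterion with MSE. As a sketch of that criterion (the kernel widths and mixture weights below are illustrative assumptions, not the paper's values):

```python
import numpy as np

def multi_kernel_correntropy(e, sigmas=(0.5, 1.0, 2.0),
                             alphas=(1.0 / 3, 1.0 / 3, 1.0 / 3)):
    """Multi-kernel correntropy: a mixture of Gaussian kernels over the error e.
    Large errors (e.g. external-acceleration or magnetic spikes) contribute
    exponentially little, unlike the quadratic growth of the MSE criterion."""
    e = np.asarray(e, dtype=float)
    return sum(a * np.exp(-e ** 2 / (2.0 * s ** 2))
               for a, s in zip(alphas, sigmas))
```

Maximizing this quantity (rather than minimizing squared error) is what gives the estimators their outlier robustness: the criterion saturates as the error grows, so spikes barely move the gradient.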
arXiv Detail & Related papers (2023-04-13T13:57:33Z) - Active RIS-aided EH-NOMA Networks: A Deep Reinforcement Learning Approach [66.53364438507208]
An active reconfigurable intelligent surface (RIS)-aided multi-user downlink communication system is investigated.
Non-orthogonal multiple access (NOMA) is employed to improve spectral efficiency, and the active RIS is powered by energy harvesting (EH)
An advanced LSTM based algorithm is developed to predict users' dynamic communication state.
A DDPG based algorithm is proposed to jointly control the amplification matrix and phase-shift matrix of the RIS.
arXiv Detail & Related papers (2023-04-11T13:16:28Z) - Accelerated parallel MRI using memory efficient and robust monotone
operator learning (MOL) [24.975981795360845]
The main focus of this paper is to determine the utility of the monotone operator learning framework in the parallel MRI setting.
The benefits of this approach include similar guarantees as compressive sensing algorithms including uniqueness, convergence, and stability.
We validate the proposed scheme by comparing it with different unrolled algorithms in the context of accelerated parallel MRI for static and dynamic settings.
arXiv Detail & Related papers (2023-04-03T20:26:59Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Deep unfolding of the weighted MMSE beamforming algorithm [9.518010235273783]
We propose the novel application of deep unfolding to the weighted minimum mean-square error (WMMSE) algorithm for a MISO downlink channel.
Deep unfolding naturally incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability.
By means of simulations, we show that, in most of the settings, the unfolded WMMSE outperforms or performs equally to the WMMSE for a fixed number of iterations.
arXiv Detail & Related papers (2020-06-15T14:51:20Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z) - Isotropic SGD: a Practical Approach to Bayesian Posterior Sampling [18.64160180251004]
This work defines a unified mathematical framework to deepen our understanding of the role of stochastic gradient (SG) noise on the behavior of stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms.
Our formulation unlocks the design of a novel, practical approach to posterior sampling, which makes the SG noise isotropic using a fixed learning rate.
Our proposal is competitive with state-of-the-art SGMCMC methods, while being much more practical to use.
arXiv Detail & Related papers (2020-06-09T07:31:21Z)
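The last entry's isotropic-noise construction is not detailed in its abstract; as background, here is a minimal sketch of the standard SGLD update that SGMCMC methods build on, sampling a 1-D standard-normal posterior. The learning rate, burn-in, and iteration counts are illustrative choices, not values from the paper.

```python
import numpy as np

# SGLD update: theta <- theta - lr * grad U(theta) + sqrt(2*lr) * xi, xi ~ N(0, 1).
# Target posterior: standard normal, so U(theta) = theta^2 / 2 and grad U = theta.
rng = np.random.default_rng(1)
lr = 0.01          # fixed learning rate (illustrative)
theta = 3.0        # deliberately far from the posterior mode
samples = []
for t in range(20000):
    grad = theta                     # exact gradient; SGMCMC replaces this
    theta = theta - lr * grad + np.sqrt(2.0 * lr) * rng.standard_normal()
    if t >= 2000:                    # discard burn-in
        samples.append(theta)
samples = np.array(samples)
```

In true SGMCMC the gradient is a noisy minibatch estimate, and the noise is generally anisotropic; the cited paper's contribution is a practical way to make that SG noise isotropic at a fixed learning rate.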
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.