Sparsity-Aware Robust Normalized Subband Adaptive Filtering algorithms
based on Alternating Optimization
- URL: http://arxiv.org/abs/2205.07172v1
- Date: Sun, 15 May 2022 03:38:13 GMT
- Title: Sparsity-Aware Robust Normalized Subband Adaptive Filtering algorithms
based on Alternating Optimization
- Authors: Yi Yu, Zongxin Huang, Hongsen He, Yuriy Zakharov and Rodrigo C. de
Lamare
- Abstract summary: We propose a unified sparsity-aware robust normalized subband adaptive filtering (SA-RNSAF) algorithm for identification of sparse systems under impulsive noise.
The proposed SA-RNSAF algorithm generalizes different algorithms by defining the robust criterion and sparsity-aware penalty.
- Score: 27.43948386608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a unified sparsity-aware robust normalized subband
adaptive filtering (SA-RNSAF) algorithm for identification of sparse systems
under impulsive noise. The proposed SA-RNSAF algorithm generalizes different
algorithms by defining the robust criterion and sparsity-aware penalty.
Furthermore, by alternating optimization of the parameters (AOP) of the
algorithm, including the step-size and the sparsity penalty weight, we develop
the AOP-SA-RNSAF algorithm, which not only exhibits fast convergence but also
obtains low steady-state misadjustment for sparse systems. Simulations in
various noise scenarios have verified that the proposed AOP-SA-RNSAF algorithm
outperforms existing techniques.
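For a concrete picture of the kind of update the abstract describes, here is a minimal NumPy sketch of a single SA-RNSAF-style iteration. It assumes the decimated subband regressors and desired samples are already available, and it uses a Huber-style error weighting as the robust criterion and a zero-attracting l1 term as the sparsity-aware penalty; these are illustrative stand-ins, not necessarily the exact choices made in the paper.

```python
import numpy as np

def sa_rnsaf_step(w, U, d, mu=0.5, rho=1e-4, xi=1.345, eps=1e-8):
    """One illustrative SA-RNSAF-style update (sketch, not the paper's code).

    w   : (M,)   current filter coefficient estimate
    U   : (N, M) decimated regressor vectors, one row per subband
    d   : (N,)   decimated desired samples, one per subband
    mu  : step size
    rho : sparsity penalty weight (zero-attraction strength)
    xi  : threshold of the Huber-style robust weighting
    """
    e = d - U @ w                                    # subband a priori errors
    # Robust criterion: down-weight errors that look impulsive.
    q = np.where(np.abs(e) <= xi, 1.0, xi / (np.abs(e) + eps))
    power = np.sum(U**2, axis=1) + eps               # per-subband input power
    # Normalized, robustly weighted subband correction.
    w = w + mu * U.T @ (q * e / power)
    # Sparsity-aware penalty: zero-attracting l1 subgradient step.
    return w - rho * np.sign(w)
```

In the AOP-SA-RNSAF variant, the abstract indicates that mu and rho would themselves be re-optimized at each iteration rather than kept fixed, which is what yields the fast convergence and low steady-state misadjustment reported above.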
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Floorplanning of VLSI by Mixed-Variable Optimization [42.82770651937298]
This paper proposes memetic algorithms to solve mixed-variable floorplanning problems.
The proposed algorithms are superior to some celebrated B*-tree based floorplanning algorithms.
arXiv Detail & Related papers (2024-01-27T06:34:16Z)
- A Hybrid SFANC-FxNLMS Algorithm for Active Noise Control based on Deep Learning [17.38644275080562]
The filtered-X normalized least-mean-square (FxNLMS) algorithm can achieve lower steady-state errors through adaptive optimization.
This paper proposes a hybrid SFANC-FxNLMS approach to overcome the adaptive algorithm's slow convergence.
Experiments show that the hybrid SFANC-FxNLMS algorithm can achieve a rapid response time, a low noise reduction error, and a high degree of robustness.
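As a rough illustration of the adaptive half of that hybrid scheme, the following is a textbook-style FxNLMS update written in NumPy. The buffer names and the way the error signal is obtained are assumptions made for the sketch; the paper's actual implementation may differ.

```python
import numpy as np

def fxnlms_step(w, x_buf, xf_buf, e, mu=0.1, eps=1e-8):
    """One textbook-style FxNLMS update (illustrative sketch, not the paper's code).

    w      : (L,) adaptive control-filter coefficients
    x_buf  : (L,) most recent reference-signal samples (newest first)
    xf_buf : (L,) reference filtered through the secondary-path estimate
    e      : residual error measured at the error microphone
    """
    y = w @ x_buf                                    # anti-noise sample to emit
    # Normalized LMS step driven by the *filtered* reference ("filtered-X").
    w = w + mu * e * xf_buf / (xf_buf @ xf_buf + eps)
    return w, y
```

In the hybrid scheme summarized above, the deep-learning SFANC stage would supply a good initial control filter, and an FxNLMS-style refinement of this kind would then reduce the remaining steady-state error.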
arXiv Detail & Related papers (2022-08-17T05:42:39Z)
- Study of General Robust Subband Adaptive Filtering [47.29178517675426]
We propose a general robust subband adaptive filtering (GR-SAF) scheme against impulsive noise.
By choosing different scaling factors such as from the M-estimate and maximum correntropy robust criteria, we can easily obtain different GR-SAF algorithms.
The proposed GR-SAF algorithm can be reduced to a variable-regularization robust normalized SAF algorithm, thus achieving a fast convergence rate and low steady-state error.
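The summary above says the robust criterion enters only through a scaling factor on the error. As a hedged illustration, two commonly used choices of such a factor look roughly like this; the exact forms and parameter names in the GR-SAF paper may differ.

```python
import numpy as np

def m_estimate_scale(e, xi=2.576):
    """M-estimate (Huber-like) scaling: unit weight for small errors,
    shrinking weight for large, likely impulsive, errors."""
    return np.where(np.abs(e) <= xi, 1.0, xi / np.abs(e))

def correntropy_scale(e, sigma=1.0):
    """Maximum-correntropy scaling: a Gaussian kernel of width sigma."""
    return np.exp(-e**2 / (2.0 * sigma**2))
```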
arXiv Detail & Related papers (2022-08-04T01:39:03Z)
- Study of Robust Sparsity-Aware RLS algorithms with Jointly-Optimized Parameters for Impulsive Noise Environments [0.0]
The proposed algorithm generalizes multiple algorithms simply by specifying the robustness criterion and the sparsity-aware penalty.
By jointly optimizing the forgetting factor and the sparsity penalty parameter, we develop the jointly-optimized S-RRLS (JO-S-RRLS) algorithm.
Simulations in impulsive noise scenarios demonstrate that the proposed S-RRLS and JO-S-RRLS algorithms outperform existing techniques.
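To make the recursion concrete, a heavily simplified robust, sparsity-aware RLS step might look as follows. The correntropy-style weight, the zero-attraction term, and all parameter names are assumptions for illustration rather than the S-RRLS equations themselves.

```python
import numpy as np

def robust_sparse_rls_step(w, P, x, d, lam=0.995, rho=1e-4, sigma=1.0, eps=1e-8):
    """One simplified robust, sparsity-aware RLS update (illustrative sketch).

    w   : (M,) filter estimate         P   : (M, M) inverse correlation matrix
    x   : (M,) input regressor         d   : desired sample
    lam : forgetting factor            rho : sparsity penalty weight
    """
    e = d - w @ x                                       # a priori error
    q = np.exp(-e**2 / (2.0 * sigma**2))                # robust weight on this sample
    k = (q * (P @ x)) / (lam + q * (x @ P @ x) + eps)   # weighted gain vector
    w = w + k * e - rho * np.sign(w)                    # RLS step plus zero attraction
    P = (P - np.outer(k, x @ P)) / lam                  # update of the inverse matrix
    return w, P
```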
arXiv Detail & Related papers (2022-04-09T01:13:26Z)
- Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z)
- Study of Proximal Normalized Subband Adaptive Algorithm for Acoustic Echo Cancellation [23.889870461547105]
We propose a novel normalized subband adaptive filter algorithm suited for sparse scenarios.
The proposed algorithm is derived based on the proximal forward-backward splitting and the soft-thresholding methods.
We analyze the mean and mean-square behaviors of the algorithm, and the analysis is supported by simulations.
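The proximal/soft-thresholding ingredient mentioned in that summary is easy to show in isolation. The snippet below is a generic sketch of the l1 proximal operator and of the forward-backward pattern, not the paper's exact derivation.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of the l1 norm: shrink every coefficient toward zero
    and set coefficients with magnitude below tau exactly to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

# Forward-backward splitting pattern (illustrative):
#   forward  step: a normalized subband (NSAF-style) gradient update of w
#   backward step: the soft-thresholding proximal step above
w = np.array([0.8, -0.03, 0.0, 0.5, 0.01])
print(soft_threshold(w, tau=0.05))   # small coefficients are driven exactly to zero
```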
arXiv Detail & Related papers (2021-08-14T22:20:09Z)
- Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proven to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)