Study of Robust Sparsity-Aware RLS algorithms with Jointly-Optimized
Parameters for Impulsive Noise Environments
- URL: http://arxiv.org/abs/2204.08990v1
- Date: Sat, 9 Apr 2022 01:13:26 GMT
- Title: Study of Robust Sparsity-Aware RLS algorithms with Jointly-Optimized
Parameters for Impulsive Noise Environments
- Authors: Y. Yu, L. Lu, Y. Zakharov, R. C. de Lamare and B. Chen
- Abstract summary: The proposed algorithm generalizes multiple existing algorithms simply by replacing the specified robustness criterion and sparsity-aware penalty.
By jointly optimizing the forgetting factor and the sparsity penalty parameter, we develop the jointly-optimized S-RRLS (JO-S-RRLS) algorithm.
Simulations in impulsive noise scenarios demonstrate that the proposed S-RRLS and JO-S-RRLS algorithms outperform existing techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a unified sparsity-aware robust recursive least-squares
(S-RRLS) algorithm for the identification of sparse systems under impulsive
noise. The proposed algorithm generalizes multiple existing algorithms simply by
replacing the specified robustness criterion and sparsity-aware penalty. Furthermore,
by jointly optimizing the forgetting factor and the sparsity penalty parameter,
we develop the jointly-optimized S-RRLS (JO-S-RRLS) algorithm, which not only
exhibits low misadjustment but also tracks sudden changes of a sparse system
well. Simulations in impulsive noise scenarios demonstrate that the proposed
S-RRLS and JO-S-RRLS algorithms outperform existing techniques.
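To make the recursion concrete, below is a minimal illustrative sketch of one sparsity-aware robust RLS update, assuming a Huber-type robustness weight and a zero-attracting l1 penalty; the function name, the weighting rule and the fixed parameters lam, rho and delta are illustrative choices rather than the exact S-RRLS recursion, and the JO-S-RRLS variant would additionally adapt the forgetting factor and the penalty weight online.

```python
import numpy as np

def s_rrls_step(w, P, x, d, lam=0.995, rho=1e-3, delta=1.345):
    """One illustrative sparsity-aware robust RLS update (sketch, not the paper's exact recursion).

    w   : current weight estimate, shape (M,)
    P   : inverse correlation matrix estimate, shape (M, M)
    x   : input regressor, shape (M,)
    d   : desired sample (scalar)
    lam : forgetting factor (0 < lam <= 1)
    rho : sparsity penalty weight (zero-attracting l1 term)
    delta : Huber threshold for down-weighting impulsive errors
    """
    e = d - w @ x                                    # a priori error
    # Huber-type robustness weight: large (impulsive) errors get a small weight
    q = 1.0 if abs(e) <= delta else delta / abs(e)
    Px = P @ x
    k = (q * Px) / (lam + q * (x @ Px))              # robust gain vector
    P = (P - np.outer(k, Px)) / lam                  # inverse-correlation update with forgetting
    # Weight update: robust error-correction term plus zero-attracting l1 penalty
    w = w + k * e - rho * np.sign(w)
    return w, P
```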
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
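As a rough illustration of the combined objective described above, the sketch below adds an SFT log-likelihood term to a DPO-style preference loss; the function name, the sigmoid-based preference term and the weights beta and alpha are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def regularized_preference_loss(logp_chosen, logp_rejected,
                                ref_logp_chosen, ref_logp_rejected,
                                beta=0.1, alpha=1.0):
    """Illustrative DPO-style preference loss plus an SFT log-likelihood regularizer.

    Arguments are summed token log-probabilities of the chosen/rejected
    responses under the policy and under a frozen reference model.
    """
    # Preference (DPO-style) term: prefer the chosen over the rejected response
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    pref_loss = np.logaddexp(0.0, -margin)   # numerically stable -log sigmoid(margin)
    # SFT term: keep likelihood of the chosen response high (implicit regularizer)
    sft_loss = -logp_chosen
    return pref_loss + alpha * sft_loss
```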
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Parameter optimization comparison in QAOA using Stochastic Hill Climbing with Random Re-starts and Local Search with entangled and non-entangled mixing operators [0.0]
This study investigates the efficacy of Stochastic Hill Climbing with Random Restarts (SHC-RR) compared to Local Search (LS) strategies.
Our results consistently show that SHC-RR outperforms LS approaches, showcasing superior efficacy despite its ostensibly simpler optimization mechanism.
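A minimal sketch of stochastic hill climbing with random restarts, as it might be applied to a QAOA angle-optimization objective, is given below; the function signature, step size and restart counts are illustrative assumptions rather than the study's exact setup.

```python
import numpy as np

def shc_rr(objective, dim, n_restarts=10, n_steps=200, step=0.1, rng=None):
    """Illustrative stochastic hill climbing with random restarts.

    objective : maps a parameter vector (e.g. QAOA angles) to a cost to be
                minimized (e.g. the expected energy of the mixed ansatz).
    """
    rng = rng or np.random.default_rng()
    best_x, best_f = None, np.inf
    for _ in range(n_restarts):
        x = rng.uniform(0.0, 2 * np.pi, size=dim)        # random restart point
        f = objective(x)
        for _ in range(n_steps):
            cand = x + rng.normal(0.0, step, size=dim)    # random local move
            f_cand = objective(cand)
            if f_cand < f:                                # accept only improvements
                x, f = cand, f_cand
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```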
arXiv Detail & Related papers (2024-05-14T20:12:17Z) - Hyperparameter Estimation for Sparse Bayesian Learning Models [1.0172874946490507]
Sparse Bayesian Learning (SBL) models are extensively used in signal processing and machine learning for promoting sparsity through hierarchical priors.
This paper presents a framework for the improvement of SBL models for various objective functions.
A novel algorithm is introduced showing enhanced efficiency, especially at low signal-to-noise ratios.
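For context, the sketch below shows the classical (Tipping-style) evidence-based update of the SBL hyperparameters, the per-coefficient precisions alpha and the noise precision beta induced by the hierarchical prior; it only illustrates which hyperparameters such frameworks estimate and is not the improved algorithm proposed in the cited paper.

```python
import numpy as np

def sbl_hyperparam_step(Phi, y, alpha, beta):
    """One classical SBL hyperparameter re-estimation step for y = Phi @ w + noise,
    with prior w_i ~ N(0, 1/alpha_i) and noise precision beta."""
    N, M = Phi.shape
    # Posterior over the weights given the current hyperparameters
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ y
    # "Well-determinedness" of each coefficient
    gamma = 1.0 - alpha * np.diag(Sigma)
    # Evidence-based re-estimation of the hyperparameters
    alpha_new = gamma / (mu ** 2 + 1e-12)
    beta_new = (N - gamma.sum()) / (np.linalg.norm(y - Phi @ mu) ** 2 + 1e-12)
    return alpha_new, beta_new, mu
```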
arXiv Detail & Related papers (2024-01-04T21:24:01Z) - A Comparative Study of Deep Learning and Iterative Algorithms for Joint Channel Estimation and Signal Detection in OFDM Systems [11.190815358585137]
Joint channel estimation and signal detection is crucial in orthogonal frequency-division multiplexing (OFDM) systems.
Traditional algorithms perform poorly in low signal-to-noise ratio (SNR) scenarios.
Deep learning (DL) methods have been investigated, but concerns regarding computational expense and lack of validation in low-SNR settings remain.
arXiv Detail & Related papers (2023-03-07T06:34:04Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Sparsity-Aware Robust Normalized Subband Adaptive Filtering algorithms
based on Alternating Optimization [27.43948386608]
We propose a unified sparsity-aware robust normalized subband adaptive filtering (SA-RNSAF) algorithm for identification of sparse systems under impulsive noise.
The proposed SA-RNSAF algorithm generalizes different algorithms by defining the robust criterion and sparsity-aware penalty.
arXiv Detail & Related papers (2022-05-15T03:38:13Z) - Study of Proximal Normalized Subband Adaptive Algorithm for Acoustic
Echo Cancellation [23.889870461547105]
We propose a novel normalized subband adaptive filter algorithm suited for sparse scenarios.
The proposed algorithm is derived based on the proximal forward-backward splitting and the soft-thresholding methods.
We analyze the mean and mean-square behaviors of the algorithm, and the analysis is supported by simulations.
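The core building block referenced above is the soft-thresholding operator, the proximal map of the l1 norm used in forward-backward splitting; the generic sketch below shows a single proximal-gradient step and does not reproduce the subband structure or the authors' exact recursion.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def proximal_gradient_step(w, grad, mu, tau):
    """One forward-backward (proximal gradient) step: a gradient step of size mu
    on the data-fit term, followed by soft-thresholding."""
    return soft_threshold(w - mu * grad, mu * tau)
```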
arXiv Detail & Related papers (2021-08-14T22:20:09Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Fast Distributionally Robust Learning with Variance Reduced Min-Max
Optimization [85.84019017587477]
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
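For reference, a generic extra-gradient step for a smooth min-max problem min_x max_y f(x, y) looks as follows; this is only the textbook update with illustrative names, not the paper's specific Wasserstein DRSL solver.

```python
def extragradient_step(x, y, grad_x, grad_y, eta):
    """One extra-gradient step for min_x max_y f(x, y).

    grad_x(x, y) and grad_y(x, y) return the partial gradients of f,
    and eta is the step size.
    """
    # Extrapolation (look-ahead) point
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # Update using gradients evaluated at the look-ahead point
    x_new = x - eta * grad_x(x_half, y_half)
    y_new = y + eta * grad_y(x_half, y_half)
    return x_new, y_new
```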
arXiv Detail & Related papers (2021-04-27T16:56:09Z) - Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement-learning-based zeroth-order (ZO) algorithm (ZO-RL) that learns the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that our ZO-RL algorithm can effectively reduce the variances of ZO gradient by learning a sampling policy, and converge faster than existing ZO algorithms in different scenarios.
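As background, the baseline that ZO-RL improves on is the standard two-point zeroth-order gradient estimator with random Gaussian perturbations, sketched below with illustrative parameter choices; the learned sampling policy itself is not reproduced.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x using random
    Gaussian perturbation directions (the random-sampling baseline)."""
    rng = rng or np.random.default_rng()
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.normal(size=d)                                   # random direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u    # finite-difference slope
    return g / n_dirs
```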
arXiv Detail & Related papers (2021-04-09T14:50:59Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
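As a rough illustration of the entropy-regularized relaxation behind such methods, the sketch below replaces the hard max of max-product message passing in a pairwise model with a temperature-controlled log-sum-exp; the function and variable names are assumptions for illustration, and the accelerated randomized scheme of the paper is not reproduced.

```python
import numpy as np

def soft_message(theta_ij, theta_j, incoming, temp=1.0):
    """Entropy-regularized message from node j to node i in a pairwise model.

    theta_ij : pairwise potentials, shape (n_states_j, n_states_i)
    theta_j  : unary potentials of node j, shape (n_states_j,)
    incoming : sum of messages into j from its other neighbors, shape (n_states_j,)
    temp     : regularization temperature; temp -> 0 recovers the hard max
    """
    scores = theta_ij + (theta_j + incoming)[:, None]
    # Soft-max (log-sum-exp) over the states of j instead of a hard max
    return temp * np.logaddexp.reduce(scores / temp, axis=0)
```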
arXiv Detail & Related papers (2020-07-01T18:43:32Z)