A Generalized Proportionate-Type Normalized Subband Adaptive Filter
- URL: http://arxiv.org/abs/2111.08952v1
- Date: Wed, 17 Nov 2021 07:49:38 GMT
- Title: A Generalized Proportionate-Type Normalized Subband Adaptive Filter
- Authors: Kuan-Lin Chen, Ching-Hua Lee, Bhaskar D. Rao, Harinath Garudadri
- Abstract summary: We show that a new design criterion, i.e., the least squares on subband errors regularized by a weighted norm, can be used to generalize the proportionate-type normalized subband adaptive filtering (PtNSAF) framework.
The impact of the proposed generalized PtNSAF (GPtNSAF) is studied for the system identification problem via computer simulations.
- Score: 25.568699776077164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that a new design criterion, i.e., the least squares on subband
errors regularized by a weighted norm, can be used to generalize the
proportionate-type normalized subband adaptive filtering (PtNSAF) framework.
The new criterion directly penalizes subband errors and includes a sparsity
penalty term which is minimized using the damped regularized Newton's method.
The impact of the proposed generalized PtNSAF (GPtNSAF) is studied for the
system identification problem via computer simulations. Specifically, we study
the effects of using different numbers of subbands and various sparsity penalty
terms for quasi-sparse, sparse, and dispersive systems. The results show that
the benefit of increasing the number of subbands is larger than promoting
sparsity of the estimated filter coefficients when the target system is
quasi-sparse or dispersive. On the other hand, for sparse target systems,
promoting sparsity becomes more important. More importantly, the two aspects
provide complementary and additive benefits to the GPtNSAF for speeding up
convergence.
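As background, here is a minimal NumPy sketch of the proportionate-type normalized update that the PtNSAF family builds on (a fullband PNLMS-style step; the proposed GPtNSAF additionally filters the error into subbands and applies a damped regularized Newton step, which is not reproduced here):

```python
import numpy as np

def pnlms_step(w, x_buf, d, mu=0.5, rho=0.01, delta=1e-4):
    """One proportionate-type normalized update (PNLMS-style sketch).

    w:     current filter estimate, shape (M,)
    x_buf: most recent M input samples, newest first, shape (M,)
    d:     desired (target-system) output sample
    """
    # Proportionate gains: larger-magnitude taps get larger step sizes,
    # which speeds convergence when the target system is sparse.
    g = np.maximum(np.abs(w), rho * max(np.abs(w).max(), delta))
    g /= g.sum()                          # diagonal of the gain matrix G
    e = d - w @ x_buf                     # a priori error
    w = w + mu * e * (g * x_buf) / (x_buf @ (g * x_buf) + delta)
    return w, e
```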
Related papers
- A Pre-Training and Adaptive Fine-Tuning Framework for Graph Anomaly Detection [67.77204352386897]
Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet it remains challenging due to the scarcity of abnormal nodes and the high cost of label annotations.
We propose PAF, a framework specifically designed for GAD that combines low- and high-pass filters in the pre-training phase to capture the full spectrum of frequency information in node features.
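For illustration only, one common way to realize low- and high-pass graph filters from a normalized Laplacian (the precise filters used by PAF may differ):

```python
import numpy as np

def graph_low_high_pass(A, X, eps=1e-12):
    """Low-/high-pass filtering of node features X with adjacency A."""
    d = A.sum(axis=1)
    D_is = np.diag(1.0 / np.sqrt(np.maximum(d, eps)))
    L = np.eye(A.shape[0]) - D_is @ A @ D_is   # normalized Laplacian
    low = (np.eye(A.shape[0]) - L) @ X         # smooths features over edges
    high = L @ X                               # keeps local deviations
    return low, high
```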
arXiv Detail & Related papers (2025-04-19T09:57:35Z)
- Graph-Structured Driven Dual Adaptation for Mitigating Popularity Bias [29.518103753073145]
Popularity bias challenges recommender systems by causing uneven recommendation performance and amplifying the Matthew effect.
Existing supervised alignment and reweighting methods mitigate this bias but have key limitations.
We propose the Graph-Structured Dual Adaptation Framework (GSDA) to address these issues.
arXiv Detail & Related papers (2025-03-30T08:26:29Z)
- A Unified Bayesian Perspective for Conventional and Robust Adaptive Filters [15.640261000544077]
We present a new perspective on the origin and interpretation of adaptive filters.
Within a unified framework, we present derivations of many adaptive filters, which differ according to the probabilistic model of the observational noise.
Numerical examples are shown to illustrate the properties and provide a better insight into the performance of the derived adaptive filters.
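A toy illustration of that dependence, assuming the standard correspondence between the noise likelihood and the update rule (Gaussian noise yields an LMS-type step, Laplacian noise a robust sign-error step); this is background, not the paper's derivation:

```python
import numpy as np

def noise_driven_step(w, x, d, mu, noise="gaussian"):
    """Stochastic-gradient step under two observational-noise models."""
    e = d - w @ x
    if noise == "gaussian":         # quadratic loss -> LMS-type update
        return w + mu * e * x
    return w + mu * np.sign(e) * x  # Laplacian noise -> sign-error update
```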
arXiv Detail & Related papers (2025-02-25T16:20:10Z)
- Adversarial Transform Particle Filters [11.330617592263744]
The particle filter (PF) and the ensemble Kalman filter (EnKF) are widely used for approximate inference in state-space models.
We propose the Adversarial Transform Particle Filter (ATPF), a novel filtering framework that combines the strengths of the PF and the EnKF through adversarial learning.
arXiv Detail & Related papers (2025-02-10T05:31:35Z)
- Rule-Based Modeling of Low-Dimensional Data with PCA and Binary Particle Swarm Optimization (BPSO) in ANFIS [0.29465623430708915]
Fuzzy rule-based systems interpret data in low-dimensional domains, providing transparency and interpretability.
Deep learning excels in complex tasks but is prone to overfitting in sparse, unstructured, or low-dimensional data.
Such interpretability is crucial in fields like healthcare and finance.
arXiv Detail & Related papers (2025-02-06T09:13:55Z)
- Contrastive CFG: Improving CFG in Diffusion Models by Contrasting Positive and Negative Concepts [55.298031232672734]
Classifier-Free Guidance (CFG) has proven effective in conditional diffusion model sampling for improved condition alignment.
We present a novel method to enhance negative CFG guidance using contrastive loss.
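For reference, the standard classifier-free guidance combination the paper starts from (the contrastive treatment of negative concepts is the paper's contribution and is not shown):

```python
def cfg_combine(eps_uncond, eps_cond, guidance_scale=7.5):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```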
arXiv Detail & Related papers (2024-11-26T03:29:27Z)
- Minimum norm interpolation by perceptra: Explicit regularization and implicit bias [0.3499042782396683]
We investigate how shallow ReLU networks interpolate between known regions.
We numerically study the implicit bias of common optimization algorithms towards known minimum norm interpolants.
arXiv Detail & Related papers (2023-11-10T15:55:47Z)
- An adaptive ensemble filter for heavy-tailed distributions: tuning-free inflation and localization [0.3749861135832072]
Heavy tails are a common feature of filtering distributions, resulting from the nonlinear dynamical and observation processes.
We propose an algorithm to estimate the prior-to-posterior update from samples of the joint forecast distribution of the states and observations.
We demonstrate the benefits of this new ensemble filter on challenging filtering problems.
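As context, a generic ensemble update estimated from joint forecast samples of states X and observations Y (an EnKF-style linear map; the paper's heavy-tailed, tuning-free update generalizes this):

```python
import numpy as np

def ensemble_update(X, Y, y_obs):
    """Linear prior-to-posterior map fitted from joint samples (X_i, Y_i).

    X: (N, dx) forecast states; Y: (N, dy) forecast observations
    (typically already perturbed with observation noise).
    """
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    K = (Xc.T @ Yc) @ np.linalg.inv(Yc.T @ Yc)  # Kalman-type gain
    return X + (y_obs - Y) @ K.T                # shift members toward y_obs
```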
arXiv Detail & Related papers (2023-10-12T21:56:14Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
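For background, a minimal vanilla-AIS sketch with a geometric annealing path; the paper's contribution is a constant-rate schedule replacing the fixed `betas` grid:

```python
def ais(x, log_p0, log_p1, mcmc_step, betas):
    """Return a sample and its log importance weight after annealing
    from log_p0 (tractable) to log_p1 (unnormalized target)."""
    logw = 0.0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += (b - b_prev) * (log_p1(x) - log_p0(x))  # weight increment
        # Move x under the intermediate density p0^(1-b) * p1^b.
        x = mcmc_step(x, lambda z, b=b: (1 - b) * log_p0(z) + b * log_p1(z))
    return x, logw
```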
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Focus Your Attention (with Adaptive IIR Filters) [62.80628327613344]
We present a new layer in which dynamic (i.e., input-dependent) Infinite Impulse Response (IIR) filters of order two are used to process the input sequence.
Despite their relatively low order, the causal adaptive filters are shown to focus attention on the relevant sequence elements.
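A plain (static-coefficient) second-order IIR filter for reference; in the proposed layer the coefficients would instead be predicted from the input:

```python
def iir2(x, b0, b1, b2, a1, a2):
    """Naive direct-form second-order IIR:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y
```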
arXiv Detail & Related papers (2023-05-24T09:42:30Z)
- Penalising the biases in norm regularisation enforces sparsity [28.86954341732928]
This work shows that the parameter norm required to represent a function is given by the total variation of its second derivative, weighted by a $\sqrt{1+x^2}$ factor.
Notably, this weighting factor disappears when the norm of bias terms is not regularised.
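Read schematically, the summary's claim is that the required parameter norm behaves like a weighted total variation of the second derivative,

$$\|f\| \;\propto\; \int \sqrt{1+x^{2}}\,\bigl|f''(x)\bigr|\,\mathrm{d}x,$$

with the $\sqrt{1+x^{2}}$ weight vanishing when bias norms are left unregularised (a paraphrase of the abstract, not the paper's exact statement).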
arXiv Detail & Related papers (2023-03-02T15:33:18Z)
- Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z)
- Study of General Robust Subband Adaptive Filtering [47.29178517675426]
We propose a general robust subband adaptive filtering (GR-SAF) scheme against impulsive noise.
By choosing different scaling factors, such as those derived from the M-estimate and maximum-correntropy robust criteria, different GR-SAF algorithms are easily obtained.
The proposed GR-SAF algorithm can be reduced to a variable regularization robust normalized SAF algorithm, thus having fast convergence rate and low steady-state error.
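Two illustrative scaling factors of the kind such a scheme can plug in (hypothetical forms; consult the paper for the exact definitions):

```python
import numpy as np

def scaling_factor(e, kind="mcc", sigma=1.0, c=1.345):
    """Error-dependent gain that suppresses impulsive-noise outliers."""
    if kind == "mcc":                        # maximum-correntropy weighting
        return np.exp(-(e ** 2) / (2.0 * sigma ** 2))
    # Huber M-estimate weighting: unit gain for small errors, shrunk otherwise.
    ae = np.maximum(np.abs(e), 1e-12)
    return np.where(ae <= c, 1.0, c / ae)
```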
arXiv Detail & Related papers (2022-08-04T01:39:03Z)
- Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning methods introduce structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method can achieve competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
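The norm-based baseline mentioned above can be sketched in a few lines (the paper's dependency-aware mechanism replaces this static criterion):

```python
import numpy as np

def prune_filters_by_l1(conv_w, keep_ratio=0.5):
    """Keep the conv filters with the largest L1 norms.
    conv_w: weights of shape (out_channels, in_channels, kh, kw)."""
    scores = np.abs(conv_w).reshape(conv_w.shape[0], -1).sum(axis=1)
    k = max(1, int(round(keep_ratio * conv_w.shape[0])))
    keep = np.sort(np.argsort(scores)[-k:])   # indices of filters kept
    return conv_w[keep], keep
```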
arXiv Detail & Related papers (2020-05-06T07:41:22Z)