A Generalized Proportionate-Type Normalized Subband Adaptive Filter
- URL: http://arxiv.org/abs/2111.08952v1
- Date: Wed, 17 Nov 2021 07:49:38 GMT
- Title: A Generalized Proportionate-Type Normalized Subband Adaptive Filter
- Authors: Kuan-Lin Chen, Ching-Hua Lee, Bhaskar D. Rao, Harinath Garudadri
- Abstract summary: We show that a new design criterion, i.e., the least squares on subband errors regularized by a weighted norm, can be used to generalize the proportionate-type normalized subband adaptive filtering (PtNSAF) framework.
The impact of the proposed generalized PtNSAF (GPtNSAF) is studied for the system identification problem via computer simulations.
- Score: 25.568699776077164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that a new design criterion, i.e., the least squares on subband
errors regularized by a weighted norm, can be used to generalize the
proportionate-type normalized subband adaptive filtering (PtNSAF) framework.
The new criterion directly penalizes subband errors and includes a sparsity
penalty term which is minimized using the damped regularized Newton's method.
The impact of the proposed generalized PtNSAF (GPtNSAF) is studied for the
system identification problem via computer simulations. Specifically, we study
the effects of using different numbers of subbands and various sparsity penalty
terms for quasi-sparse, sparse, and dispersive systems. The results show that
the benefit of increasing the number of subbands is larger than promoting
sparsity of the estimated filter coefficients when the target system is
quasi-sparse or dispersive. On the other hand, for sparse target systems,
promoting sparsity becomes more important. More importantly, the two aspects
provide complementary and additive benefits to the GPtNSAF for speeding up
convergence.
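To make the framework being generalized concrete, here is a minimal, illustrative sketch of a classical proportionate-type NSAF applied to system identification. It is not the GPtNSAF update from the paper: the function name `ptnsaf_identify`, its parameters (`num_bands`, `mu`, `delta`, `rho`), the toy cosine-modulated analysis bank, and the PNLMS-style proportionate gains are assumptions made here for illustration, and the sketch uses a gradient-type normalized update rather than the paper's damped regularized Newton step on the weighted-norm-regularized subband least-squares criterion.

```python
import numpy as np

def ptnsaf_identify(x, d, num_taps, num_bands=4, mu=0.5, delta=1e-3, rho=0.01):
    """Illustrative proportionate-type normalized subband adaptive filter.

    Identifies an FIR system from input x and desired signal d. The analysis
    bank, gain rule, and step sizes are toy choices, not those of the paper.
    """
    # Toy cosine-modulated analysis bank built from a windowed-sinc prototype.
    L = 8 * num_bands
    t = np.arange(L) - (L - 1) / 2
    proto = np.sinc(t / num_bands) * np.hamming(L)
    bank = np.array([proto * np.cos(np.pi / num_bands * (i + 0.5) * t)
                     for i in range(num_bands)])

    w = np.zeros(num_taps)               # adaptive filter coefficients
    x_buf = np.zeros(num_taps + L - 1)   # input history (newest sample first)
    d_buf = np.zeros(L)                  # desired-signal history

    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        d_buf = np.roll(d_buf, 1); d_buf[0] = d[n]
        if n % num_bands:                # update once per decimated sample
            continue

        # PNLMS-style proportionate gains: larger taps get larger step sizes,
        # which is the mechanism that favors sparse coefficient estimates.
        g = np.maximum(rho * np.max(np.abs(w) + 1e-12), np.abs(w)) + 1e-12
        g /= g.sum()

        # Accumulate normalized updates over all subband errors.
        update = np.zeros(num_taps)
        for i in range(num_bands):
            # i-th subband regressor: analysis filter applied at each tap delay.
            u_i = np.array([bank[i] @ x_buf[k:k + L] for k in range(num_taps)])
            e_i = bank[i] @ d_buf - u_i @ w          # i-th subband error
            update += g * u_i * e_i / (u_i @ (g * u_i) + delta)
        w += mu * update
    return w

# Toy usage: identify a sparse 32-tap target system from white-noise input.
rng = np.random.default_rng(0)
h = np.zeros(32); h[[3, 10, 25]] = [0.8, -0.5, 0.3]
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat = ptnsaf_identify(x, d, num_taps=32)
print("misalignment (dB):", 10 * np.log10(np.sum((h - w_hat) ** 2) / np.sum(h ** 2)))
```
In this baseline the two levers compared in the abstract appear as separate knobs: `num_bands` controls the subband decomposition that decorrelates the input and speeds convergence, while the proportionate gains `g` stand in for the sparsity-promoting term; the GPtNSAF instead couples both through a single weighted-norm-regularized criterion solved with a damped regularized Newton step.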
Related papers
- Minimum norm interpolation by perceptra: Explicit regularization and
implicit bias [0.3499042782396683]
We investigate how shallow ReLU networks interpolate between known regions.
We numerically study the implicit bias of common optimization algorithms towards known minimum norm interpolants.
arXiv Detail & Related papers (2023-11-10T15:55:47Z) - An adaptive ensemble filter for heavy-tailed distributions: tuning-free
inflation and localization [0.3749861135832072]
Heavy tails are a common feature of filtering distributions, arising from nonlinear dynamical and observation processes.
We propose an algorithm to estimate the prior-to-posterior update from samples of the joint forecast distribution of the states and observations.
We demonstrate the benefits of this new ensemble filter on challenging filtering problems.
arXiv Detail & Related papers (2023-10-12T21:56:14Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - Focus Your Attention (with Adaptive IIR Filters) [62.80628327613344]
We present a new layer in which dynamic (i.e., input-dependent) Infinite Impulse Response (IIR) filters of order two are used to process the input sequence.
Despite their relatively low order, the causal adaptive filters are shown to focus attention on the relevant sequence elements.
arXiv Detail & Related papers (2023-05-24T09:42:30Z) - Penalising the biases in norm regularisation enforces sparsity [28.86954341732928]
This work shows that the parameters' norm required to represent a function is given by the total variation of its second derivative, weighted by a $\sqrt{1+x^2}$ factor.
Notably, this weighting factor disappears when the norm of bias terms is not regularised.
arXiv Detail & Related papers (2023-03-02T15:33:18Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Study of General Robust Subband Adaptive Filtering [47.29178517675426]
We propose a general robust subband adaptive filtering (GR-SAF) scheme against impulsive noise.
By choosing different scaling factors, such as those derived from the M-estimate and maximum correntropy robust criteria, we can easily obtain different GR-SAF algorithms.
The proposed GR-SAF algorithm can be reduced to a variable-regularization robust normalized SAF algorithm, thus achieving a fast convergence rate and low steady-state error.
arXiv Detail & Related papers (2022-08-04T01:39:03Z) - Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning introduces structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method can achieve competitive results compared with many state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-16T13:58:58Z) - When Does Preconditioning Help or Hurt Generalization? [74.25170084614098]
We show how the implicit bias of first- and second-order methods affects the comparison of their generalization properties.
We discuss several approaches to manage the bias-variance tradeoff, and the potential benefit of interpolating between GD and NGD.
arXiv Detail & Related papers (2020-06-18T17:57:26Z) - Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.