Combinations of Adaptive Filters
- URL: http://arxiv.org/abs/2112.12245v1
- Date: Wed, 22 Dec 2021 22:21:43 GMT
- Title: Combinations of Adaptive Filters
- Authors: Jerónimo Arenas-García and Luis A. Azpicueta-Ruiz and Magno T.M.
Silva and Vitor H. Nascimento and Ali H. Sayed
- Abstract summary: The combination of adaptive filters exploits the divide-and-conquer principle.
In particular, the problem of combining the outputs of several learning algorithms has been studied in the computational learning field.
- Score: 38.0505909175152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adaptive filters are at the core of many signal processing applications,
ranging from acoustic noise suppression to echo cancellation, array beamforming,
and channel equalization, to more recent sensor network applications in
surveillance, target localization, and tracking. A trending approach in this
direction is to resort to in-network distributed processing, in which individual
nodes implement adaptation rules and diffuse their estimates to the network.
When the a priori knowledge about the filtering scenario is limited or
imprecise, selecting the most adequate filter structure and adjusting its
parameters becomes a challenging task, and erroneous choices can lead to
inadequate performance. To address this difficulty, one useful approach is to
rely on combinations of adaptive structures.
The combination of adaptive filters exploits to some extent the same divide
and conquer principle that has also been successfully exploited by the
machine-learning community (e.g., in bagging or boosting). In particular, the
problem of combining the outputs of several learning algorithms (mixture of
experts) has been studied in the computational learning field under a different
perspective: rather than studying the expected performance of the mixture,
deterministic bounds are derived that apply to individual sequences and,
therefore, reflect worst-case scenarios. These bounds require assumptions
different from the ones typically used in adaptive filtering, which is the
emphasis of this overview article. We review the key ideas and principles
behind these combination schemes, with emphasis on design rules. We also
illustrate their performance with a variety of examples.
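The combination idea sketched in the abstract can be illustrated with its most common instance: a convex combination of two LMS filters, one fast and one slow, whose outputs are mixed as y = λ·y_fast + (1 − λ)·y_slow, with λ adapted through a sigmoid to stay in (0, 1). The sketch below is a minimal, illustrative implementation under assumed conditions (a hypothetical FIR plant `w_true`, white input, and illustrative step sizes `mu_fast`, `mu_slow`, `mu_a`); it is not the exact scheme of the article, only the standard textbook form of the technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system-identification setup: unknown FIR plant, white input,
# small additive observation noise.
M = 8                       # filter length (illustrative)
N = 5000                    # number of samples
w_true = rng.standard_normal(M)
x = rng.standard_normal(N)
noise = 0.01 * rng.standard_normal(N)

def regressor(x, n, M):
    """Most recent M input samples (newest first), zero-padded at the start."""
    u = np.zeros(M)
    k = min(M, n + 1)
    u[:k] = x[n - k + 1:n + 1][::-1]
    return u

# Two LMS components: a fast (large step) and a slow (small step) filter.
mu_fast, mu_slow = 0.05, 0.005
w_fast = np.zeros(M)
w_slow = np.zeros(M)

# Mixing parameter lam = sigmoid(a); a is adapted by stochastic gradient
# descent on the squared combined error.
a, mu_a = 0.0, 100.0

err = np.zeros(N)
for n in range(N):
    u = regressor(x, n, M)
    d = w_true @ u + noise[n]          # desired (plant) output
    y_fast, y_slow = w_fast @ u, w_slow @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y_fast + (1.0 - lam) * y_slow
    e = d - y
    err[n] = e
    # Each component filter adapts independently with its own error.
    w_fast += mu_fast * (d - y_fast) * u
    w_slow += mu_slow * (d - y_slow) * u
    # Gradient step on the mixing parameter; clipping a keeps lam away
    # from saturating at exactly 0 or 1.
    a += mu_a * e * (y_fast - y_slow) * lam * (1.0 - lam)
    a = np.clip(a, -4.0, 4.0)

# Steady-state MSE should approach the noise floor once both filters
# have converged and the combination favors the better one.
print(float(np.mean(err[-1000:] ** 2)))
```

In steady state the mixture tracks whichever component performs better: the fast filter dominates during initial convergence, and the slow filter (with lower excess error) takes over afterwards, which is the divide-and-conquer behavior the abstract refers to.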
Related papers
- Combining Explicit and Implicit Regularization for Efficient Learning in
Deep Networks [3.04585143845864]
In deep linear networks, gradient descent implicitly regularizes toward low-rank solutions on matrix completion/factorization tasks.
We propose an explicit penalty to mirror this implicit bias which only takes effect with certain adaptive gradient generalizations.
This combination can enable a single-layer network to achieve low-rank approximations with error comparable to deep linear networks.
arXiv Detail & Related papers (2023-06-01T04:47:17Z) - Understanding the Covariance Structure of Convolutional Filters [86.0964031294896]
Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions with notable structure.
We first observe that such learned filters have highly-structured covariance matrices, and we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks.
arXiv Detail & Related papers (2022-10-07T15:59:13Z) - Direct design of biquad filter cascades with deep learning by sampling
random polynomials [5.1118282767275005]
In this work, we learn a direct mapping from the target magnitude response to the filter coefficient space with a neural network trained on millions of random filters.
We demonstrate our approach enables both fast and accurate estimation of filter coefficients given a desired response.
We compare our method against existing methods including modified Yule-Walker and gradient descent and show IIRNet is, on average, both faster and more accurate.
arXiv Detail & Related papers (2021-10-07T17:58:08Z) - Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior to a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
arXiv Detail & Related papers (2021-06-02T19:15:34Z) - Adaptive Sampling for Minimax Fair Classification [40.936345085421955]
We propose an adaptive sampling algorithm based on the principle of optimism, and derive theoretical bounds on its performance.
By deriving algorithm independent lower-bounds for a specific class of problems, we show that the performance achieved by our adaptive scheme cannot be improved in general.
arXiv Detail & Related papers (2021-03-01T04:58:27Z) - Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z) - Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z) - Rethinking Differentiable Search for Mixed-Precision Neural Networks [83.55785779504868]
Low-precision networks with weights and activations quantized to low bit-width are widely used to accelerate inference on edge devices.
Current solutions are uniform, using identical bit-width for all filters.
This fails to account for the different sensitivities of different filters and is suboptimal.
Mixed-precision networks address this problem, by tuning the bit-width to individual filter requirements.
arXiv Detail & Related papers (2020-04-13T07:02:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.