Direct design of biquad filter cascades with deep learning by sampling
random polynomials
- URL: http://arxiv.org/abs/2110.03691v1
- Date: Thu, 7 Oct 2021 17:58:08 GMT
- Authors: Joseph T. Colonel, Christian J. Steinmetz, Marcus Michelen and Joshua
D. Reiss
- Score: 5.1118282767275005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing infinite impulse response filters to match an arbitrary magnitude
response requires specialized techniques. Methods like modified Yule-Walker are
relatively efficient, but may not be sufficiently accurate in matching high
order responses. On the other hand, iterative optimization techniques often
enable superior performance, but come at the cost of longer run-times and are
sensitive to initial conditions, requiring manual tuning. In this work, we
address some of these limitations by learning a direct mapping from the target
magnitude response to the filter coefficient space with a neural network
trained on millions of random filters. We demonstrate that our approach enables
both fast and accurate estimation of filter coefficients given a desired response.
We investigate training with different families of random filters, and find
training with a variety of filter families enables better generalization when
estimating real-world filters, using head-related transfer functions and guitar
cabinets as case studies. We compare our method against existing methods,
including modified Yule-Walker and gradient descent, and show that IIRNet is,
on average, both faster and more accurate.
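The training data described above consists of randomly sampled filters and their magnitude responses. The sketch below illustrates the forward direction of that mapping (coefficients to magnitude response) that IIRNet learns to invert: it samples a random, guaranteed-stable biquad (second-order section) cascade from conjugate pole/zero pairs inside the unit circle and evaluates its magnitude response with SciPy. The sampling scheme here is an illustrative assumption, not the specific filter families studied in the paper.

```python
# Minimal sketch: sample a random stable biquad cascade and compute its
# magnitude response. Assumes NumPy and SciPy; the sampling distribution
# is hypothetical, chosen only to guarantee stability.
import numpy as np
from scipy.signal import sosfreqz

rng = np.random.default_rng(0)

def random_stable_sos(n_sections: int) -> np.ndarray:
    """Build an SOS array from random conjugate pole/zero pairs inside the unit circle."""
    sos = np.zeros((n_sections, 6))
    for i in range(n_sections):
        # Radii < 1 keep poles inside the unit circle (stable) and zeros
        # inside it as well (minimum phase).
        rz, rp = rng.uniform(0.0, 0.99, size=2)
        wz, wp = rng.uniform(0.0, np.pi, size=2)
        zero = rz * np.exp(1j * wz)
        pole = rp * np.exp(1j * wp)
        # (1 - c z^-1)(1 - conj(c) z^-1) = 1 - 2*Re(c) z^-1 + |c|^2 z^-2
        sos[i, :3] = [1.0, -2.0 * zero.real, abs(zero) ** 2]   # numerator b
        sos[i, 3:] = [1.0, -2.0 * pole.real, abs(pole) ** 2]   # denominator a
    return sos

sos = random_stable_sos(n_sections=4)       # cascade of 4 biquads = 8th order
w, h = sosfreqz(sos, worN=512)              # frequency grid and complex response
mag_db = 20.0 * np.log10(np.abs(h) + 1e-12) # target magnitude response in dB
```

A network trained on millions of such (magnitude response, coefficients) pairs can then approximate the inverse mapping directly, avoiding per-filter iterative optimization.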
Related papers
- Beyond Kalman Filters: Deep Learning-Based Filters for Improved Object
Tracking [3.5693768338940304]
We propose two innovative data-driven filtering methods for tracking-by-detection systems.
The first method employs a Bayesian filter with a trainable motion model to predict an object's future location.
The second method, an end-to-end trainable filter, goes a step further by learning to correct detector errors.
arXiv Detail & Related papers (2024-02-15T10:47:44Z)
- Implicit Maximum a Posteriori Filtering via Adaptive Optimization [4.767884267554628]
We frame the standard Bayesian filtering problem as optimization over a time-varying objective.
We show that our framework results in filters that are effective, robust, and scalable to high-dimensional systems.
arXiv Detail & Related papers (2023-11-17T15:30:44Z)
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Combinations of Adaptive Filters [38.0505909175152]
Combinations of adaptive filters exploit the divide-and-conquer principle.
In particular, the problem of combining the outputs of several learning algorithms has been studied in the computational learning field.
arXiv Detail & Related papers (2021-12-22T22:21:43Z)
- Learning Versatile Convolution Filters for Efficient Visual Recognition [125.34595948003745]
This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct theoretical analysis on network complexity and an efficient convolution scheme is introduced.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters are able to achieve comparable accuracy as that of original filters.
arXiv Detail & Related papers (2021-09-20T06:07:14Z)
- Fast Variational AutoEncoder with Inverted Multi-Index for Collaborative Filtering [59.349057602266]
Variational AutoEncoder (VAE) has been extended as a representative nonlinear method for collaborative filtering.
We propose to decompose the inner-product-based softmax probability based on the inverted multi-index.
FastVAE can outperform the state-of-the-art baselines in terms of both sampling quality and efficiency.
arXiv Detail & Related papers (2021-09-13T08:31:59Z)
- Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation incorporates a low-pass filtering prior and enables explicit structure transfer by estimating a single coefficient.
arXiv Detail & Related papers (2021-06-02T19:15:34Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.