A Hybrid SFANC-FxNLMS Algorithm for Active Noise Control based on Deep
Learning
- URL: http://arxiv.org/abs/2208.08082v1
- Date: Wed, 17 Aug 2022 05:42:39 GMT
- Authors: Zhengding Luo, Dongyuan Shi, and Woon-Seng Gan
- Abstract summary: The filtered-X normalized least-mean-square (FxNLMS) algorithm can obtain lower steady-state errors through adaptive optimization.
This paper proposes a hybrid SFANC-FxNLMS approach to overcome the adaptive algorithm's slow convergence.
Experiments show that the hybrid SFANC-FxNLMS algorithm can achieve a rapid response time, a low noise reduction error, and a high degree of robustness.
- Score: 17.38644275080562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The selective fixed-filter active noise control (SFANC) method, which selects the
best pre-trained control filter for each type of noise, can achieve a fast
response time. However, it may lead to large steady-state errors due to
inaccurate filter selection and its lack of adaptability. In comparison, the
filtered-X normalized least-mean-square (FxNLMS) algorithm can obtain lower
steady-state errors through adaptive optimization. Nonetheless, its slow
convergence has a detrimental effect on dynamic noise attenuation. Therefore,
this paper proposes a hybrid SFANC-FxNLMS approach to overcome the adaptive
algorithm's slow convergence and provide a better noise reduction level than
the SFANC method. A lightweight one-dimensional convolutional neural network
(1D CNN) is designed to automatically select the most suitable pre-trained
control filter for each frame of the primary noise. Meanwhile, the FxNLMS
algorithm continues to update the coefficients of the chosen pre-trained
control filter at the sampling rate. Owing to the effective combination of the
two algorithms, experimental results show that the hybrid SFANC-FxNLMS
algorithm can achieve a rapid response time, a low noise reduction error, and a
high degree of robustness.
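As a rough illustration of the hybrid's adaptive stage, the sketch below implements a plain FxNLMS update that starts from a control filter handed over by the SFANC stage (represented here simply as an initial coefficient vector). The filter length, step size, and the assumption that the secondary-path estimate equals the true secondary path are illustrative simplifications, not the paper's settings.

```python
import numpy as np

def fxnlms(x, d, s_hat, w_init, mu=0.5, eps=1e-8):
    """FxNLMS adaptation of a control filter, starting from a pre-trained
    filter selected by the SFANC stage (w_init).
    x: reference noise, d: disturbance at the error microphone,
    s_hat: secondary-path impulse response estimate (assumed exact here)."""
    L, M = len(w_init), len(s_hat)
    w = w_init.astype(float).copy()
    xbuf = np.zeros(L)   # reference buffer for the control filter
    fbuf = np.zeros(L)   # filtered-reference buffer for the update
    ybuf = np.zeros(M)   # anti-noise buffer fed through the secondary path
    xsec = np.zeros(M)   # reference buffer for filtering by s_hat
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        xsec = np.roll(xsec, 1); xsec[0] = x[n]
        y = w @ xbuf                        # anti-noise sample
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] - s_hat @ ybuf          # residual at the error microphone
        fbuf = np.roll(fbuf, 1)
        fbuf[0] = s_hat @ xsec              # reference filtered by s_hat
        # normalized update on the filtered reference
        w += mu * e[n] * fbuf / (fbuf @ fbuf + eps)
    return w, e
```

In the hybrid scheme, `w_init` would be the pre-trained filter chosen per noise frame, so the adaptation begins near a good operating point instead of from zero.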
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates gradient normalization without clipping (NSGDC) and its variance-reduction variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
arXiv Detail & Related papers (2024-10-21T22:40:42Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Deep Generative Fixed-filter Active Noise Control [17.42035489262148]
A generative fixed-filter active noise control (GFANC) method is proposed in this paper to overcome the limitations of fixed-filter approaches.
Based on deep learning and a perfect-reconstruction filter bank, the GFANC method requires only a small amount of prior data.
The efficacy of the GFANC method is demonstrated by numerical simulations on real-recorded noises.
arXiv Detail & Related papers (2023-03-10T08:47:22Z)
- Performance Evaluation of Selective Fixed-filter Active Noise Control based on Different Convolutional Neural Networks [19.540619271798455]
The selective fixed-filter active noise control (SFANC) method appears to be a viable candidate for widespread use.
Deep learning technologies can be used in SFANC methods to enable a more flexible selection of the most appropriate control filters.
This paper investigates the performance of SFANC based on different one-dimensional and two-dimensional convolutional neural networks.
arXiv Detail & Related papers (2022-08-17T05:47:38Z)
- Sparsity-Aware Robust Normalized Subband Adaptive Filtering algorithms based on Alternating Optimization [27.43948386608]
We propose a unified sparsity-aware robust normalized subband adaptive filtering (SA-RNSAF) algorithm for identification of sparse systems under impulsive noise.
The proposed SA-RNSAF algorithm generalizes different algorithms by defining the robust criterion and sparsity-aware penalty.
arXiv Detail & Related papers (2022-05-15T03:38:13Z)
- Filter-enhanced MLP is All You Need for Sequential Recommendation [89.0974365344997]
In online platforms, logged user behavior data inevitably contains noise.
We borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain.
We propose FMLP-Rec, an all-MLP model with learnable filters for the sequential recommendation task.
arXiv Detail & Related papers (2022-02-28T05:49:35Z)
- Adaptive Low-Pass Filtering using Sliding Window Gaussian Processes [71.23286211775084]
We propose an adaptive low-pass filter based on Gaussian process regression.
We show that the estimation error of the proposed method is uniformly bounded.
arXiv Detail & Related papers (2021-11-05T17:06:59Z)
- A Comparison of Various Classical Optimizers for a Variational Quantum Linear Solver [0.0]
Variational Hybrid Quantum Classical Algorithms (VHQCAs) are a class of quantum algorithms intended to run on noisy quantum devices.
These algorithms employ a parameterized quantum circuit (ansatz) and a quantum-classical feedback loop.
A classical device is used to optimize the parameters in order to minimize a cost function that can be computed far more efficiently on a quantum device.
arXiv Detail & Related papers (2021-06-16T10:40:00Z)
- Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement-learning-based ZO algorithm (ZO-RL) that learns the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that our ZO-RL algorithm can effectively reduce the variances of ZO gradient by learning a sampling policy, and converge faster than existing ZO algorithms in different scenarios.
arXiv Detail & Related papers (2021-04-09T14:50:59Z)
- Using Kalman Filter The Right Way: Noise Estimation Is Not Optimal [46.556605821252276]
We show that even a seemingly small violation of KF assumptions can significantly modify the effective noise.
We suggest a method to apply gradient-based optimization efficiently to the symmetric and positive-definite (SPD) parameters of KF.
arXiv Detail & Related papers (2021-04-06T08:59:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.