Ultra Low Complexity Deep Learning Based Noise Suppression
- URL: http://arxiv.org/abs/2312.08132v1
- Date: Wed, 13 Dec 2023 13:34:15 GMT
- Title: Ultra Low Complexity Deep Learning Based Noise Suppression
- Authors: Shrishti Saha Shetu, Soumitro Chakrabarty, Oliver Thiergart, Edwin
Mabande
- Abstract summary: This paper introduces an innovative method for reducing the computational complexity of deep neural networks in real-time speech enhancement on resource-constrained devices.
Our algorithm exhibits 3 to 4 times less computational complexity and memory usage than prior state-of-the-art approaches.
- Score: 3.4373727078460665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces an innovative method for reducing the computational
complexity of deep neural networks in real-time speech enhancement on
resource-constrained devices. The proposed approach utilizes a two-stage
processing framework, employing channelwise feature reorientation to reduce the
computational load of convolutional operations. By combining this with a
modified power law compression technique for enhanced perceptual quality, this
approach achieves noise suppression performance comparable to state-of-the-art
methods with significantly less computational requirements. Notably, our
algorithm exhibits 3 to 4 times less computational complexity and memory usage
than prior state-of-the-art approaches.
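The modified power law compression is not detailed in this summary; as a rough illustration only, standard power-law compression of a complex STFT raises the magnitude to an exponent while preserving the phase (the exponent `alpha` below is a hypothetical choice, not the paper's value):

```python
import numpy as np

def power_law_compress(stft, alpha=0.3):
    """Compress the magnitude of a complex STFT with a power law,
    preserving the phase. alpha is illustrative, not the paper's value."""
    mag = np.abs(stft)
    phase = np.angle(stft)
    return (mag ** alpha) * np.exp(1j * phase)

def power_law_expand(compressed, alpha=0.3):
    """Invert the compression after enhancement."""
    mag = np.abs(compressed)
    phase = np.angle(compressed)
    return (mag ** (1.0 / alpha)) * np.exp(1j * phase)
```

Compressing the dynamic range this way emphasizes low-energy spectral components, which is commonly linked to better perceptual quality in speech enhancement.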
Related papers
- Deep Convolutional Neural Networks Meet Variational Shape Compactness Priors for Image Segmentation [7.314877483509877]
Shape compactness is a key geometrical property to describe interesting regions in many image segmentation tasks.
We propose two novel algorithms to solve the introduced image segmentation problem that incorporates a shape-compactness prior.
The proposed algorithms significantly improve IoU by 20% training on a highly noisy image dataset.
arXiv Detail & Related papers (2024-05-23T11:05:35Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined via the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Reducing Computational Complexity of Neural Networks in Optical Channel Equalization: From Concepts to Implementation [1.6987798749419218]
We show that it is possible to design an NN-based equalizer that is simpler to implement and has better performance than the conventional digital back-propagation (DBP) equalizer with only one step per span.
An equalizer based on NN can also achieve superior performance while still maintaining the same degree of complexity as the full electronic chromatic dispersion compensation block.
arXiv Detail & Related papers (2022-08-26T21:00:05Z)
- On Effects of Compression with Hyperdimensional Computing in Distributed Randomized Neural Networks [6.25118865553438]
We propose a model for distributed classification based on randomized neural networks and hyperdimensional computing.
In this work, we propose a more flexible approach to compression and compare it to conventional compression algorithms, dimensionality reduction, and quantization techniques.
arXiv Detail & Related papers (2021-06-17T22:02:40Z)
- A Novel Fast 3D Single Image Super-Resolution Algorithm [8.922669577341225]
This paper introduces a novel computationally efficient method of solving the 3D single image super-resolution (SR) problem.
The main contribution lies in the original way of simultaneously handling the associated decimation and blurring operators.
The proposed decomposition technique of the 3D decimation operator allows a straightforward implementation for Tikhonov regularization.
arXiv Detail & Related papers (2020-10-29T11:23:28Z)
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by PowerSGD for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit.
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
- ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks [63.91384986073851]
We propose the autoencoder-based low-rank filter-sharing technique (ALF).
ALF shows a reduction of 70% in network parameters, 61% in operations and 41% in execution time, with minimal loss in accuracy.
arXiv Detail & Related papers (2020-07-27T09:01:22Z)
- WrapNet: Neural Net Inference with Ultra-Low-Resolution Arithmetic [57.07483440807549]
We propose a method that adapts neural networks to use low-resolution (8-bit) additions in the accumulators, achieving classification accuracy comparable to their 32-bit counterparts.
We demonstrate the efficacy of our approach on both software and hardware platforms.
arXiv Detail & Related papers (2020-07-26T23:18:38Z)
- Noise-Sampling Cross Entropy Loss: Improving Disparity Regression Via Cost Volume Aware Regularizer [38.86850327892113]
We propose a noise-sampling cross entropy loss function to regularize the cost volume produced by deep neural networks to be unimodal and coherent.
Experiments validate that the proposed noise-sampling cross entropy loss can not only help neural networks learn more informative cost volume, but also lead to better stereo matching performance.
arXiv Detail & Related papers (2020-05-18T15:29:55Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that iteratively splits the verification problem and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.