Ultra Dual-Path Compression For Joint Echo Cancellation And Noise
Suppression
- URL: http://arxiv.org/abs/2308.11053v2
- Date: Tue, 10 Oct 2023 06:46:21 GMT
- Title: Ultra Dual-Path Compression For Joint Echo Cancellation And Noise
Suppression
- Authors: Hangting Chen, Jianwei Yu, Yi Luo, Rongzhi Gu, Weihua Li, Zhuocheng
Lu, Chao Weng
- Abstract summary: Under fixed compression ratios, dual-path compression combining the time and frequency methods gives further performance improvement.
Proposed models show competitive performance compared with fast FullSubNet and DeepFilterNet.
- Score: 38.09558772881095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Echo cancellation and noise reduction are essential for full-duplex
communication, yet most existing neural networks have high computational costs
and are inflexible in tuning model complexity. In this paper, we introduce
time-frequency dual-path compression to achieve a wide range of compression
ratios on computational cost. Specifically, for frequency compression,
trainable filters are used to replace manually designed filters for dimension
reduction. For time compression, using frame-skipping prediction alone causes
large performance degradation, which can be alleviated by a post-processing
network with full sequence modeling. We have found that under fixed compression
ratios, dual-path compression combining both the time and frequency methods
will give further performance improvement, covering compression ratios from 4x
to 32x with little model size change. Moreover, the proposed models show
competitive performance compared with fast FullSubNet and DeepFilterNet.
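To make the two compression paths in the abstract concrete, here is a minimal PyTorch sketch. The module names, dimensions, GRU backbones, and the nearest-frame upsampling are illustrative assumptions; the paper's actual architecture is not reproduced here.

```python
# Minimal sketch of time-frequency dual-path compression (illustrative only).
import torch
import torch.nn as nn

class DualPathCompression(nn.Module):
    def __init__(self, n_freq=257, n_bands=64, time_stride=4, hidden=128):
        super().__init__()
        # Frequency compression: a trainable filterbank (linear projection of
        # frequency bins into fewer bands) replaces a hand-designed filterbank.
        self.freq_compress = nn.Linear(n_freq, n_bands)
        self.freq_expand = nn.Linear(n_bands, n_freq)
        # Time compression: process only every `time_stride`-th frame.
        self.time_stride = time_stride
        self.backbone = nn.GRU(n_bands, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, n_bands)
        # Post-processing network with full-sequence modeling to recover the
        # detail lost by frame skipping.
        self.post = nn.GRU(n_freq, n_freq, batch_first=True)

    def forward(self, spec_mag):                        # (batch, frames, n_freq)
        bands = self.freq_compress(spec_mag)            # frequency path: 257 -> 64
        skipped = bands[:, ::self.time_stride]          # time path: keep 1/stride frames
        feat, _ = self.backbone(skipped)
        feat = self.proj(feat)
        # Upsample back to the original frame rate (nearest-frame repeat here).
        feat = feat.repeat_interleave(self.time_stride, dim=1)[:, :bands.shape[1]]
        mask = torch.sigmoid(self.freq_expand(feat))    # back to full frequency
        enhanced = mask * spec_mag
        refined, _ = self.post(enhanced)                # full-rate post-processing
        return refined

x = torch.randn(1, 100, 257)           # e.g. magnitude spectrogram frames
print(DualPathCompression()(x).shape)  # torch.Size([1, 100, 257])
```

The point of the sketch is that the expensive backbone runs on band-compressed, frame-skipped features, while only the lightweight mask expansion and post-processing operate at the full time-frequency resolution.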
Related papers
- ZipNN: Lossless Compression for AI Models [10.111136691015554]
We present ZipNN, a lossless compression method tailored to neural networks.
On popular models (e.g. Llama 3) ZipNN shows space savings that are over 17% better than vanilla compression.
We estimate that these methods could save over an ExaByte per month of network traffic downloaded from a large model hub like Hugging Face.
arXiv Detail & Related papers (2024-11-07T23:28:23Z) - Fast Feedforward 3D Gaussian Splatting Compression [55.149325473447384]
FCGS is an optimization-free model that can compress 3DGS representations rapidly in a single feed-forward pass.
FCGS achieves a compression ratio of over 20X while maintaining fidelity, surpassing most per-scene SOTA optimization-based methods.
arXiv Detail & Related papers (2024-10-10T15:13:08Z) - Lossy and Lossless (L$^2$) Post-training Model Size Compression [12.926354646945397]
We propose a post-training model size compression method that combines lossy and lossless compression in a unified way.
Our method can achieve a stable $10\times$ compression ratio without sacrificing accuracy and a $20\times$ compression ratio with minor accuracy loss in a short time.
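As a rough illustration of the lossy-plus-lossless idea (not the paper's unified scheme), one can quantize weights and then entropy-code the result; the `compress_weights` helper below is hypothetical.

```python
# Toy two-stage pipeline: lossy uniform quantization, then lossless coding.
import zlib
import numpy as np

def compress_weights(w: np.ndarray, n_bits: int = 4) -> bytes:
    scale = (w.max() - w.min()) / (2 ** n_bits - 1) or 1.0
    q = np.round((w - w.min()) / scale).astype(np.uint8)  # lossy quantization
    # (a real codec would also store w.min() and scale for reconstruction)
    return zlib.compress(q.tobytes())                      # lossless coding

w = np.random.randn(1024, 1024).astype(np.float32)
blob = compress_weights(w)
print(f"ratio: {w.nbytes / len(blob):.1f}x")
```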
arXiv Detail & Related papers (2023-08-08T14:10:16Z) - GraVAC: Adaptive Compression for Communication-Efficient Distributed DL
Training [0.0]
Distributed data-parallel (DDP) training improves overall application throughput as multiple devices train on a subset of data and aggregate updates to produce a globally shared model.
GraVAC is a framework to dynamically adjust compression factor throughout training by evaluating model progress and assessing information loss associated with compression.
As opposed to using a static compression factor, GraVAC reduces end-to-end training time for ResNet101, VGG16 and LSTM by 4.32x, 1.95x and 6.67x respectively.
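A hedged sketch of the adaptive-compression control loop: pick a top-k gradient compression ratio per step based on how much gradient energy it retains. The retention criterion, candidate ratios, and the `choose_ratio` helper are illustrative assumptions, not GraVAC's actual metrics.

```python
# Illustrative adaptive top-k gradient compression (not GraVAC's algorithm).
import numpy as np

def topk_compress(grad: np.ndarray, ratio: float):
    k = max(1, int(grad.size * ratio))
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # indices of largest entries
    return idx, grad[idx]

def choose_ratio(grad: np.ndarray, target_retention=0.95,
                 candidates=(0.01, 0.05, 0.1, 0.25)):
    total = np.sum(grad ** 2) + 1e-12
    for r in candidates:                           # try the cheapest ratio first
        _, vals = topk_compress(grad, r)
        if np.sum(vals ** 2) / total >= target_retention:
            return r
    return candidates[-1]

g = np.random.randn(100_000)
print("chosen compression ratio:", choose_ratio(g))
```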
arXiv Detail & Related papers (2023-05-20T14:25:17Z) - Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in engineering applications such as radar/sonar/ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements than the full received time-domain signal.
arXiv Detail & Related papers (2022-09-28T15:16:58Z) - Deep Lossy Plus Residual Coding for Lossless and Near-lossless Image
Compression [85.93207826513192]
We propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression.
We solve the joint lossy and residual compression problem within the framework of VAEs.
In the near-lossless mode, we quantize the original residuals to satisfy a given $\ell_\infty$ error bound.
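One standard quantizer that satisfies such a bound rounds residuals with step $2\tau+1$; whether DLPR uses exactly this form is an assumption, but the sketch demonstrates the guarantee the summary refers to.

```python
# Near-lossless residual quantization with a hard error bound tau.
import numpy as np

def near_lossless_quantize(residual: np.ndarray, tau: int) -> np.ndarray:
    step = 2 * tau + 1
    return (np.sign(residual) * ((np.abs(residual) + tau) // step)).astype(np.int32)

def dequantize(q: np.ndarray, tau: int) -> np.ndarray:
    return q * (2 * tau + 1)

residual = np.random.randint(-50, 51, size=10_000)
for tau in (0, 1, 2, 4):
    err = np.abs(residual - dequantize(near_lossless_quantize(residual, tau), tau))
    print(f"tau={tau}: max abs error = {err.max()}")   # always <= tau
```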
arXiv Detail & Related papers (2022-09-11T12:11:56Z) - Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly performs channel pruning and tensor decomposition to compress CNN models.
We achieve 52.9% FLOPs reduction by removing 48.4% parameters on ResNet-50 with only a Top-1 accuracy drop of 0.56% on ImageNet 2012.
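For illustration, the two components can be shown separately on a toy weight matrix: norm-based channel pruning followed by a truncated-SVD low-rank factorization. This is not the paper's joint optimization, only the ingredients it names.

```python
# Channel pruning + low-rank decomposition on a toy weight matrix.
import numpy as np

w = np.random.randn(256, 512)                    # (out_channels, in_features)

# Channel pruning: keep the output channels with the largest L1 norm.
keep = np.argsort(np.abs(w).sum(axis=1))[-128:]  # keep 128 of 256 channels
w_pruned = w[keep]

# Low-rank decomposition: approximate the pruned weight with rank-64 factors.
u, s, vt = np.linalg.svd(w_pruned, full_matrices=False)
rank = 64
a, b = u[:, :rank] * s[:rank], vt[:rank]         # w_pruned ~= a @ b

orig, compressed = w.size, a.size + b.size
print(f"parameters: {orig} -> {compressed} ({orig / compressed:.1f}x fewer)")
```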
arXiv Detail & Related papers (2021-05-24T12:07:38Z) - Conditional Automated Channel Pruning for Deep Neural Networks [22.709646484723876]
We develop a conditional model that takes an arbitrary compression rate as input and outputs the corresponding compressed model.
In the experiments, the resultant models with different compression rates consistently outperform the models compressed by existing methods.
arXiv Detail & Related papers (2020-09-21T09:55:48Z) - Structured Sparsification with Joint Optimization of Group Convolution
and Channel Shuffle [117.95823660228537]
We propose a novel structured sparsification method for efficient network compression.
The proposed method automatically induces structured sparsity on the convolutional weights.
We also address the problem of inter-group communication with a learnable channel shuffle mechanism.
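For context, the fixed (non-learnable) channel shuffle used after group convolutions looks like the sketch below; the paper replaces this fixed permutation with a learnable one, which is not shown here.

```python
# ShuffleNet-style fixed channel shuffle for inter-group communication.
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    b, c, h, w = x.shape
    # (b, groups, c//groups, h, w) -> swap group and channel dims -> flatten
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

x = torch.arange(8).float().view(1, 8, 1, 1)   # channels 0..7
print(channel_shuffle(x, groups=2).flatten())  # tensor([0., 4., 1., 5., 2., 6., 3., 7.])
```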
arXiv Detail & Related papers (2020-02-19T12:03:10Z)