Unfolding Neural Networks for Compressive Multichannel Blind
Deconvolution
- URL: http://arxiv.org/abs/2010.11391v2
- Date: Fri, 12 Feb 2021 01:12:39 GMT
- Title: Unfolding Neural Networks for Compressive Multichannel Blind
Deconvolution
- Authors: Bahareh Tolooshams, Satish Mulleti, Demba Ba, and Yonina C. Eldar
- Abstract summary: We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal and a sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
- Score: 71.29848468762789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a learned-structured unfolding neural network for the problem of
compressive sparse multichannel blind-deconvolution. In this problem, each
channel's measurements are given as the convolution of a common source signal
and a sparse filter. Unlike prior works, where the compression is achieved either
through random projections or by applying a fixed structured compression
matrix, this paper proposes to learn the compression matrix from data. Given
the full measurements, the proposed network is trained in an unsupervised
fashion to learn the source and estimate sparse filters. Then, given the
estimated source, we learn a structured compression operator while optimizing
for signal reconstruction and sparse filter recovery. The efficient structure
of the compression allows its practical hardware implementation. The proposed
neural network is an autoencoder constructed based on an unfolding approach:
upon training, the encoder maps the compressed measurements into an estimate of
sparse filters using the compression operator and the source, and the linear
convolutional decoder reconstructs the full measurements. We demonstrate that
our method is superior to classical structured compressive sparse multichannel
blind-deconvolution methods in terms of accuracy and speed of sparse filter
recovery.
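The encoder-decoder structure described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the paper learns the compression matrix and encoder parameters from data, whereas here a fixed Gaussian compression matrix and plain unrolled ISTA iterations (soft-thresholding steps, the standard building block of unfolding networks) stand in for the trained components. All names (`unfolded_encoder`, `linear_decoder`, the sizes `n`, `m`, `p`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def conv_matrix(s, m):
    """Toeplitz matrix C of shape (n+m-1, m) so that C @ h == np.convolve(s, h)."""
    n = len(s)
    C = np.zeros((n + m - 1, m))
    for j in range(m):
        C[j:j + n, j] = s
    return C

def unfolded_encoder(y, Phi, s, m, lam=0.05, T=200):
    """Encoder: map compressed measurements y = Phi @ (s * h) to a sparse
    filter estimate via T unrolled ISTA iterations. In the paper, Phi and
    the per-iteration parameters are learned; here they are fixed."""
    A = Phi @ conv_matrix(s, m)      # effective sensing operator
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
    h = np.zeros(m)
    for _ in range(T):
        h = soft_threshold(h + (A.T @ (y - A @ h)) / L, lam / L)
    return h

def linear_decoder(h, s):
    """Linear convolutional decoder: reconstruct the full measurement s * h."""
    return np.convolve(s, h)
```

A typical round trip generates a sparse filter `h`, compresses the channel measurement `np.convolve(s, h)` with `Phi`, runs the encoder to estimate `h`, and decodes back to the full measurement; each ISTA step monotonically decreases the lasso objective from the zero initialization.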
Related papers
- Compression of Structured Data with Autoencoders: Provable Benefit of
Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z) - Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar/sonar/ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements relative to the full received signal in time.
arXiv Detail & Related papers (2022-09-28T15:16:58Z) - Lossy Compression with Gaussian Diffusion [28.930398810600504]
We describe a novel lossy compression approach called DiffC which is based on unconditional diffusion generative models.
We implement a proof of concept and find that it works surprisingly well despite the lack of an encoder transform.
We show that a flow-based reconstruction achieves a 3 dB gain over ancestral sampling at high bitrates.
arXiv Detail & Related papers (2022-06-17T16:46:31Z) - COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z) - Low-rank Tensor Decomposition for Compression of Convolutional Neural
Networks Using Funnel Regularization [1.8579693774597708]
We propose a model reduction method to compress the pre-trained networks using low-rank tensor decomposition.
A new regularization method, called funnel function, is proposed to suppress the unimportant factors during the compression.
For ResNet18 with ImageNet2012, our reduced model achieves more than a two-times speedup in terms of GMACs with merely a 0.7% Top-1 accuracy drop.
arXiv Detail & Related papers (2021-12-07T13:41:51Z) - Compressing Neural Networks: Towards Determining the Optimal Layer-wise
Decomposition [62.41259783906452]
We present a novel global compression framework for deep neural networks.
It automatically analyzes each layer to identify the optimal per-layer compression ratio.
Our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks.
arXiv Detail & Related papers (2021-07-23T20:01:30Z) - Deep Neural Networks and End-to-End Learning for Audio Compression [2.084078990567849]
We present an end-to-end deep learning approach that combines recurrent neural networks (RNNs) within the training strategy of variational autoencoders (VAEs) with a binary representation of the latent space.
This is the first end-to-end learning for a single audio compression model with RNNs, and our model achieves a Signal to Distortion Ratio (SDR) of 20.54.
arXiv Detail & Related papers (2021-05-25T05:36:30Z) - DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep
Learning [79.89085533866071]
This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors.
DeepReduce decomposes tensors in two sets, values and indices, and allows both independent and combined compression of these sets.
Our experiments with large real models demonstrate that DeepReduce transmits fewer data and imposes lower computational overhead than existing methods.
arXiv Detail & Related papers (2021-02-05T11:31:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.