Unrolled Compressed Blind-Deconvolution
- URL: http://arxiv.org/abs/2209.14165v2
- Date: Thu, 18 May 2023 14:12:13 GMT
- Title: Unrolled Compressed Blind-Deconvolution
- Authors: Bahareh Tolooshams, Satish Mulleti, Demba Ba, Yonina C. Eldar
- Abstract summary: Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar/sonar/ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements than the full received signal in time.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of sparse multichannel blind deconvolution (S-MBD) arises
frequently in many engineering applications such as radar/sonar/ultrasound
imaging. To reduce its computational and implementation cost, we propose a
compression method that enables blind recovery from far fewer measurements
than the full received signal in time. The proposed compression measures the
signal through a filter followed by subsampling, allowing for a significant
reduction in implementation cost. We derive theoretical guarantees for the
identifiability and recovery of a sparse filter from compressed measurements.
Our results allow for the design of a wide class of compression filters. We
then propose a data-driven unrolled learning framework to learn the
compression filter and solve the S-MBD problem. The encoder is a recurrent
inference network that maps compressed measurements into an estimate of the
sparse filters. We demonstrate that our unrolled learning method is more
robust to choices of source shapes and has better recovery performance than
optimization-based methods. Finally, in data-limited applications (few-shot
learning), we highlight the superior generalization capability of unrolled
learning compared to conventional deep learning.
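As a minimal sketch of the two ingredients described in the abstract — compression by a filter followed by subsampling, and a recurrent inference network unrolled from an iterative sparse solver — the following uses an illustrative random filter and plain ISTA (soft-thresholding) iterations. The filter taps, subsampling factor, sparsity level, and step size are assumptions for illustration, not the paper's learned values or exact architecture.

```python
import numpy as np

def filter_subsample_matrix(h, n, q):
    """Matrix form of the compression: convolve with filter h ('same' mode),
    then keep every q-th output sample."""
    L = len(h)
    off = (L - 1) // 2
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = i - j + off
            if 0 <= k < L:
                C[i, j] = h[k]
    return C[::q]                                 # subsampling drops rows

def soft_threshold(v, t):
    """Proximal operator of the l1 norm; promotes sparse estimates."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_recovery(z, Phi, n_layers=100, lam=0.05):
    """Recurrent inference as unrolled ISTA: each layer is one gradient step
    on ||Phi x - z||^2 followed by soft-thresholding."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - step * Phi.T @ (Phi @ x - z), lam * step)
    return x

rng = np.random.default_rng(0)
n, q = 128, 4
h = rng.standard_normal(16)                   # illustrative compression filter
Phi = filter_subsample_matrix(h, n, q)        # 32 x 128 measurement operator
x_true = np.zeros(n)
x_true[[5, 40, 90]] = [1.0, -0.8, 0.6]        # sparse filter to recover
z = Phi @ x_true                              # compressed measurements
x_hat = unrolled_recovery(z, Phi)             # estimate of the sparse filter
```

In the paper, the compression filter and the encoder's parameters are learned end to end from data; here a fixed random filter and untrained ISTA iterations stand in for the trained network.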
Related papers
- Compression of Structured Data with Autoencoders: Provable Benefit of
Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- Optimal Compression of Unit Norm Vectors in the High Distortion Regime [30.6205706348233]
We investigate methods for compressing a unit-norm vector into the minimum number of bits while still allowing some acceptable level of distortion in recovery.
Our study considers both biased and unbiased compression methods and determines the optimal compression rates.
While the results are a mix of new and known, they are compiled in this paper for completeness.
arXiv Detail & Related papers (2023-07-16T04:23:57Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Modeling Image Quantization Tradeoffs for Optimal Compression [0.0]
Lossy compression algorithms navigate this tradeoff by quantizing high-frequency data to increase compression rates.
We propose a new method of optimizing quantization tables using Deep Learning and a minimax loss function.
arXiv Detail & Related papers (2021-12-14T07:35:22Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Wideband and Entropy-Aware Deep Soft Bit Quantization [1.7259824817932292]
We introduce a novel deep learning solution for soft bit quantization across wideband channels.
Our method is trained end-to-end with quantization- and entropy-aware augmentations to the loss function.
Our method achieves a compression gain of up to $10\%$ in the high-SNR regime versus previous state-of-the-art methods.
arXiv Detail & Related papers (2021-10-18T18:00:05Z)
- Analyzing and Mitigating JPEG Compression Defects in Deep Learning [69.04777875711646]
We present a unified study of the effects of JPEG compression on a range of common tasks and datasets.
We show that there is a significant penalty on common performance metrics for high compression.
arXiv Detail & Related papers (2020-11-17T20:32:57Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal and a sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
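The multichannel model in the last entry — each channel observing the convolution of a common source with its own sparse filter — can be sketched as follows. The channel count, signal length, and per-channel sparsity are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n = 3, 64
s = rng.standard_normal(n)                  # common source signal, shared by all channels
filters = np.zeros((n_channels, n))
for i in range(n_channels):
    support = rng.choice(n, size=3, replace=False)  # sparse: 3 active taps per channel
    filters[i, support] = rng.standard_normal(3)
# each channel's measurement is the convolution of the source with its sparse filter
measurements = np.array([np.convolve(s, x) for x in filters])
print(measurements.shape)  # (3, 127): full convolution has length n + n - 1
```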
This list is automatically generated from the titles and abstracts of the papers in this site.