ADMM-DAD net: a deep unfolding network for analysis compressed sensing
- URL: http://arxiv.org/abs/2110.06986v1
- Date: Wed, 13 Oct 2021 18:56:59 GMT
- Title: ADMM-DAD net: a deep unfolding network for analysis compressed sensing
- Authors: Vasiliki Kouni, Georgios Paraskevopoulos, Holger Rauhut, George C.
Alexandropoulos
- Abstract summary: We propose a new deep unfolding neural network based on the ADMM algorithm for analysis Compressed Sensing.
The proposed network jointly learns a redundant analysis operator for sparsification and reconstructs the signal of interest.
- Score: 20.88999913266683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a new deep unfolding neural network based on the
ADMM algorithm for analysis Compressed Sensing. The proposed network jointly
learns a redundant analysis operator for sparsification and reconstructs the
signal of interest. We compare our proposed network with a state-of-the-art
unfolded ISTA decoder, which likewise learns a sparsifier (an orthogonal one).
Moreover, we consider not only image but also speech datasets as test examples.
Computational experiments demonstrate that our proposed network outperforms the
state-of-the-art deep unfolding networks, consistently for both real-world
image and speech datasets.
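As a rough illustration of the kind of iteration such a network unrolls, the sketch below implements a plain (non-learned) ADMM decoder for the analysis l1-regularized least-squares model. In ADMM-DAD each iteration would become a trainable layer and the redundant analysis operator `Psi` would be learned; here `Psi`, `rho`, and `lam` are fixed, hypothetical choices, not the authors' exact scheme.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of the l1 norm: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_analysis_cs(y, A, Psi, rho=1.0, lam=0.1, n_iters=10):
    """Classical ADMM for min_x 0.5*||A x - y||^2 + lam*||Psi x||_1,
    where A is the measurement matrix and Psi a (possibly redundant)
    analysis operator. Splitting z = Psi x with scaled dual variable u."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(Psi.shape[0])
    u = np.zeros(Psi.shape[0])
    # x-update solves a fixed linear system; precompute its matrix
    M = A.T @ A + rho * (Psi.T @ Psi)
    rhs_data = A.T @ y
    for _ in range(n_iters):
        x = np.linalg.solve(M, rhs_data + rho * Psi.T @ (z - u))
        z = soft_threshold(Psi @ x + u, lam / rho)
        u = u + Psi @ x - z
    return x
```

Unrolling means fixing `n_iters` to a small number of "layers" and backpropagating through them to train `Psi` (and possibly `rho`, `lam`) end-to-end from data.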
Related papers
- Image segmentation with traveling waves in an exactly solvable recurrent neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z)
- Self-supervised Neural Networks for Spectral Snapshot Compressive Imaging [15.616674529295366]
We consider using untrained neural networks to solve the reconstruction problem of snapshot compressive imaging (SCI).
In this paper, inspired by the untrained neural networks such as deep image priors (DIP) and deep decoders, we develop a framework by integrating DIP into the plug-and-play regime, leading to a self-supervised network for spectral SCI reconstruction.
arXiv Detail & Related papers (2021-08-28T14:17:38Z)
- Discovering "Semantics" in Super-Resolution Networks [54.45509260681529]
Super-resolution (SR) is a fundamental and representative task in the low-level vision area.
It is generally thought that the features extracted from the SR network have no specific semantic information.
Can we find any "semantics" in SR networks?
arXiv Detail & Related papers (2021-08-01T09:12:44Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we prove that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Learning low-rank latent mesoscale structures in networks [1.1470070927586016]
We present a new approach for describing low-rank mesoscale structures in networks.
We evaluate on several synthetic network models and on empirical friendship, collaboration, and protein--protein interaction (PPI) networks.
We show how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
arXiv Detail & Related papers (2021-02-13T18:54:49Z)
- NetReAct: Interactive Learning for Network Summarization [60.18513812680714]
We present NetReAct, a novel interactive network summarization algorithm which supports the visualization of networks induced by text corpora to perform sensemaking.
We show how NetReAct is successful in generating high-quality summaries and visualizations that reveal hidden patterns better than other non-trivial baselines.
arXiv Detail & Related papers (2020-12-22T03:56:26Z)
- SNoRe: Scalable Unsupervised Learning of Symbolic Node Representations [0.0]
The proposed SNoRe algorithm is capable of learning symbolic, human-understandable representations of individual network nodes.
SNoRe's interpretable features are suitable for direct explanation of individual predictions.
The vectorized implementation of SNoRe scales to large networks, making it suitable for contemporary network learning and analysis tasks.
arXiv Detail & Related papers (2020-09-08T08:13:21Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Complexity Analysis of an Edge Preserving CNN SAR Despeckling Algorithm [1.933681537640272]
We study the effect of convolutional neural network complexity on SAR despeckling.
Deeper networks generalize better on both simulated and real images.
arXiv Detail & Related papers (2020-04-17T17:02:01Z)
- Geometric Approaches to Increase the Expressivity of Deep Neural Networks for MR Reconstruction [41.62169556793355]
Deep learning approaches have been extensively investigated to reconstruct images from accelerated magnetic resonance imaging (MRI) acquisitions.
It is not clear how to choose a suitable network architecture to balance the trade-off between network complexity and performance.
This paper proposes a systematic geometric approach using bootstrapping and subnetwork aggregation to increase the expressivity of the underlying neural network.
arXiv Detail & Related papers (2020-03-17T14:18:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.