DECONET: an Unfolding Network for Analysis-based Compressed Sensing with
Generalization Error Bounds
- URL: http://arxiv.org/abs/2205.07050v6
- Date: Wed, 26 Apr 2023 10:22:07 GMT
- Title: DECONET: an Unfolding Network for Analysis-based Compressed Sensing with
Generalization Error Bounds
- Authors: Vicky Kouni, Yannis Panagakis
- Abstract summary: We present a new deep unfolding network for analysis-sparsity-based Compressed Sensing.
The proposed network, coined Decoding Network (DECONET), jointly learns a decoder that reconstructs vectors from their incomplete, noisy measurements and a redundant sparsifying analysis operator shared across its layers.
- Score: 27.53377180094267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new deep unfolding network for analysis-sparsity-based
Compressed Sensing. The proposed network, coined Decoding Network (DECONET),
jointly learns a decoder that reconstructs vectors from their incomplete, noisy
measurements and a redundant sparsifying analysis operator, which is shared
across the layers of DECONET. Moreover, we formulate the hypothesis class of
DECONET and estimate its associated Rademacher complexity. Then, we use this
estimate to deliver meaningful upper bounds for the generalization error of
DECONET. Finally, the validity of our theoretical results is assessed and
comparisons to state-of-the-art unfolding networks are made on both synthetic
and real-world datasets. Experimental results indicate that our proposed
network consistently outperforms the baselines across all datasets, and that its
behaviour complies with our theoretical findings.
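To make the architecture concrete, below is a minimal PyTorch-style sketch of an unfolded decoder in the spirit described above: every layer applies a data-fidelity step followed by a sparsity-promoting step in the domain of a redundant analysis operator W, which is shared and learned across all layers. The class name, layer count, step-size/threshold parametrization, soft-thresholding update and least-squares initialization are illustrative assumptions, not DECONET's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldedAnalysisDecoder(nn.Module):
    """Sketch of an unfolded decoder for analysis-sparsity-based CS.

    Reconstructs x from noisy measurements y = A x + e, using a redundant
    analysis operator W (shape: redundancy*n x n) shared across all layers.
    """

    def __init__(self, m, n, redundancy=2, num_layers=10):
        super().__init__()
        self.num_layers = num_layers
        # Redundant sparsifying analysis operator, learned and shared.
        self.W = nn.Parameter(torch.randn(redundancy * n, n) / n ** 0.5)
        # Per-layer step sizes and thresholds (illustrative choices).
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))
        self.theta = nn.Parameter(torch.full((num_layers,), 0.01))

    def forward(self, y, A):
        # y: (batch, m) measurements, A: (m, n) sensing matrix.
        x = y @ torch.linalg.pinv(A).T  # simple least-squares initialization
        for k in range(self.num_layers):
            # Gradient step on the data-fidelity term 0.5 * ||A x - y||^2.
            x = x - self.step[k] * ((x @ A.T - y) @ A)
            # Sparsity-promoting step: pull W x toward its soft-thresholded
            # version (a proximal-style surrogate in the analysis domain).
            Wx = x @ self.W.T
            Wx_sparse = torch.sign(Wx) * F.relu(Wx.abs() - self.theta[k])
            x = x - self.step[k] * ((Wx - Wx_sparse) @ self.W)
        return x

# Usage: m, n = 100, 256; A = torch.randn(m, n) / m ** 0.5
# net = UnfoldedAnalysisDecoder(m, n); x_hat = net(torch.randn(8, m), A)
```

On the theory side, an estimate of the (empirical) Rademacher complexity of the hypothesis class turns into a generalization bound through the standard route. For a loss bounded in [0, 1] and a training sample S of size s, such bounds take the generic textbook form below; the paper's own theorem, constants and hypothesis class differ:

\[
\mathbb{E}[\ell_h] \;\le\; \frac{1}{s}\sum_{i=1}^{s} \ell_h(x_i) \;+\; 2\,\widehat{\mathfrak{R}}_S(\mathcal{H}) \;+\; 3\sqrt{\frac{\log(2/\delta)}{2s}},
\quad \text{with probability at least } 1-\delta, \ \forall h \in \mathcal{H}.
\]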
Related papers
- Optimization dependent generalization bound for ReLU networks based on
sensitivity in the tangent bundle [0.0]
We propose a PAC-type bound on the generalization error of feedforward ReLU networks.
The obtained bound does not explicitly depend on the depth of the network.
arXiv Detail & Related papers (2023-10-26T13:14:13Z)
- SPP-CNN: An Efficient Framework for Network Robustness Prediction [13.742495880357493]
This paper develops an efficient framework for network robustness prediction: the spatial pyramid pooling convolutional neural network (SPP-CNN).
The new framework installs a spatial pyramid pooling layer between the convolutional and fully-connected layers, overcoming the common mismatch issue in CNN-based prediction approaches.
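As a rough illustration of that mechanism (not the paper's exact implementation), a spatial pyramid pooling layer pools the last convolutional feature map over a few fixed grids and concatenates the results, so the fully-connected part always receives a vector of the same length regardless of the input size; the class name and pyramid levels below are assumptions:

```python
import torch
import torch.nn as nn

class SpatialPyramidPooling(nn.Module):
    """Pools a conv feature map at several grid resolutions and concatenates
    the results into a fixed-length vector, independent of input H x W."""

    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveMaxPool2d(l) for l in levels)

    def forward(self, feats):            # feats: (batch, channels, H, W)
        flat = [p(feats).flatten(1) for p in self.pools]
        return torch.cat(flat, dim=1)    # (batch, channels * sum(l * l))

# e.g. 256 channels with levels (1, 2, 4) -> a 256 * 21 = 5376-dim vector
# that can feed fixed-size fully-connected layers.
```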
arXiv Detail & Related papers (2023-05-13T09:09:20Z)
- Towards the Characterization of Representations Learned via
Capsule-based Network Architectures [0.0]
Capsule Networks (CapsNets) have been re-introduced as a more compact and interpretable alternative to standard deep neural networks.
Here, we conduct a systematic and principled study towards assessing the interpretability of these types of networks.
Our analysis on the MNIST, SVHN, PASCAL-part and CelebA datasets suggests that the representations encoded in CapsNets might not be as disentangled, nor as strictly related to part-whole relationships, as is commonly stated in the literature.
arXiv Detail & Related papers (2023-05-09T11:20:11Z)
- Generalization and Estimation Error Bounds for Model-based Neural
Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- Generalization analysis of an unfolding network for analysis-based
Compressed Sensing [27.53377180094267]
Unfolding networks have shown promising results in the Compressed Sensing (CS) field.
In this paper, we perform generalization analysis of a state-of-the-art ADMM-based unfolding network.
arXiv Detail & Related papers (2023-03-09T21:13:32Z)
- Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural
Networks [89.28881869440433]
This paper provides the first theoretical characterization of joint edge-model sparse learning for graph neural networks (GNNs).
It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising test accuracy.
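To illustrate only the neuron-pruning half of that statement (the node-sampling half is omitted), a generic magnitude-pruning step for one weight matrix could look like the following sketch; the function name, keep-ratio and per-row L2 criterion are illustrative assumptions, not the paper's procedure:

```python
import torch

def prune_lowest_magnitude_neurons(weight: torch.Tensor, keep_ratio: float = 0.5):
    """Zero out output neurons (rows of `weight`) whose L2 norm is smallest.

    weight: (out_features, in_features) matrix of a linear layer.
    Returns the pruned weight and a boolean mask of kept neurons.
    """
    norms = weight.norm(dim=1)                     # per-neuron magnitude
    k = max(1, int(keep_ratio * weight.shape[0]))  # number of neurons to keep
    kept = torch.zeros(weight.shape[0], dtype=torch.bool)
    kept[norms.topk(k).indices] = True
    return weight * kept.unsqueeze(1), kept
```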
arXiv Detail & Related papers (2023-02-06T16:54:20Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains unclear.
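One simple way to make this notion concrete (an illustrative measurement, not the paper's exact metric) is to track the numerical rank of the feature matrix each layer produces on a batch of inputs; the function name and tolerance rule below are assumptions:

```python
import torch
import torch.nn as nn

def feature_ranks(model: nn.Sequential, x: torch.Tensor, tol: float = 1e-5):
    """Numerical rank of each layer's output features for a batch x.

    Counts singular values above tol * (largest singular value), giving a
    rough measure of how much information each layer passes on.
    """
    ranks = []
    with torch.no_grad():
        for layer in model:
            x = layer(x)
            feats = x.flatten(1)                 # (batch, features)
            s = torch.linalg.svdvals(feats)      # descending singular values
            ranks.append(int((s > tol * s[0]).sum()))
    return ranks

# e.g. feature_ranks(nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
#                                  nn.Linear(64, 64)), torch.randn(128, 64))
```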
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
- Complexity Analysis of an Edge Preserving CNN SAR Despeckling Algorithm [1.933681537640272]
We study the effect of the complexity of the convolutional neural network on SAR despeckling.
Deeper networks generalize better on both simulated and real images.
arXiv Detail & Related papers (2020-04-17T17:02:01Z)
- ReActNet: Towards Precise Binary Neural Network with Generalized
Activation Functions [76.05981545084738]
We propose several ideas for enhancing a binary network to close its accuracy gap to real-valued networks without incurring any additional computational cost.
We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts.
We show that the proposed ReActNet outperforms all state-of-the-art methods by a large margin.
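As a rough sketch of what a generalized, learnable sign activation for a binary network can look like (an illustration in the spirit of the summary above, not ReActNet's exact definition), one can binarize pre-activations around a learned per-channel shift and keep gradients flowing with a straight-through estimator; the class name and parameter shapes are assumptions:

```python
import torch
import torch.nn as nn

class LearnableSign(nn.Module):
    """Binarizes activations around a learnable per-channel shift.

    Forward: +1 where (x - alpha) >= 0, else -1.
    Backward: straight-through estimator (clipped identity gradient).
    """

    def __init__(self, channels: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):                       # x: (batch, channels, H, W)
        shifted = x - self.alpha
        binary = torch.where(shifted >= 0,
                             torch.ones_like(shifted),
                             -torch.ones_like(shifted))
        # Forward pass uses `binary`; backward pass uses the gradient of
        # the clipped identity, i.e. 1 inside [-1, 1] and 0 outside.
        return shifted.clamp(-1, 1) + (binary - shifted.clamp(-1, 1)).detach()
```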
arXiv Detail & Related papers (2020-03-07T02:12:02Z)
- Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relations between the network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent and easily measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.