Generalization analysis of an unfolding network for analysis-based Compressed Sensing
- URL: http://arxiv.org/abs/2303.05582v1
- Date: Thu, 9 Mar 2023 21:13:32 GMT
- Title: Generalization analysis of an unfolding network for analysis-based Compressed Sensing
- Authors: Vicky Kouni, Yannis Panagakis
- Abstract summary: Unfolding networks have shown promising results in the Compressed Sensing (CS) field.
In this paper, we perform a generalization analysis of a state-of-the-art ADMM-based unfolding network.
- Score: 27.53377180094267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unfolding networks have shown promising results in the Compressed Sensing
(CS) field. Yet, the investigation of their generalization ability is still in
its infancy. In this paper, we perform a generalization analysis of a
state-of-the-art ADMM-based unfolding network, which jointly learns a decoder
for CS and a sparsifying redundant analysis operator. To this end, we first
impose a structural constraint on the learnable sparsifier, which parametrizes
the network's hypothesis class. We then estimate the Rademacher complexity of
this class. With this estimate in hand, we deliver generalization error bounds
for the examined network. Finally, the validity of our theory is assessed and
numerical comparisons to a state-of-the-art unfolding network are made on
synthetic and real-world datasets. Our experimental results demonstrate that
our proposed framework complies with our theoretical findings and outperforms
the baseline consistently across all datasets.
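For orientation, here is a minimal NumPy sketch of the kind of decoder under analysis: a fixed number of unfolded ADMM iterations for the analysis-l1 problem, in which the redundant analysis operator W plays the role of the learnable sparsifier. This is a generic illustration, not the paper's exact architecture; all dimensions, hyperparameters, and variable names below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unfolded_admm_decoder(y, A, W, num_layers=10, rho=1.0, lam=0.1):
    """Run num_layers unfolded ADMM iterations for analysis-l1 CS:
        min_x 0.5 * ||y - A x||_2^2 + lam * ||W x||_1
    In a trained unfolding network, the redundant analysis operator W would be
    a learnable parameter; here it is passed in fixed."""
    p, n = W.shape
    x = np.zeros(n)
    z = np.zeros(p)                         # splitting variable for W x
    u = np.zeros(p)                         # scaled dual variable
    M = A.T @ A + rho * W.T @ W             # x-update system, shared by layers
    Aty = A.T @ y
    for _ in range(num_layers):
        x = np.linalg.solve(M, Aty + rho * W.T @ (z - u))   # x-update
        z = soft_threshold(W @ x + u, lam / rho)            # z-update (l1 prox)
        u = u + W @ x - z                                   # dual update
    return x

# Toy usage with illustrative dimensions (m < n: compressed measurements).
rng = np.random.default_rng(0)
n, m, p = 64, 32, 96
A = rng.standard_normal((m, n)) / np.sqrt(m)
W = rng.standard_normal((p, n)) / np.sqrt(n)    # redundant: p > n
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = unfolded_admm_decoder(y, A, W, num_layers=30)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```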
Related papers
- Inferring Dynamic Networks from Marginals with Iterative Proportional Fitting [57.487936697747024]
A common network inference problem, arising from real-world data constraints, is how to infer a dynamic network from its time-aggregated adjacency matrix.
We introduce a principled algorithm that guarantees IPF converges under minimal changes to the network structure.
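For orientation, below is a minimal NumPy sketch of the classical IPF (Sinkhorn-style) update that the cited paper builds on; the dynamic-network formulation and its convergence guarantees are specific to the paper and not reproduced here.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, num_iters=1000, tol=1e-10):
    """Classical Iterative Proportional Fitting: alternately rescale rows and
    columns of a nonnegative seed matrix until its marginals match the
    targets (assumed consistent: row_targets.sum() == col_targets.sum())."""
    X = seed.astype(float).copy()
    for _ in range(num_iters):
        X *= (row_targets / X.sum(axis=1))[:, None]   # match row sums
        X *= (col_targets / X.sum(axis=0))[None, :]   # match column sums
        if np.allclose(X.sum(axis=1), row_targets, atol=tol):
            break
    return X

# Toy usage: fit a uniform 3x3 seed to prescribed marginals.
fitted = ipf(np.ones((3, 3)),
             row_targets=np.array([2.0, 3.0, 5.0]),
             col_targets=np.array([4.0, 4.0, 2.0]))
print(fitted.sum(axis=1), fitted.sum(axis=0))   # ~[2 3 5], ~[4 4 2]
```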
arXiv Detail & Related papers (2024-02-28T20:24:56Z)

- Sparsity-aware generalization theory for deep neural networks [12.525959293825318]
We present a new approach to analyzing generalization for deep feed-forward ReLU networks.
We show fundamental trade-offs between sparsity and generalization.
arXiv Detail & Related papers (2023-07-01T20:59:05Z)

- Towards the Characterization of Representations Learned via Capsule-based Network Architectures [0.0]
Capsule Networks (CapsNets) have been re-introduced as a more compact and interpretable alternative to standard deep neural networks.
Here, we conduct a systematic and principled study towards assessing the interpretability of these types of networks.
Our analysis on the MNIST, SVHN, PASCAL-Part, and CelebA datasets suggests that the representations encoded in CapsNets might not be as disentangled, or as strictly related to part-whole relationships, as is commonly stated in the literature.
arXiv Detail & Related papers (2023-05-09T11:20:11Z)

- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery exceed those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
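As a rough illustration of what a model-based network for sparse recovery unrolls, the sketch below runs plain ISTA iterations; in a LISTA-style network, the step size, threshold, and applied matrices would become per-layer learnable parameters. Dimensions and hyperparameters are toy assumptions, not the paper's setup.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unfolded_ista(y, A, num_layers=100, lam=0.05):
    """num_layers ISTA iterations for min_x 0.5*||y - A x||_2^2 + lam*||x||_1.
    A model-based network would replace the fixed step and threshold below
    with learnable per-layer parameters."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

# Toy usage: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = unfolded_ista(y, A, num_layers=300)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```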
arXiv Detail & Related papers (2023-04-19T16:39:44Z)

- Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling [83.77955213766896]
Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data.
To address their scalability issue, graph topology sampling has been proposed to reduce the memory and computational cost of training GCNs.
This paper provides the first theoretical justification of graph topology sampling in training (up to) three-layer GCNs.
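As a rough illustration of graph topology sampling, the sketch below builds one GCN layer on a Bernoulli subsample of the edge list; the sampling scheme and reweighting are generic illustrative choices, not the specific schemes the paper analyzes.

```python
import numpy as np

def sampled_gcn_layer(H, edges, num_nodes, W, keep_prob=0.5, rng=None):
    """One GCN layer H' = ReLU(A_hat H W), where the normalized adjacency
    A_hat is built from a Bernoulli subsample of the edge list, so only a
    fraction of the graph is touched per step."""
    rng = rng or np.random.default_rng()
    A = np.eye(num_nodes)                       # self-loops
    for i, j in edges:
        if rng.random() < keep_prob:
            # Reweight kept edges by 1/keep_prob so the sampled adjacency is
            # unbiased before normalization (a common heuristic, not the
            # paper's specific scheme).
            A[i, j] = A[j, i] = 1.0 / keep_prob
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))         # symmetric D^-1/2 A D^-1/2
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy usage: a 5-node path graph, 3-dim features mapped to 4 dims.
rng = np.random.default_rng(2)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
H = rng.standard_normal((5, 3))
W = rng.standard_normal((3, 4))
print(sampled_gcn_layer(H, edges, num_nodes=5, W=W, rng=rng).shape)  # (5, 4)
```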
arXiv Detail & Related papers (2022-07-07T21:25:55Z)

- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flow across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains largely unclear.
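One concrete way to probe this is to track the numerical rank of the feature matrix layer by layer, as in the toy NumPy sketch below; note that the pronounced rank decay the paper studies arises in trained deep networks, which this random-weight toy does not reproduce.

```python
import numpy as np

def numerical_rank(M, tol=1e-6):
    """Number of singular values above tol times the largest one."""
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

# Measure the numerical rank of the feature matrix after each layer of a
# random ReLU network.
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 64))   # 200 inputs, 64 features
for layer in range(5):
    W = rng.standard_normal((X.shape[1], 64)) / np.sqrt(X.shape[1])
    X = np.maximum(X @ W, 0.0)
    print(f"layer {layer + 1}: numerical rank = {numerical_rank(X)}")
```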
arXiv Detail & Related papers (2022-06-13T12:03:32Z)

- DECONET: an Unfolding Network for Analysis-based Compressed Sensing with Generalization Error Bounds [27.53377180094267]
We present a new deep unfolding network for analysis-sparsity-based Compressed Sensing.
The proposed network, coined Decoding Network (DECONET), jointly learns a decoder that reconstructs vectors from their incomplete, noisy measurements, together with a sparsifying redundant analysis operator.
arXiv Detail & Related papers (2022-05-14T12:50:48Z)

- Predicting the generalization gap in neural networks using topological data analysis [33.511371257571504]
We study the generalization gap of neural networks using methods from topological data analysis.
We compute homological persistence diagrams of weighted graphs constructed from neuron activation correlations after a training phase.
We compare the usefulness of different numerical summaries from persistence diagrams and show that a combination of some of them can accurately predict and partially explain the generalization gap without the need for a test set.
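A minimal sketch of this pipeline, assuming the third-party ripser.py package for the persistence computation: record activations over a batch, turn neuron-neuron correlations into a distance matrix, compute persistence diagrams, and extract one numerical summary. The particular summaries and the predictive model are the paper's; the choices below are illustrative.

```python
import numpy as np
from ripser import ripser   # third-party ripser.py package (assumed installed)

# Record activations of one layer over a batch, turn neuron-neuron
# correlations into a distance matrix, and compute persistence diagrams.
rng = np.random.default_rng(5)
activations = rng.standard_normal((500, 32))   # 500 examples x 32 neurons (toy)
corr = np.corrcoef(activations.T)              # neuron-neuron correlations
dist = np.clip(1.0 - np.abs(corr), 0.0, None)  # high |corr| -> small distance
np.fill_diagonal(dist, 0.0)
dgms = ripser(dist, maxdim=1, distance_matrix=True)["dgms"]

# One possible numerical summary: average lifetime of finite H0 features.
h0 = dgms[0][np.isfinite(dgms[0][:, 1])]
print("mean H0 lifetime:", float((h0[:, 1] - h0[:, 0]).mean()))
```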
arXiv Detail & Related papers (2022-03-23T11:15:36Z)

- Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations [58.731070632586594]
We provide the first formal robustness analysis of feed-forward neural networks with non-negative monotone activation functions under weight perturbations.
We also design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations.
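The formal bounds are in the paper; as a hands-on counterpart, the sketch below empirically measures how a toy ReLU network's output moves under multiplicative noise on its weights. Network sizes and noise levels are illustrative assumptions.

```python
import numpy as np

def mlp_forward(x, weights):
    """Feed-forward net with ReLU, a non-negative monotone activation."""
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)
    return weights[-1] @ x

# Compare outputs before and after multiplicative noise on every weight.
rng = np.random.default_rng(6)
weights = [rng.standard_normal((32, 16)),
           rng.standard_normal((32, 32)),
           rng.standard_normal((8, 32))]
x = rng.standard_normal(16)
baseline = mlp_forward(x, weights)
for eps in (0.001, 0.01, 0.1):
    perturbed = [W * (1.0 + eps * rng.standard_normal(W.shape)) for W in weights]
    gap = np.linalg.norm(mlp_forward(x, perturbed) - baseline)
    print(f"eps={eps}: output change = {gap:.4f}")
```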
arXiv Detail & Related papers (2021-03-03T06:17:03Z)

- Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relationship between a network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent, and easily measurable properties that tightly characterize the compressibility and generalizability of neural networks.
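As a rough illustration of the compression perspective, the sketch below measures how well a weight matrix is approximated by a truncated SVD; small error at low rank indicates few effective parameters, which compression-based generalization bounds can exploit. This is a generic proxy, not the paper's proposed properties.

```python
import numpy as np

def truncated_svd(W, rank):
    """Best rank-r approximation of W in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# A nearly low-rank weight matrix compresses with little error; the relative
# error at each rank is a crude proxy for the layer's compressibility.
rng = np.random.default_rng(7)
W = rng.standard_normal((64, 10)) @ rng.standard_normal((10, 64))  # rank <= 10
W += 0.01 * rng.standard_normal((64, 64))                          # small noise
for r in (5, 10, 20):
    err = np.linalg.norm(W - truncated_svd(W, r)) / np.linalg.norm(W)
    print(f"rank {r}: relative error = {err:.4f}")
```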
arXiv Detail & Related papers (2020-01-14T22:26:57Z)