Evaluating the Generalization Ability of Super-Resolution Networks
- URL: http://arxiv.org/abs/2205.07019v2
- Date: Mon, 4 Sep 2023 03:42:34 GMT
- Title: Evaluating the Generalization Ability of Super-Resolution Networks
- Authors: Yihao Liu, Hengyuan Zhao, Jinjin Gu, Yu Qiao, Chao Dong
- Abstract summary: We propose a Generalization Assessment Index for SR networks, namely SRGA.
SRGA exploits the statistical characteristics of the internal features of deep networks to measure the generalization ability.
We benchmark existing SR models on their generalization ability.
- Score: 45.867729539843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Performance and generalization ability are two important aspects in
evaluating deep learning models. However, research on the generalization ability of
Super-Resolution (SR) networks is currently absent. Assessing the
generalization ability of deep models not only helps us to understand their
intrinsic mechanisms, but also allows us to quantitatively measure their
applicability boundaries, which is important for unrestricted real-world
applications. To this end, we make the first attempt to propose a
Generalization Assessment Index for SR networks, namely SRGA. SRGA exploits the
statistical characteristics of the internal features of deep networks to
measure the generalization ability. Specifically, it is a non-parametric and
non-learning metric. To better validate our method, we collect a patch-based
image evaluation set (PIES) that includes both synthetic and real-world images,
covering a wide range of degradations. With SRGA and the PIES dataset, we benchmark
existing SR models on their generalization ability. This work provides insights
and tools for future research on model generalization in low-level vision.
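The abstract describes SRGA only at a high level: a non-parametric, non-learning metric built from the statistical characteristics of a network's internal features. As a rough illustration of that idea (not the paper's actual SRGA formulation), one can fit simple Gaussian statistics to deep-feature vectors extracted on a reference degradation and on an unseen degradation, and report a statistical distance between the two distributions; `generalization_index` and the diagonal-Gaussian assumption below are hypothetical simplifications:

```python
import numpy as np

def fit_diag_gaussian(feats):
    # feats: (n_samples, n_dims) array of flattened deep-feature vectors
    return feats.mean(axis=0), feats.var(axis=0) + 1e-8

def sym_kl_diag(mu1, var1, mu2, var2):
    # Symmetric KL divergence between two diagonal Gaussians
    kl12 = 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1)
    kl21 = 0.5 * np.sum(np.log(var1 / var2) + (var2 + (mu2 - mu1) ** 2) / var1 - 1)
    return 0.5 * (kl12 + kl21)

def generalization_index(ref_feats, test_feats):
    """Hypothetical SRGA-style index: statistical distance between the
    feature distributions seen on reference vs. unseen degradations.
    A smaller value suggests the network's internal representations
    stay stable, i.e., it generalizes better to the new degradation."""
    mu_r, var_r = fit_diag_gaussian(ref_feats)
    mu_t, var_t = fit_diag_gaussian(test_feats)
    return sym_kl_diag(mu_r, var_r, mu_t, var_t)
```

Because nothing here is learned, the index can be computed for any pretrained SR network by hooking an intermediate layer and collecting features over image patches, which matches the non-parametric, non-learning character the abstract claims.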
Related papers
- Generalizability of Neural Networks Minimizing Empirical Risk Based on Expressive Ability [20.371836553400232]
This paper investigates the generalizability of neural networks that minimize or approximately minimize empirical risk.
We provide theoretical insights into several phenomena in deep learning, including robust generalization.
arXiv Detail & Related papers (2025-03-06T05:36:35Z)
- Generalization and Knowledge Transfer in Abstract Visual Reasoning Models [0.0]
We study generalization and knowledge reuse capabilities of deep neural networks in the domain of abstract visual reasoning.
We introduce Attributeless-I-RAVEN, a benchmark with four generalization regimes that allow testing the generalization of abstract rules to held-out attributes.
We construct I-RAVEN-Mesh, a dataset that enriches RPMs with a novel component structure comprising line-based patterns.
arXiv Detail & Related papers (2024-06-16T20:26:38Z)
- Hyperspectral Benchmark: Bridging the Gap between HSI Applications through Comprehensive Dataset and Pretraining [11.935879491267634]
Hyperspectral Imaging (HSI) serves as a non-destructive spatial spectroscopy technique with a multitude of potential applications.
A recurring challenge lies in the limited size of the target datasets, impeding exhaustive architecture search.
This study introduces an innovative benchmark dataset encompassing three markedly distinct HSI applications.
arXiv Detail & Related papers (2023-09-20T08:08:34Z)
- Sparsity-aware generalization theory for deep neural networks [12.525959293825318]
We present a new approach to analyzing generalization for deep feed-forward ReLU networks.
We show fundamental trade-offs between sparsity and generalization.
arXiv Detail & Related papers (2023-07-01T20:59:05Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow constructing model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- Exploiting Explainable Metrics for Augmented SGD [43.00691899858408]
There are several unanswered questions about how learning under optimization really works and why certain strategies are better than others.
We propose new explainability metrics that measure the redundant information in a network's layers.
We then exploit these metrics to augment Stochastic Gradient Descent (SGD) by adaptively adjusting the learning rate in each layer to improve generalization performance.
arXiv Detail & Related papers (2022-03-31T00:16:44Z)
- Generalized Real-World Super-Resolution through Adversarial Robustness [107.02188934602802]
We present Robust Super-Resolution, a method that leverages the generalization capability of adversarial attacks to tackle real-world SR.
Our novel framework poses a paradigm shift in the development of real-world SR methods.
By using a single robust model, we outperform state-of-the-art specialized methods on real-world benchmarks.
arXiv Detail & Related papers (2021-08-25T22:43:20Z)
- Discovering "Semantics" in Super-Resolution Networks [54.45509260681529]
Super-resolution (SR) is a fundamental and representative task of low-level vision area.
It is generally thought that the features extracted from the SR network have no specific semantic information.
Can we find any "semantics" in SR networks?
arXiv Detail & Related papers (2021-08-01T09:12:44Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- Representation Based Complexity Measures for Predicting Generalization in Deep Learning [0.0]
Deep Neural Networks can generalize despite being significantly overparametrized.
Recent research has tried to examine this phenomenon from various viewpoints.
We provide an interpretation of generalization from the perspective of quality of internal representations.
arXiv Detail & Related papers (2020-12-04T18:53:44Z)
- Target-Embedding Autoencoders for Supervised Representation Learning [111.07204912245841]
This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional.
We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features and predictive of targets.
arXiv Detail & Related papers (2020-01-23T02:37:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.