Gi and Pal Scores: Deep Neural Network Generalization Statistics
- URL: http://arxiv.org/abs/2104.03469v1
- Date: Thu, 8 Apr 2021 01:52:49 GMT
- Title: Gi and Pal Scores: Deep Neural Network Generalization Statistics
- Authors: Yair Schiff, Brian Quanz, Payel Das, Pin-Yu Chen
- Abstract summary: We introduce two new measures, the Gi-score and Pal-score, that capture a deep neural network's generalization capabilities.
Inspired by the Gini coefficient and Palma ratio, our statistics are robust measures of a network's invariance to perturbations that accurately predict generalization gaps.
- Score: 58.8755389068888
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of Deep Learning is rich with empirical evidence of human-like performance on a variety of regression, classification, and control tasks.
However, despite these successes, the field lacks strong theoretical error bounds and consistent measures of network generalization and learned invariances.
In this work, we introduce two new measures, the Gi-score and Pal-score, that capture a deep neural network's generalization capabilities.
Inspired by the Gini coefficient and Palma ratio, measures of income inequality, our statistics are robust measures of a network's invariance to perturbations that accurately predict generalization gaps, i.e., the difference between accuracy on training and test sets.
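The abstract names the inspiration (Gini coefficient, Palma ratio) but not the scores' exact definitions, so the following is only a minimal sketch: it assumes we summarize a network's perturbation-response curve (accuracy retained at increasing perturbation magnitudes) with the two inequality statistics. All names (`gini`, `palma`, `accuracy_per_level`) and all numbers are hypothetical, and the paper's actual Gi-/Pal-score construction may differ.

```python
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient of a non-negative 1-D array.

    0 means all entries are equal; values near 1 mean the total is
    concentrated in a few entries.
    """
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Standard formula: G = 2 * sum(i * v_i) / (n * sum(v)) - (n + 1) / n
    index = np.arange(1, n + 1)
    return float(2.0 * np.sum(index * v) / (n * v.sum()) - (n + 1) / n)

def palma(values: np.ndarray) -> float:
    """Palma ratio: share held by the top 10% of entries divided by
    the share held by the bottom 40%."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    bottom = v[: int(0.4 * n)].sum()
    top = v[int(0.9 * n):].sum()
    return float(top / bottom) if bottom > 0 else float("inf")

# Hypothetical perturbation-response curve: accuracy retained as a
# perturbation (e.g., Gaussian input noise) grows in magnitude.
accuracy_per_level = np.array([0.95, 0.94, 0.90, 0.81, 0.62, 0.40])
drops = accuracy_per_level[0] - accuracy_per_level  # accuracy lost per level

print("Gini-style invariance statistic:", gini(drops))
print("Palma-style invariance statistic:", palma(drops))

# The quantity such statistics are meant to predict:
train_acc, test_acc = 0.99, 0.91
print("Generalization gap:", train_acc - test_acc)
```

Intuitively, a network whose accuracy losses are spread evenly across perturbation levels (low inequality) behaves differently from one whose losses are concentrated at a few levels, and the two statistics quantify that shape.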
Related papers
- Generalization bounds for regression and classification on adaptive covering input domains [1.4141453107129398]
We focus on the generalization bound, which serves as an upper limit for the generalization error.
In the case of classification tasks, we treat the target function as a one-hot, piece-wise constant function and employ the 0/1 loss for error measurement.
arXiv Detail & Related papers (2024-07-29T05:40:08Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations [77.34726150561087]
We propose a holistic approach for the detection of generalization errors in deep neural networks.
GIT combines gradient information with invariance transformations.
Our experiments demonstrate the superior performance of GIT compared to the state-of-the-art on a variety of network architectures.
arXiv Detail & Related papers (2023-07-05T22:04:38Z)
- Inconsistency, Instability, and Generalization Gap of Deep Neural Network Training [14.871738070617491]
We show that inconsistency is a more reliable indicator of generalization gap than the sharpness of the loss landscape.
The results also provide a theoretical basis for existing methods such as co-distillation and ensembling.
arXiv Detail & Related papers (2023-05-31T20:28:13Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves network generalization on multiple vision tasks.
Our method is simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
- Predicting Deep Neural Network Generalization with Perturbation Response Curves [58.8755389068888]
We propose a new framework for evaluating the generalization capabilities of trained networks.
Specifically, we introduce two new measures for accurately predicting generalization gaps.
We attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition (see the evaluation sketch after this list).
arXiv Detail & Related papers (2021-06-09T01:37:36Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbations.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
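Several entries above, including the PGDL one, evaluate a scalar statistic by how well it predicts measured generalization gaps across a pool of trained models. Below is a minimal sketch of that protocol, scoring a candidate measure by its Kendall rank correlation with the gaps; rank correlation is a common proxy in this literature, not necessarily the official PGDL metric (a conditional mutual information score), and all data here are hypothetical.

```python
from scipy.stats import kendalltau

# Hypothetical data: one entry per trained model in a pool.
measure_scores = [0.12, 0.45, 0.30, 0.80, 0.55]       # e.g., Gi-scores
generalization_gaps = [0.02, 0.06, 0.10, 0.18, 0.12]  # train acc - test acc

# A good measure ranks models the same way their gaps do.
# Here tau = 0.8: the measure orders 9 of the 10 model pairs
# consistently with their measured gaps.
tau, p_value = kendalltau(measure_scores, generalization_gaps)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```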
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.