The Role of Mutual Information in Variational Classifiers
- URL: http://arxiv.org/abs/2010.11642v3
- Date: Thu, 13 Apr 2023 11:53:48 GMT
- Title: The Role of Mutual Information in Variational Classifiers
- Authors: Matias Vera, Leonardo Rey Vega and Pablo Piantanida
- Abstract summary: We study the generalization error of classifiers relying on stochastic encodings trained on the cross-entropy loss.
We derive bounds on the generalization error showing that there exists a regime where it is bounded by the mutual information between the input features and their latent representations.
- Score: 47.10478919049443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Overfitting is a well-known phenomenon related to the generation of a
model that mimics too closely (or exactly) a particular instance of data and
may therefore fail to predict future observations reliably. In practice, this
behaviour is controlled by various, sometimes heuristic, regularization
techniques, which are motivated by developing upper bounds on the
generalization error. In this work, we study the generalization error of
classifiers relying on stochastic encodings trained on the cross-entropy loss,
which is often used in deep learning for classification problems. We derive
bounds on the generalization error showing that there exists a regime where the
generalization error is bounded by the mutual information between input
features and the corresponding representations in the latent space, which are
randomly generated according to the encoding distribution. Our bounds provide
an information-theoretic understanding of generalization in the so-called class
of variational classifiers, which are regularized by a Kullback-Leibler (KL)
divergence term. These results give theoretical grounds for the highly popular
KL term in variational inference methods, which was already recognized to act
effectively as a regularization penalty. We further observe connections with
well-studied notions such as Variational Autoencoders, Information Dropout,
Information Bottleneck and Boltzmann Machines. Finally, we perform numerical
experiments on MNIST and CIFAR datasets and show that mutual information is
indeed highly representative of the behaviour of the generalization error.
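To make the setup concrete, here is a minimal sketch (not the authors' code) of a variational classifier in the sense of the abstract: a stochastic encoder q(z|x) produces latent representations, a classifier head p(y|z) is trained with the cross-entropy loss, and a KL divergence term toward a fixed Gaussian prior regularizes the encoder. Averaged over inputs, that KL term upper-bounds the mutual information I(X; Z) discussed above. The layer sizes and the weight `beta` are illustrative assumptions.
```python
# Minimal sketch (illustrative, not the authors' code) of a variational classifier:
# stochastic encoder q(z|x), classifier p(y|z), cross-entropy data term,
# and a KL regularizer toward a standard Gaussian prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalClassifier(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32, num_classes=10):  # sizes are illustrative
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(256, latent_dim)   # log-variance of q(z|x)
        self.classifier = nn.Linear(latent_dim, num_classes)  # p(y|z)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterized sample of Z
        return self.classifier(z), mu, logvar

def variational_loss(logits, y, mu, logvar, beta=1e-3):
    """Cross-entropy plus beta * KL(q(z|x) || N(0, I)).

    Averaged over the data, the KL term upper-bounds the mutual information
    I(X; Z) between inputs and their stochastic representations.
    """
    ce = F.cross_entropy(logits, y)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    return ce + beta * kl
```
In a training loop, `logits, mu, logvar = model(x)` followed by `variational_loss(logits, y, mu, logvar).backward()` reproduces the cross-entropy-plus-KL objective analyzed in the abstract; setting `beta` to zero recovers an unregularized stochastic classifier.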
Related papers
- Enhancing Anomaly Detection Generalization through Knowledge Exposure: The Dual Effects of Augmentation [9.740752855568202]
Anomaly detection involves identifying instances within a dataset that deviate from the norm and occur infrequently.
Current benchmarks tend to favor methods biased towards low diversity in normal data, which does not align with real-world scenarios.
We propose new testing protocols and a novel method called Knowledge Exposure (KE), which integrates external knowledge to comprehend concept dynamics.
arXiv Detail & Related papers (2024-06-15T12:37:36Z) - Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem of learning with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z) - Class-wise Generalization Error: an Information-Theoretic Analysis [22.877440350595222]
We study the class-generalization error, which quantifies the generalization performance of each individual class.
We empirically validate our proposed bounds in different neural networks and show that they accurately capture the complex class-generalization error behavior.
arXiv Detail & Related papers (2024-01-05T17:05:14Z) - GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations [77.34726150561087]
We propose a holistic approach for the detection of generalization errors in deep neural networks.
GIT combines the usage of gradient information and invariance transformations.
Our experiments demonstrate the superior performance of GIT compared to the state-of-the-art on a variety of network architectures.
arXiv Detail & Related papers (2023-07-05T22:04:38Z) - FedGen: Generalizable Federated Learning for Sequential Data [8.784435748969806]
In many real-world distributed settings, spurious correlations exist due to biases and data sampling issues.
We present a generalizable federated learning framework called FedGen, which allows clients to identify and distinguish between spurious and invariant features.
We show that FedGen results in models that achieve significantly better generalization and can outperform the accuracy of current federated learning approaches by over 24%.
arXiv Detail & Related papers (2022-11-03T15:48:14Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z) - Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension [25.711297863946193]
We develop a theory for the study of fluctuations in an ensemble of generalised linear models trained on different, but correlated, features.
We provide a complete description of the joint distribution of the empirical risk minimiser for generic convex loss and regularisation in the high-dimensional limit.
arXiv Detail & Related papers (2022-01-31T17:44:58Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Generalisation error in learning with random features and the hidden manifold model [23.71637173968353]
We study generalised linear regression and classification for a synthetically generated dataset.
We consider the high-dimensional regime and use the replica method from statistical physics.
We show how to obtain the so-called double descent behaviour for logistic regression, with a peak at the interpolation threshold (a toy illustration is sketched after this list).
We discuss the role played by correlations in the data generated by the hidden manifold model.
arXiv Detail & Related papers (2020-02-21T14:49:41Z)
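As a toy illustration of the double-descent behaviour mentioned in the last entry, the following sketch prints the test error of a random-features model as the number of features crosses the number of training samples. It is a simplification under stated assumptions: ridgeless least squares on Gaussian data rather than logistic regression on the hidden manifold model, with purely illustrative sizes.
```python
# Toy double-descent demo (illustrative, not from the paper): minimum-norm least
# squares on nonlinear random features; the test error typically peaks near p = n.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 40                                   # training samples, input dimension
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d) / np.sqrt(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)    # noisy linear teacher
X_test = rng.standard_normal((2000, d))
y_test = X_test @ w_true

for p in [50, 100, 150, 190, 200, 210, 300, 600, 2000]:   # number of random features
    W = rng.standard_normal((d, p)) / np.sqrt(d)           # fixed random projection
    Phi, Phi_test = np.tanh(X @ W), np.tanh(X_test @ W)    # nonlinear random features
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # minimum-norm least squares
    mse = np.mean((Phi_test @ coef - y_test) ** 2)
    print(f"p = {p:5d}   test MSE = {mse:.3f}")            # expect a peak near p ≈ n = 200
```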
This list is automatically generated from the titles and abstracts of the papers on this site.