Invariance Measures for Neural Networks
- URL: http://arxiv.org/abs/2310.17404v1
- Date: Thu, 26 Oct 2023 13:59:39 GMT
- Title: Invariance Measures for Neural Networks
- Authors: Facundo Manuel Quiroga and Jordina Torrents-Barrena and Laura Cristina
Lanzarini and Domenec Puig-Valls
- Abstract summary: We propose measures to quantify the invariance of neural networks in terms of their internal representation.
The measures are efficient and interpretable, and can be applied to any neural network model.
- Score: 1.2845309023495566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Invariances in neural networks are useful and necessary for many tasks.
However, the representation of the invariance of most neural network models has
not been characterized. We propose measures to quantify the invariance of
neural networks in terms of their internal representation. The measures are
efficient and interpretable, and can be applied to any neural network model.
They are also more sensitive to invariance than previously defined measures. We
validate the measures and their properties in the domain of affine
transformations and the CIFAR10 and MNIST datasets, including their stability
and interpretability. Using the measures, we perform a first analysis of CNN
models and show that their internal invariance is remarkably stable to random
weight initializations, but not to changes in dataset or transformation. We
believe the measures will enable new avenues of research in invariance
representation.
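To make the idea concrete, here is a minimal sketch of a transformational-variance measure in the spirit of the abstract (the exact ratio below is an illustrative assumption, not necessarily the paper's definition): activations are collected for several transformed versions of each input, and each unit's variance across transformations is normalized by its variance across samples, so lower scores mean more invariance.

```python
import numpy as np

def invariance_scores(activations):
    """Per-unit invariance from an activation tensor of shape
    (n_samples, n_transforms, n_units).

    Illustrative ratio (assumed, not necessarily the paper's exact
    formula): variance over transformations, averaged over samples,
    normalized by variance over samples. Lower = more invariant.
    """
    var_transform = activations.var(axis=1).mean(axis=0)  # per unit
    var_sample = activations.mean(axis=1).var(axis=0)     # per unit
    return var_transform / (var_sample + 1e-12)

# Toy usage: 100 samples, 8 affine transformations, 16 units.
acts = np.random.default_rng(0).normal(size=(100, 8, 16))
print(invariance_scores(acts))  # high scores: these random units are not invariant
```

Because the score is a ratio of variances over the network's own activations, it can be read off any layer of any model without retraining, which is what makes a measure like this efficient and broadly applicable.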
Related papers
- Trade-Offs of Diagonal Fisher Information Matrix Estimators [53.35448232352667]
The Fisher information matrix can be used to characterize the local geometry of the parameter space of neural networks.
We examine two popular estimators whose accuracy and sample complexity depend on their associated variances.
We derive bounds of the variances and instantiate them in neural networks for regression and classification.
arXiv Detail & Related papers (2024-02-08T03:29:10Z)
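As a hedged illustration of the two estimator families this entry compares, the sketch below uses logistic regression as a stand-in for a network (the model, names, and sample counts are assumptions): a Monte Carlo diagonal Fisher that draws labels from the model, next to the empirical variant that reuses the observed labels; the gap between their variances is what the paper bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def diag_fisher_estimates(X, y, w, n_model_samples=10):
    """Two common diagonal Fisher estimates for logistic regression.

    - 'mc': labels are sampled from the model (Monte Carlo Fisher).
    - 'emp': observed labels are reused (empirical Fisher).
    Both average per-example squared gradients of the log-likelihood.
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))          # model probabilities
    mc = np.zeros_like(w)
    for _ in range(n_model_samples):
        ys = rng.binomial(1, p)               # labels drawn from the model
        g = X * (ys - p)[:, None]             # per-example grad of log-lik
        mc += (g ** 2).mean(axis=0)
    mc /= n_model_samples
    g_emp = X * (y - p)[:, None]              # grads at observed labels
    emp = (g_emp ** 2).mean(axis=0)
    return mc, emp

X = rng.normal(size=(200, 5))
y = rng.binomial(1, 0.5, size=200)
w = rng.normal(size=5)
print(diag_fisher_estimates(X, y, w))
```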
- What Affects Learned Equivariance in Deep Image Recognition Models? [10.590129221143222]
We find evidence for a correlation between learned translation equivariance and validation accuracy on ImageNet.
Data augmentation, reduced model capacity and inductive bias in the form of convolutions induce higher learned equivariance in neural networks.
arXiv Detail & Related papers (2023-04-05T17:54:25Z)
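The probe implied by this entry can be sketched directly; the formulation below is an assumption, not necessarily the authors' metric: a feature map is translation-equivariant to the extent that shifting the input and then applying the map matches applying the map and then shifting the output.

```python
import numpy as np

def translation_equivariance_error(feature_map_fn, img, shift=2):
    """Hypothetical probe of learned translation equivariance: compare
    features of a shifted image with shifted features of the original.
    Returns a relative error; 0 = perfectly equivariant to this shift."""
    f = feature_map_fn(img)
    f_of_shifted = feature_map_fn(np.roll(img, shift, axis=1))
    shifted_f = np.roll(f, shift, axis=1)
    return np.linalg.norm(f_of_shifted - shifted_f) / (np.linalg.norm(f) + 1e-12)

# Toy usage with a circular 3x3 box filter as the "network layer".
def box_filter(x):
    out = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out / 9.0

img = np.random.default_rng(0).random((32, 32))
print(translation_equivariance_error(box_filter, img))  # ~0: convolution commutes with shift
```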
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
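A minimal sketch of the layer mechanism this entry describes (the Gaussian output, shapes, and initialization are assumptions): two learnable sub-layers map the input to the mean and log-variance of the layer's output distribution, and the forward pass draws a reparameterized sample.

```python
import numpy as np

rng = np.random.default_rng(0)

class VariationalLayer:
    """Sketch of a VNN-style layer: two sub-layers produce the mean and
    log-variance of the layer's output distribution; the forward pass
    samples from it. Names and shapes are illustrative, not the paper's
    implementation."""
    def __init__(self, d_in, d_out):
        self.W_mu = rng.normal(scale=0.1, size=(d_in, d_out))
        self.W_logvar = rng.normal(scale=0.1, size=(d_in, d_out))

    def __call__(self, x):
        mu = x @ self.W_mu
        std = np.exp(0.5 * (x @ self.W_logvar))
        return mu + std * rng.normal(size=mu.shape)  # reparameterized sample

layer = VariationalLayer(8, 4)
x = rng.normal(size=(2, 8))
print(layer(x).shape)  # (2, 4); repeated calls yield different samples
```

The spread of repeated forward passes then serves as the layer's uncertainty estimate, analogous to repeated stochastic passes in Monte Carlo Dropout.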
- Measuring Representational Robustness of Neural Networks Through Shared Invariances [45.94090422087775]
A major challenge in studying robustness in deep learning is defining the set of "meaningless" perturbations to which a given Neural Network (NN) should be invariant.
Most work on robustness implicitly uses a human as the reference model to define such perturbations.
Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to.
arXiv Detail & Related papers (2022-06-23T18:49:13Z)
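A rough sketch of that view (the sampling scheme and tolerance below are assumptions, not the paper's measure): sample random perturbations, keep the ones the reference network is nearly invariant to, and measure how far the target network moves on exactly those.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_invariance_score(ref_fn, target_fn, x, n_trials=200, tol=1e-2):
    """Illustrative take on reference-defined invariance: accept random
    perturbations that barely change the reference network's output,
    then average the target network's drift on the accepted set.
    Lower = more shared invariance."""
    drifts = []
    for _ in range(n_trials):
        delta = rng.normal(scale=0.1, size=x.shape)
        if np.linalg.norm(ref_fn(x + delta) - ref_fn(x)) < tol:
            drifts.append(np.linalg.norm(target_fn(x + delta) - target_fn(x)))
    return np.mean(drifts) if drifts else np.nan

# Toy networks: both ignore the second input coordinate.
ref = lambda v: np.tanh(v[:1])
tgt = lambda v: np.tanh(2 * v[:1])
print(shared_invariance_score(ref, tgt, np.array([0.5, -0.3])))
```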
- Learning Invariant Weights in Neural Networks [16.127299898156203]
Many commonly used models in machine learning are constrained to respect certain symmetries in the data.
We propose a weight-space equivalent to this approach, by minimizing a lower bound on the marginal likelihood to learn invariances in neural networks.
arXiv Detail & Related papers (2022-02-25T00:17:09Z)
- Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
arXiv Detail & Related papers (2022-02-08T16:09:12Z)
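A sketch of the training-time mechanism this entry describes, with the sampling scheme assumed rather than taken from the paper: estimate how much each channel's mean and standard deviation vary across the batch, jitter each instance's statistics by that spread, and re-standardize the features.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_feature_stats(x, eps=1e-6):
    """Treat feature statistics as random variables instead of point
    estimates (assumed form, in the spirit of the paper): sample new
    per-instance channel means/stds around the measured ones, with the
    noise scale set by their batch-level spread.
    x: feature maps of shape (batch, channels, H, W)."""
    mu = x.mean(axis=(2, 3), keepdims=True)        # (B, C, 1, 1)
    sig = x.std(axis=(2, 3), keepdims=True) + eps
    mu_spread = mu.std(axis=0, keepdims=True)      # uncertainty of the stats
    sig_spread = sig.std(axis=0, keepdims=True)
    mu_new = mu + rng.normal(size=mu.shape) * mu_spread
    sig_new = sig + rng.normal(size=sig.shape) * sig_spread
    return sig_new * (x - mu) / sig + mu_new       # re-standardized features

feats = rng.normal(size=(8, 4, 16, 16))
print(perturb_feature_stats(feats).shape)  # (8, 4, 16, 16)
```

Applied only during training, a perturbation like this exposes the network to plausible domain shifts in feature statistics without changing the architecture at test time.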
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
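The estimate at the core of this entry is compact enough to write out; in the sketch below the kernel, bandwidth, and downstream use of the result are illustrative assumptions:

```python
import numpy as np

def nadaraya_watson_label_probs(x_query, X_train, Y_onehot, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average
    of training labels near the query. X_train: (n, d), Y_onehot: (n, k)."""
    d2 = ((X_train - x_query) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)          # Gaussian kernel weights
    return (w[:, None] * Y_onehot).sum(axis=0) / (w.sum() + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
Y = np.eye(3)[rng.integers(0, 3, size=50)]
print(nadaraya_watson_label_probs(np.zeros(2), X, Y))  # estimated class probabilities
```

Quantities derived from this estimate (e.g., its entropy, or how little kernel mass supports it) can then serve as uncertainty scores for a deterministic classifier.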
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
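A toy sketch of the joint optimization this entry describes, with integer horizontal shifts standing in for the general augmentation space (the shift family and plain averaging are assumptions): predictions are averaged over augmentations drawn from a distribution with a learnable width, so a single loss can be optimized with respect to both the network and the augmentation parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def augmented_prediction(net_fn, x, log_width, n_aug=8):
    """Average net_fn over horizontal shifts sampled from a learnable
    range of width exp(log_width). Training would backpropagate the task
    loss into both the network parameters and log_width, letting the
    data decide how much shift-invariance to keep."""
    width = np.exp(log_width)
    shifts = rng.uniform(-width, width, size=n_aug)
    preds = [net_fn(np.roll(x, int(round(s)), axis=-1)) for s in shifts]
    return np.mean(preds, axis=0)  # invariant in expectation over the range

net = lambda v: np.tanh(v).sum()   # stand-in network
x = rng.normal(size=16)
print(augmented_prediction(net, x, log_width=np.log(3.0)))
```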
- Physical invariance in neural networks for subgrid-scale scalar flux modeling [5.333802479607541]
We present a new strategy to model the subgrid-scale scalar flux in a three-dimensional turbulent incompressible flow using physics-informed neural networks (NNs).
We show that the proposed transformation-invariant NN model outperforms both purely data-driven ones and parametric state-of-the-art subgrid-scale models.
arXiv Detail & Related papers (2020-10-09T16:09:54Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)