Measuring Representational Robustness of Neural Networks Through Shared Invariances
- URL: http://arxiv.org/abs/2206.11939v1
- Date: Thu, 23 Jun 2022 18:49:13 GMT
- Title: Measuring Representational Robustness of Neural Networks Through Shared Invariances
- Authors: Vedant Nanda and Till Speicher and Camila Kolling and John P. Dickerson and Krishna P. Gummadi and Adrian Weller
- Abstract summary: A major challenge in studying robustness in deep learning is defining the set of ``meaningless'' perturbations to which a given Neural Network (NN) should be invariant.
Most work on robustness implicitly uses a human as the reference model to define such perturbations.
Our work offers a new view on robustness by using another reference NN to define the set of perturbations a given NN should be invariant to.
- Score: 45.94090422087775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major challenge in studying robustness in deep learning is defining the set
of ``meaningless'' perturbations to which a given Neural Network (NN) should be
invariant. Most work on robustness implicitly uses a human as the reference
model to define such perturbations. Our work offers a new view on robustness by
using another reference NN to define the set of perturbations a given NN should
be invariant to, thus generalizing the reliance on a reference ``human NN'' to
any NN. This makes measuring robustness equivalent to measuring the extent to
which two NNs share invariances, for which we propose a measure called STIR.
STIR re-purposes existing representation similarity measures to make them
suitable for measuring shared invariances. Using our measure, we are able to
gain insights into how shared invariances vary with changes in weight
initialization, architecture, loss functions, and training dataset. Our
implementation is available at: https://github.com/nvedant07/STIR.
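As a rough illustration of the idea, the following is a minimal sketch, not the authors' implementation (see the repository above for that); the helper names and the gradient-based inversion step are assumptions. To score a model m1 against a reference m2, it synthesizes inputs that m2 is (approximately) invariant to by matching m2's representations, then checks whether m1's representations also stay similar, using linear CKA as the similarity measure.

import torch

def linear_cka(X, Y):
    # Linear CKA between two representation matrices of shape (n_samples, dim).
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.T @ Y).norm() ** 2
    return hsic / ((X.T @ X).norm() * (Y.T @ Y).norm())

def invert_reference(m2, x, steps=200, lr=0.1):
    # Gradient-descend a random seed x' until m2(x') ~ m2(x): by construction,
    # x -> x' is a perturbation the reference model m2 is (nearly) invariant to.
    target = m2(x).detach()
    x_prime = torch.rand_like(x, requires_grad=True)
    opt = torch.optim.Adam([x_prime], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (m2(x_prime) - target).pow(2).mean().backward()
        opt.step()
    return x_prime.detach()

def stir(m1, m2, x):
    # Shared-invariance score of m1 relative to the reference m2.
    x_prime = invert_reference(m2, x)
    with torch.no_grad():
        return linear_cka(m1(x), m1(x_prime)).item()

Here m1 and m2 map a batch of inputs to (flattened) representations; note the measure is asymmetric by design, since m2 alone defines which perturbations count as meaningless.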
Related papers
- Invariance Measures for Neural Networks [1.2845309023495566]
We propose measures to quantify the invariance of neural networks in terms of their internal representation.
The measures are efficient and interpretable, and can be applied to any neural network model.
arXiv Detail & Related papers (2023-10-26T13:59:39Z)
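As a hedged sketch of what such a representation-level measure could look like (the paper's exact definition may differ; the names here are illustrative), one can compare a layer's activation variance across transformations of each input to its variance across inputs:

import torch

def invariance_score(layer_fn, xs, transforms):
    # layer_fn: input batch -> activations (n, d); xs: input batch (n, ...);
    # transforms: list of callables acting on a batch.
    acts = torch.stack([layer_fn(t(xs)) for t in transforms])   # (T, n, d)
    var_over_transforms = acts.var(dim=0).mean()                # spread caused by transforms
    var_over_samples = acts.mean(dim=0).var(dim=0).mean()       # spread across inputs
    # Near 0: the layer is invariant to the transformations; near or above 1:
    # transformations move activations as much as changing the input does.
    return (var_over_transforms / (var_over_samples + 1e-8)).item()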
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Sparse Function-space Representation of Neural Networks [23.4128813752424]
Deep neural networks (NNs) are known to lack uncertainty estimates and struggle to incorporate new data.
We present a method that mitigates these issues by converting NNs from weight space to function space, via a dual parameterization.
arXiv Detail & Related papers (2023-09-05T12:56:35Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
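A rough sketch of such a layer under the description above (class and attribute names are assumptions, not the authors' code): two learnable sub-layers map the input to the mean and log-variance of a Gaussian over the layer's output, and the forward pass draws a reparameterized sample.

import torch
import torch.nn as nn

class VariationalLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Linear(d_in, d_out)       # sub-layer for the mean
        self.log_var = nn.Linear(d_in, d_out)  # sub-layer for the (log) variance

    def forward(self, x):
        mu = self.mu(x)
        sigma = torch.exp(0.5 * self.log_var(x))
        return mu + sigma * torch.randn_like(mu)  # reparameterized sample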
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
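The basic primitive behind interval reachability is propagating elementwise input bounds through the network; below is a minimal sketch for one affine layer (the paper's INN-specific machinery is considerably more involved):

import torch

def interval_affine(lower, upper, W, b):
    # Propagate the box [lower, upper] through x -> x @ W.T + b, returning
    # tight elementwise output bounds via the center/radius decomposition.
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ W.T + b
    out_radius = radius @ W.abs().T  # |W| maps the radius
    return out_center - out_radius, out_center + out_radius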
- Structure and Distribution Metric for Quantifying the Quality of Uncertainty: Assessing Gaussian Processes, Deep Neural Nets, and Deep Neural Operators for Regression [0.0]
We propose two comparison metrics that can be applied in arbitrary dimensions in regression tasks.
The structure metric assesses the similarity in shape and location of uncertainty with the true error, while the distribution metric quantifies the supported magnitudes between the two.
We apply these metrics to Gaussian Processes (GPs), Ensemble Deep Neural Nets (DNNs), and Ensemble Deep Neural Operators (DNOs) on high-dimensional and nonlinear test cases.
arXiv Detail & Related papers (2022-03-09T04:16:31Z)
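A loose illustration of the two ideas (the paper's exact definitions are not given in the summary, so both functions below are assumptions): a structure-style score asking whether predicted uncertainty peaks where the true error does, and a distribution-style score comparing their magnitudes.

import numpy as np
from scipy.stats import wasserstein_distance

def structure_score(pred_std, abs_err):
    # Shape/location agreement: do uncertainty and true error co-vary pointwise?
    return np.corrcoef(pred_std.ravel(), abs_err.ravel())[0, 1]

def distribution_score(pred_std, abs_err):
    # Magnitude agreement: how far apart are the two value distributions?
    return wasserstein_distance(pred_std.ravel(), abs_err.ravel())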
- Encoding Involutory Invariance in Neural Networks [1.6371837018687636]
In certain situations, Neural Networks (NNs) are trained on data that obey underlying physical symmetries.
In this work, we explore a special kind of symmetry where functions are invariant with respect to involutory linear/affine transformations up to parity.
Numerical experiments indicate that the proposed models outperform baseline networks while respecting the imposed symmetry.
An adaptation of our technique to convolutional NN classification tasks for datasets with inherent horizontal/vertical reflection symmetry has also been proposed.
arXiv Detail & Related papers (2021-06-07T16:07:15Z)
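One simple way to hard-code such an invariance, as a hedged sketch (the paper's construction may differ): average a base network over the orbit {x, Ax} of an involutory map A (A @ A = I), which makes the wrapper satisfy f(Ax) = parity * f(x) by construction.

import torch
import torch.nn as nn

class InvolutorySymmetrized(nn.Module):
    def __init__(self, base, A, parity=1.0):
        # A must be involutory (A @ A == identity); parity is +1 or -1.
        super().__init__()
        self.base, self.parity = base, parity
        self.register_buffer("A", A)

    def forward(self, x):  # x: (n, d)
        # Symmetrize over {x, A x}; since A^2 = I, f(A x) = parity * f(x).
        return 0.5 * (self.base(x) + self.parity * self.base(x @ self.A.T))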
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Towards Understanding the Regularization of Adversarial Robustness on Neural Networks [46.54437309608066]
We study the performance degradation induced by adversarial robustness (AR) through the regularization perspective.
We find that AR is achieved by regularizing/biasing NNs towards less confident solutions.
arXiv Detail & Related papers (2020-11-15T08:32:09Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
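A condensed sketch of this joint optimization for one augmentation family, rotations (the class and parameter names are assumptions; the paper covers a much larger augmentation space): the width of a uniform rotation distribution is a learnable parameter that receives gradients through differentiable resampling, and predictions are averaged over sampled augmentations.

import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate(x, angle):
    # Differentiably rotate an image batch (N, C, H, W) by `angle` (radians).
    c, s = torch.cos(angle), torch.sin(angle)
    zero = torch.zeros_like(c)
    theta = torch.stack([torch.stack([c, -s, zero]),
                         torch.stack([s, c, zero])])       # (2, 3)
    theta = theta.unsqueeze(0).expand(x.size(0), -1, -1)   # (N, 2, 3)
    grid = F.affine_grid(theta, list(x.shape), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

class LearnedRotationInvariance(nn.Module):
    def __init__(self, net):
        super().__init__()
        self.net = net
        self.width = nn.Parameter(torch.tensor(0.1))  # learnable range (radians)

    def forward(self, x, n_samples=4):
        # Sample angles ~ U(-width, width); gradients reach `width` through
        # the reparameterized angles and the differentiable rotation.
        angles = (2 * torch.rand(n_samples, device=x.device) - 1) * self.width
        return torch.stack([self.net(rotate(x, a)) for a in angles]).mean(0)

In the full method, a regularizer encourages wider augmentation distributions so that the learned range does not simply collapse to zero.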
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.