On the relationship between class selectivity, dimensionality, and robustness
- URL: http://arxiv.org/abs/2007.04440v2
- Date: Tue, 13 Oct 2020 22:35:42 GMT
- Title: On the relationship between class selectivity, dimensionality, and robustness
- Authors: Matthew L. Leavitt, Ari S. Morcos
- Abstract summary: We investigate whether class selectivity confers robustness (or vulnerability) to perturbations of input data.
We found that mean class selectivity predicts vulnerability to naturalistic corruptions.
In contrast, we found that class selectivity increases robustness to multiple types of gradient-based adversarial attacks.
- Score: 25.48362370177062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the relative trade-offs between sparse and distributed representations
in deep neural networks (DNNs) are well-studied, less is known about how these
trade-offs apply to representations of semantically-meaningful information.
Class selectivity, the variability of a unit's responses across data classes or
dimensions, is one way of quantifying the sparsity of semantic representations.
Given recent evidence showing that class selectivity can impair generalization,
we sought to investigate whether it also confers robustness (or vulnerability)
to perturbations of input data. We found that mean class selectivity predicts
vulnerability to naturalistic corruptions; networks regularized to have lower
levels of class selectivity are more robust to corruption, while networks with
higher class selectivity are more vulnerable to corruption, as measured using
Tiny ImageNetC and CIFAR10C. In contrast, we found that class selectivity
increases robustness to multiple types of gradient-based adversarial attacks.
To examine this difference, we studied the dimensionality of the change in the
representation due to perturbation, finding that decreasing class selectivity
increases the dimensionality of this change for both corruption types, but with
a notably larger increase for adversarial attacks. These results demonstrate
the causal relationship between selectivity and robustness and provide new
insights into the mechanisms of this relationship.
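The class selectivity measure used in this line of work (see "Selectivity considered harmful", listed below) is the index (mu_max - mu_-max) / (mu_max + mu_-max), where mu_max is a unit's largest class-conditional mean activation and mu_-max is the mean of its activations over the remaining classes. A minimal NumPy sketch, with the function name and numerical epsilon chosen for illustration:

```python
import numpy as np

def class_selectivity_index(activations, labels, eps=1e-7):
    """Per-unit class selectivity index:
    (mu_max - mu_-max) / (mu_max + mu_-max + eps),
    where mu_max is a unit's largest class-conditional mean activation
    and mu_-max is the mean of its activations over all other classes.

    activations: (n_samples, n_units) array of non-negative unit activations.
    labels: (n_samples,) integer class labels.
    Returns: (n_units,) selectivity indices in [0, 1].
    """
    classes = np.unique(labels)
    # Class-conditional mean activation per unit: (n_classes, n_units)
    class_means = np.stack([activations[labels == c].mean(axis=0) for c in classes])
    mu_max = class_means.max(axis=0)
    # Mean over the non-maximal classes for each unit
    mu_neg_max = (class_means.sum(axis=0) - mu_max) / (len(classes) - 1)
    return (mu_max - mu_neg_max) / (mu_max + mu_neg_max + eps)
```

An index near 1 means a unit responds almost exclusively to one class; an index near 0 means its mean response is uniform across classes.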
Related papers
- Understanding the Detrimental Class-level Effects of Data Augmentation [63.1733767714073]
Achieving optimal average accuracy comes at the cost of significantly hurting individual class accuracy, by as much as 20% on ImageNet.
We present a framework for understanding how DA interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
arXiv Detail & Related papers (2023-12-07T18:37:43Z)
- Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on the forward and backward feature selection procedures.
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z)
- How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks [1.4502611532302039]
We show that a simple $p$-norm normalization of the logits, followed by taking the maximum logit as the confidence estimator, can lead to considerable gains in selective classification performance.
Our results are shown to be consistent under distribution shift.
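The confidence estimator described above can be sketched in a few lines: normalize the logit vector by its p-norm, then take the maximum entry. This is a minimal NumPy illustration of the idea, not the authors' exact implementation; the function name is chosen for illustration:

```python
import numpy as np

def pnorm_maxlogit_confidence(logits, p=2, eps=1e-12):
    """Confidence estimate for selective classification:
    normalize each logit vector by its p-norm, then take the maximum.

    logits: (n_samples, n_classes) array.
    Returns: (n_samples,) confidence scores.
    """
    norms = np.linalg.norm(logits, ord=p, axis=1, keepdims=True)
    return (logits / (norms + eps)).max(axis=1)
```

A selective classifier would then abstain on samples whose score falls below a chosen threshold.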
arXiv Detail & Related papers (2023-05-24T18:56:55Z)
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- A Little Robustness Goes a Long Way: Leveraging Universal Features for Targeted Transfer Attacks [4.511923587827301]
We show that training the source classifier to be "slightly robust" substantially improves the transferability of targeted attacks.
We argue that this result supports a universal-features hypothesis.
arXiv Detail & Related papers (2021-06-03T19:53:46Z)
- Context Decoupling Augmentation for Weakly Supervised Semantic Segmentation [53.49821324597837]
Weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years.
We present a Context Decoupling Augmentation (CDA) method to change the inherent context in which the objects appear.
To validate the effectiveness of the proposed method, extensive experiments on PASCAL VOC 2012 dataset with several alternative network architectures demonstrate that CDA can boost various popular WSSS methods to the new state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-03-02T15:05:09Z)
- Selective Classification Can Magnify Disparities Across Groups [89.14499988774985]
We find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities.
Increasing abstentions can even decrease accuracies on some groups.
We train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves each group.
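The selective-classification setting above (abstain when confidence falls below a threshold, then measure accuracy per group over the accepted samples) can be sketched as follows; the helper name and toy data are illustrative, not from the paper:

```python
import numpy as np

def per_group_selective_accuracy(preds, labels, confidences, groups, threshold):
    """Accuracy per group, counting only samples the classifier accepts
    (confidence >= threshold); abstained samples are excluded.

    Returns a dict mapping group id -> accuracy (NaN if the group is
    fully abstained on).
    """
    keep = confidences >= threshold
    acc = {}
    for g in np.unique(groups):
        mask = keep & (groups == g)
        acc[g] = float((preds[mask] == labels[mask]).mean()) if mask.any() else float("nan")
    return acc
```

On a toy example, two groups can have identical full-coverage accuracy while abstention raises one group's accuracy and leaves the other's unchanged, which is the disparity-magnification effect the paper describes.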
arXiv Detail & Related papers (2020-10-27T08:51:30Z)
- Linking average- and worst-case perturbation robustness via class selectivity and dimensionality [7.360807642941714]
We investigate whether class selectivity confers robustness (or vulnerability) to perturbations of input data.
We found that networks regularized to have lower levels of class selectivity were more robust to average-case perturbations.
In contrast, class selectivity increases robustness to multiple types of worst-case perturbations.
arXiv Detail & Related papers (2020-10-14T00:45:29Z)
- Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes? [5.718442081858377]
We compare various measures on a large set of units in AlexNet.
We find that the different measures provide different estimates of object selectivity.
We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks.
arXiv Detail & Related papers (2020-07-02T12:33:37Z)
- Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs [7.360807642941714]
We investigate the causal impact of class selectivity on network function by directly regularizing for or against class selectivity.
Using this regularizer to reduce class selectivity across units in convolutional neural networks increased test accuracy by over 2% for ResNet18 trained on Tiny ImageNet.
For ResNet20 trained on CIFAR10 we could reduce class selectivity by a factor of 2.5 with no impact on test accuracy, and reduce it nearly to zero with only a small (~2%) drop in test accuracy.
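A hedged sketch of the regularization idea: add alpha times the mean per-unit selectivity index to the task loss, with the sign of alpha controlling whether selectivity is penalized or promoted. In practice the term must be computed on differentiable activations inside the training graph; this NumPy version, with illustrative names, only shows the computation:

```python
import numpy as np

def selectivity_regularized_loss(task_loss, activations, labels, alpha, eps=1e-7):
    """Add alpha * mean class selectivity to the task loss.

    alpha > 0 regularizes *against* class selectivity;
    alpha < 0 regularizes *for* it.
    activations: (n_samples, n_units) non-negative unit activations.
    labels: (n_samples,) integer class labels.
    """
    classes = np.unique(labels)
    # Class-conditional mean activation per unit: (n_classes, n_units)
    class_means = np.stack([activations[labels == c].mean(axis=0) for c in classes])
    mu_max = class_means.max(axis=0)
    mu_neg_max = (class_means.sum(axis=0) - mu_max) / (len(classes) - 1)
    selectivity = (mu_max - mu_neg_max) / (mu_max + mu_neg_max + eps)
    return task_loss + alpha * selectivity.mean()
```

With alpha > 0, gradient descent on this combined loss pushes units toward less class-selective responses, matching the intervention the paper uses.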
arXiv Detail & Related papers (2020-03-03T00:22:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.