Robustness to Transformations Across Categories: Is Robustness To
Transformations Driven by Invariant Neural Representations?
- URL: http://arxiv.org/abs/2007.00112v4
- Date: Wed, 14 Jun 2023 22:34:29 GMT
- Title: Robustness to Transformations Across Categories: Is Robustness To
Transformations Driven by Invariant Neural Representations?
- Authors: Hojin Jang, Syed Suleman Abbas Zaidi, Xavier Boix, Neeraj Prasad,
Sharon Gilad-Gutnick, Shlomit Ben-Ami, Pawan Sinha
- Abstract summary: Deep Convolutional Neural Networks (DCNNs) have demonstrated impressive robustness in recognizing objects under transformations.
A hypothesis to explain such robustness is that DCNNs develop invariant neural representations that remain unaltered when the image is transformed.
This paper investigates the conditions under which invariant neural representations emerge by leveraging the fact that they facilitate robustness to transformations.
- Score: 1.7251667223970861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Convolutional Neural Networks (DCNNs) have demonstrated impressive
robustness in recognizing objects under transformations (e.g., blur or noise) when
these transformations are included in the training set. A hypothesis to explain
such robustness is that DCNNs develop invariant neural representations that
remain unaltered when the image is transformed. However, to what extent this
hypothesis holds true is an outstanding question, as robustness to
transformations could be achieved with properties different from invariance;
e.g., parts of the network could be specialized to recognize either transformed
or non-transformed images. This paper investigates the conditions under which
invariant neural representations emerge by leveraging the fact that they facilitate
robustness to transformations beyond the training distribution. Concretely, we
analyze a training paradigm in which only some object categories are seen
transformed during training and evaluate whether the DCNN is robust to
transformations across categories not seen transformed. Our results with
state-of-the-art DCNNs indicate that invariant neural representations do not
always drive robustness to transformations, as networks show robustness for
categories seen transformed during training even in the absence of invariant
neural representations. Invariance only emerges as the number of transformed
categories in the training set is increased. This phenomenon is much more
prominent with local transformations such as blurring and high-pass filtering
than with geometric transformations such as rotation and thinning, which entail
changes in the spatial arrangement of the object. Our results contribute to a
better understanding of invariant neural representations in deep learning and
the conditions under which they spontaneously emerge.
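The training paradigm described in the abstract lends itself to a short illustration. The sketch below (Python/PyTorch, not the authors' released code; the feature-extraction callable and the choice of Gaussian blur as the example "local" transformation are assumptions for illustration) shows how a transformation can be applied only to a chosen subset of categories during training, and how invariance could then be probed by comparing a layer's activations for transformed and untransformed images.

```python
# A minimal sketch, assuming PyTorch/torchvision; the dataset, network, and the
# feature-extraction callable are hypothetical placeholders, and Gaussian blur
# stands in for one of the paper's "local" transformations.
import torch
import torch.nn.functional as F
from torchvision import transforms

blur = transforms.GaussianBlur(kernel_size=9, sigma=3.0)  # example local transformation

def maybe_transform(image, label, transformed_categories):
    """Apply the transformation only to categories 'seen transformed' in training."""
    return blur(image) if label in transformed_categories else image

@torch.no_grad()
def invariance_score(images, layer_features):
    """Mean cosine similarity between a layer's activations for original and
    blurred images; values near 1 suggest an (approximately) invariant
    representation, values near 0 suggest the opposite."""
    f_clean = layer_features(images)        # activations for untransformed inputs
    f_blur = layer_features(blur(images))   # activations for transformed inputs
    return F.cosine_similarity(f_clean.flatten(1), f_blur.flatten(1), dim=1).mean()
```

Under such a setup, robustness for categories outside `transformed_categories` can be measured by test accuracy on their transformed images, while the similarity score indicates whether that robustness is actually accompanied by invariant representations.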
Related papers
- Invariant Shape Representation Learning For Image Classification [41.610264291150706]
In this paper, we introduce a novel framework that, for the first time, develops invariant shape representation learning (ISRL).
Our model ISRL is designed to jointly capture invariant features in latent shape spaces parameterized by deformable transformations.
By embedding the features that are invariant with regard to target variables in different environments, our model consistently offers more accurate predictions.
arXiv Detail & Related papers (2024-11-19T03:39:43Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - B-cos Alignment for Inherently Interpretable CNNs and Vision
Transformers [97.75725574963197]
We present a new direction for increasing the interpretability of deep neural networks (DNNs) by promoting weight-input alignment during training.
We show that a sequence of such transformations induces a single linear transformation that faithfully summarises the full model computations.
We show that the resulting explanations are of high visual quality and perform well under quantitative interpretability metrics.
arXiv Detail & Related papers (2023-06-19T12:54:28Z) - Domain Generalization In Robust Invariant Representation [10.132611239890345]
In this paper, we investigate the generalization of invariant representations on out-of-distribution data.
We show that the invariant model learns unstructured latent representations that are robust to distribution shifts.
arXiv Detail & Related papers (2023-04-07T00:58:30Z) - Self-Supervised Learning for Group Equivariant Neural Networks [75.62232699377877]
Group equivariant neural networks are the models whose structure is restricted to commute with the transformations on the input.
We propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss.
Experiments on standard image recognition benchmarks demonstrate that the equivariant neural networks exploit the proposed self-supervised tasks.
arXiv Detail & Related papers (2023-03-08T08:11:26Z) - Revisiting Transformation Invariant Geometric Deep Learning: Are Initial
Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z) - Deformation Robust Roto-Scale-Translation Equivariant CNNs [10.44236628142169]
Group-equivariant convolutional neural networks (G-CNNs) achieve significantly improved generalization performance with intrinsic symmetry.
General theory and practical implementation of G-CNNs have been studied for planar images under either rotation or scaling transformation.
arXiv Detail & Related papers (2021-11-22T03:58:24Z) - Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z) - Augmenting Implicit Neural Shape Representations with Explicit
Deformation Fields [95.39603371087921]
Implicit neural representation is a recent approach to learn shape collections as zero level-sets of neural networks.
We advocate deformation-aware regularization for implicit neural representations, aiming at producing plausible deformations as latent code changes.
arXiv Detail & Related papers (2021-08-19T22:07:08Z)