Self-Supervised Learning for Group Equivariant Neural Networks
- URL: http://arxiv.org/abs/2303.04427v1
- Date: Wed, 8 Mar 2023 08:11:26 GMT
- Title: Self-Supervised Learning for Group Equivariant Neural Networks
- Authors: Yusuke Mukuta and Tatsuya Harada
- Abstract summary: Group equivariant neural networks are models whose structure is constrained to commute with transformations of the input.
We propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss.
Experiments on standard image recognition benchmarks demonstrate that equivariant neural networks can effectively exploit the proposed self-supervised tasks.
- Score: 75.62232699377877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a method to construct pretext tasks for self-supervised
learning on group equivariant neural networks. Group equivariant neural
networks are models whose structure is constrained to commute with
transformations of the input. It is therefore important to construct pretext
tasks for self-supervised learning that do not contradict this equivariance. To
ensure that training is consistent with the equivariance, we propose two
concepts for self-supervised tasks: equivariant pretext labels and invariant
contrastive loss. Equivariant pretext labels use a set of labels on which we
can define transformations that correspond to changes of the input. Invariant
contrastive loss uses a modified contrastive loss that absorbs the effect of
transformations on each input. Experiments on standard image recognition
benchmarks demonstrate that equivariant neural networks can effectively exploit
the proposed equivariant self-supervised tasks.
Related papers
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- Using and Abusing Equivariance [10.70891251559827]
We show how Group Equivariant Convolutional Neural Networks use subsampling to learn to break equivariance to their symmetries.
We show that a change in the input dimension of a network as small as a single pixel can be enough for commonly used architectures to become only approximately, rather than exactly, equivariant (a minimal sketch of how strided subsampling breaks exact translation equivariance appears after this list).
arXiv Detail & Related papers (2023-08-22T09:49:26Z)
- Restore Translation Using Equivariant Neural Networks [7.78895108256899]
In this paper, we propose a pre-classifier restorer to recover translated (or even rotated) inputs to a convolutional neural network.
The restorer is based on a theoretical result that gives a necessary and sufficient condition for an affine operator to be translation equivariant on a tensor space.
arXiv Detail & Related papers (2023-06-29T13:34:35Z)
- Learning Rotation-Equivariant Features for Visual Correspondence [41.79256655501003]
We introduce a self-supervised learning framework to extract discriminative rotation-invariant descriptors.
By employing group-equivariant CNNs, our method learns rotation-equivariant features and their orientations explicitly.
Our method demonstrates state-of-the-art matching accuracy among existing rotation-invariant descriptors under varying rotation.
arXiv Detail & Related papers (2023-03-25T13:42:07Z)
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift [91.38796390449504]
Combinations of domains and labels are not observed during training but appear in the test environment.
We provide a unique formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
arXiv Detail & Related papers (2022-08-03T12:31:31Z)
- Unsupervised Learning of Group Invariant and Equivariant Representations [10.252723257176566]
We extend group invariant and equivariant representation learning to the field of unsupervised deep learning.
We propose a general learning strategy based on an encoder-decoder framework in which the latent representation is separated into an invariant term and an equivariant group action component.
The key idea is that the network learns to encode and decode data to and from a group-invariant representation by additionally learning to predict the group action needed to align the input and output poses when solving the reconstruction task.
arXiv Detail & Related papers (2022-02-15T16:44:21Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge of intra-class variance due to transformations is a powerful way to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Group Equivariant Neural Architecture Search via Group Decomposition and Reinforcement Learning [17.291131923335918]
We prove a new group-theoretic result in the context of equivariant neural networks.
We also design an algorithm to construct equivariant networks that significantly improves computational complexity.
We use deep Q-learning to search for group equivariant networks that maximize performance.
arXiv Detail & Related papers (2021-04-10T19:37:25Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and the augmentation parameters (a toy sketch of this joint optimization appears after this list).
From a large space of augmentations, the correct set and extent of invariances can be recovered on image classification, regression, segmentation, and molecular property prediction.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
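Regarding the "Using and Abusing Equivariance" entry above, the following minimal sketch (our own illustration, not code from that paper) shows why subsampling interacts badly with exact equivariance: in a stride-2 layer, shifting the input by the stride shifts the output by one sample, but shifting it by a single pixel yields an output that matches no shift of the original output.

```python
import numpy as np

rng = np.random.default_rng(0)

def circular_blur(x):
    """3-tap circular average filter: exactly equivariant to circular shifts."""
    return (np.roll(x, 1) + 2.0 * x + np.roll(x, -1)) / 4.0

def blur_then_subsample(x, stride=2):
    """Blur followed by subsampling, mimicking a strided convolution layer."""
    return circular_blur(x)[::stride]

x = rng.standard_normal(16)
out = blur_then_subsample(x)

# Shifting the input by the stride (2 px) shifts the output by exactly one sample:
print(np.allclose(blur_then_subsample(np.roll(x, 2)), np.roll(out, 1)))  # True

# Shifting by a single pixel produces an output that matches *no* shift of the
# original output, so exact translation equivariance is broken by subsampling.
one_px = blur_then_subsample(np.roll(x, 1))
print(any(np.allclose(one_px, np.roll(out, s)) for s in range(out.size)))  # False
```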
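For the "Learning Invariances in Neural Networks" entry, here is a toy sketch of optimizing an augmentation distribution jointly with the network weights. The task (predicting the norm of a 2-D point, which is rotation invariant), the bounded sigmoid parameterization of the rotation half-width, and the small bonus term encouraging wider augmentation ranges are simplifying assumptions made for illustration, not that paper's setup.

```python
import math
import torch

torch.manual_seed(0)

# Toy task: the label is the norm of a 2-D input, hence invariant to rotations.
x = torch.randn(512, 2)
y = x.norm(dim=1, keepdim=True)

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

# Learnable augmentation parameter: a bounded half-width (in radians) of a uniform
# distribution over rotation angles; it starts near zero (almost no augmentation).
theta_raw = torch.nn.Parameter(torch.tensor(-2.0))

def rotate(points, angle):
    """Rotate each 2-D point by a per-sample angle (differentiable w.r.t. the angle)."""
    c, s = torch.cos(angle), torch.sin(angle)
    return torch.stack([c * points[:, 0] - s * points[:, 1],
                        s * points[:, 0] + c * points[:, 1]], dim=1)

opt = torch.optim.Adam(list(net.parameters()) + [theta_raw], lr=1e-2)

for step in range(2000):
    width = math.pi * torch.sigmoid(theta_raw)    # half-width kept in (0, pi)
    u = 2.0 * torch.rand(x.shape[0]) - 1.0        # reparameterised U(-1, 1) sample
    pred = net(rotate(x, width * u))              # train on randomly rotated inputs
    # Training loss plus a small bonus for wider augmentation ranges; the network
    # weights and the augmentation width receive gradients from the same objective.
    loss = torch.nn.functional.mse_loss(pred, y) - 0.01 * width
    opt.zero_grad()
    loss.backward()
    opt.step()

# Since the task really is rotation invariant, the learned half-width should grow
# toward pi instead of collapsing to zero.
print("learned rotation half-width (radians):",
      (math.pi * torch.sigmoid(theta_raw)).item())
```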