Unsupervised Learning of Invariance Transformations
- URL: http://arxiv.org/abs/2307.12937v1
- Date: Mon, 24 Jul 2023 17:03:28 GMT
- Title: Unsupervised Learning of Invariance Transformations
- Authors: Aleksandar Vučković, Benedikt Stock, Alexander V. Hopp, Mathias Winkel, and Helmut Linde
- Abstract summary: We develop a general algorithmic framework for finding approximate graph automorphisms and discuss how it can be applied to weighted graphs in general.
- Score: 105.54048699217668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need for large amounts of training data in modern machine learning is one
of the biggest challenges of the field. Compared to the brain, current
artificial algorithms are much less capable of learning invariance
transformations and employing them to extrapolate knowledge from small sample
sets. It has recently been proposed that the brain might encode perceptual
invariances as approximate graph symmetries in the network of synaptic
connections. Such symmetries may arise naturally through a biologically
plausible process of unsupervised Hebbian learning. In the present paper, we
illustrate this proposal on numerical examples, showing that invariance
transformations can indeed be recovered from the structure of recurrent
synaptic connections which form within a layer of feature detector neurons via
a simple Hebbian learning rule. In order to numerically recover the invariance
transformations from the resulting recurrent network, we develop a general
algorithmic framework for finding approximate graph automorphisms. We discuss
how this framework can be used to find approximate automorphisms in weighted
graphs in general.
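To make the two stages of the proposal concrete, below is a minimal, self-contained Python sketch. It is not the authors' implementation: the ring-shaped stimulus ensemble, the learning rate, and the greedy transposition search are illustrative assumptions. A plain Hebbian rule turns co-activations of feature detectors into a weighted recurrent graph, and a hill-climbing search then looks for a permutation of neurons that approximately preserves the edge weights, i.e. an approximate automorphism.

```python
import numpy as np

# --- Stage 1: Hebbian formation of a weighted recurrent graph -----------
# Hypothetical stimulus ensemble: Gaussian bumps translated around a ring,
# so detectors tuned to nearby positions co-activate (a toy invariance).
rng = np.random.default_rng(0)
n_neurons, n_stimuli, eta = 20, 500, 0.01

positions = np.arange(n_neurons)
A = np.zeros((n_neurons, n_neurons))          # recurrent weight matrix
for _ in range(n_stimuli):
    center = rng.integers(n_neurons)
    dist = np.minimum(np.abs(positions - center),
                      n_neurons - np.abs(positions - center))
    x = np.exp(-(dist ** 2) / 4.0)            # activations of the layer
    A += eta * np.outer(x, x)                 # Hebbian update: w_ij += eta * x_i * x_j
np.fill_diagonal(A, 0.0)                      # no self-connections

# --- Stage 2: greedy search for an approximate graph automorphism -------
# Score a permutation p by how well it preserves edge weights:
#   cost(p) = || A - P A P^T ||_F, with P the permutation matrix of p.
def cost(A, p):
    return np.linalg.norm(A - A[np.ix_(p, p)])

def greedy_automorphism(A, start, n_passes=50):
    """Hill-climb over transpositions from a given starting permutation."""
    p = start.copy()
    best = cost(A, p)
    for _ in range(n_passes):
        improved = False
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                p[i], p[j] = p[j], p[i]       # try swapping two neurons
                c = cost(A, p)
                if c < best - 1e-12:
                    best, improved = c, True
                else:
                    p[i], p[j] = p[j], p[i]   # undo the swap
        if not improved:
            break
    return p, best

# Seed with a cyclic shift rather than the identity (which is trivially an
# automorphism): on the ring ensemble above, rotations are the expected
# approximate symmetries of the learned weight graph.
seed = np.roll(np.arange(n_neurons), 1)
perm, residual = greedy_automorphism(A, seed)
print("residual ||A - PAP^T||_F =", round(residual, 4))  # near 0 => approximate symmetry
```

On the toy ring ensemble the residual stays near zero for the cyclic shift, which is the recovered invariance transformation; the paper's actual framework is more general than this greedy sketch.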
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Does the Brain Infer Invariance Transformations from Graph Symmetries? [0.0]
The invariance of natural objects under perceptual changes is possibly encoded in the brain by symmetries in the graph of synaptic connections.
The graph can be established via unsupervised learning in a biologically plausible process across different perceptual modalities.
arXiv Detail & Related papers (2021-11-11T12:35:13Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Self-Supervised Graph Representation Learning via Topology Transformations [61.870882736758624]
We present Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to downstream node and graph classification tasks; the results show that it outperforms state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2021-05-25T06:11:03Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the outcome of a transformation becomes predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View [0.0]
We study how homeomorphism affects the learned representation of a malware traffic dataset.
Our results suggest that although the details of learned representations and the specific coordinate system defined over the manifold of all parameters differ slightly, the functional approximations are the same.
arXiv Detail & Related papers (2020-09-16T15:37:44Z)
- Teaching Recurrent Neural Networks to Modify Chaotic Memories by Example [14.91507266777207]
We show that a recurrent neural network can learn to modify its representation of complex information using only examples.
We provide a mechanism for how these computations are learned, and demonstrate that a single network can simultaneously learn multiple computations.
arXiv Detail & Related papers (2020-05-03T20:51:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.