Binding and Perspective Taking as Inference in a Generative Neural
Network Model
- URL: http://arxiv.org/abs/2012.05152v1
- Date: Wed, 9 Dec 2020 16:43:26 GMT
- Title: Binding and Perspective Taking as Inference in a Generative Neural
Network Model
- Authors: Mahdi Sadeghi, Fabian Schrodt, Sebastian Otte, Martin V. Butz
- Abstract summary: A generative encoder-decoder architecture adapts its perspective and binds features by means of retrospective inference.
We show that the resulting gradient-based inference process solves the perspective taking and binding problem for known biological motion patterns.
- Score: 1.0323063834827415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to flexibly bind features into coherent wholes from different
perspectives is a hallmark of cognition and intelligence. Importantly, the
binding problem is not only relevant for vision but also for general
intelligence, sensorimotor integration, event processing, and language. Various
artificial neural network models have tackled this problem with dynamic neural
fields and related approaches. Here we focus on a generative encoder-decoder
architecture that adapts its perspective and binds features by means of
retrospective inference. We first train a model to learn sufficiently accurate
generative models of dynamic biological motion or other harmonic motion
patterns, such as a pendulum. We then scramble the input to a certain extent,
possibly vary the perspective onto it, and propagate the prediction error back
onto a binding matrix, that is, hidden neural states that determine feature
binding. Moreover, we propagate the error further back onto perspective taking
neurons, which rotate and translate the input features onto a known frame of
reference. Evaluations show that the resulting gradient-based inference process
solves the perspective taking and binding problem for known biological motion
patterns, essentially yielding a Gestalt perception mechanism. In addition,
redundant feature properties and population encodings are shown to be highly
useful. While we evaluate the algorithm on biological motion patterns, the
principled approach should be applicable to binding and Gestalt perception
problems in other domains.
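
To make the retrospective-inference procedure described in the abstract concrete, the following is a minimal sketch in PyTorch, assuming a toy 2D setting: a frozen sequence predictor stands in for the trained generative model, and the prediction error is back-propagated only onto a softmax-normalized binding matrix and onto rotation/translation (perspective) parameters. The network, dimensionalities, and optimizer settings are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (PyTorch) of retrospective inference: a trained sequence
# predictor is frozen, and the prediction error is back-propagated only onto
# a soft binding matrix and onto perspective (rotation/translation) parameters.
# All names, dimensions, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATURES, DIM, HIDDEN = 5, 2, 32      # e.g. 5 tracked points in 2D (toy setting)

class MotionPredictor(nn.Module):
    """Stand-in generative model: predicts the next frame of canonically
    bound, canonically oriented motion features."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_FEATURES * DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, N_FEATURES * DIM)

    def forward(self, x):               # x: (batch, time, N_FEATURES * DIM)
        h, _ = self.rnn(x)
        return self.out(h)              # predicted features for the next frame

def perspective_transform(x, angle, shift):
    """Rotate and translate observed 2D features into the canonical frame."""
    c, s = torch.cos(angle), torch.sin(angle)
    rot = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])   # (2, 2)
    return x @ rot.T + shift

def retrospective_inference(model, observed, steps=200, lr=0.05):
    """observed: (time, N_FEATURES, DIM), possibly scrambled and rotated."""
    for p in model.parameters():                     # freeze the trained model
        p.requires_grad_(False)
    binding_logits = torch.zeros(N_FEATURES, N_FEATURES, requires_grad=True)
    angle = torch.tensor(0.0, requires_grad=True)
    shift = torch.zeros(DIM, requires_grad=True)
    opt = torch.optim.Adam([binding_logits, angle, shift], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        binding = torch.softmax(binding_logits, dim=-1)     # soft feature assignment
        x = perspective_transform(observed, angle, shift)   # into canonical frame
        x = binding @ x                                      # re-bind the features
        seq = x.reshape(1, -1, N_FEATURES * DIM)
        pred = model(seq[:, :-1])                            # predict frame t+1 from t
        loss = ((pred - seq[:, 1:]) ** 2).mean()             # prediction error
        loss.backward()                   # error flows back onto binding & perspective
        opt.step()
    return binding_logits.detach().softmax(-1), angle.detach(), shift.detach()
```

The predictor would first be trained on canonical, correctly bound motion; at test time only the binding matrix and the perspective parameters are adapted, which mirrors the gradient-based inference described in the abstract (the paper itself works with 3D biological motion and population encodings rather than this toy 2D setup).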
Related papers
- Binding Dynamics in Rotating Features [72.80071820194273]
We propose an alternative "cosine binding" mechanism, which explicitly computes the alignment between features and adjusts weights accordingly.
This allows us to draw direct connections to self-attention and biological neural processes, and to shed light on the fundamental dynamics for object-centric representations to emerge in Rotating Features.
arXiv Detail & Related papers (2024-02-08T12:31:08Z)
- Exploring mechanisms of Neural Robustness: probing the bridge between geometry and spectrum [0.0]
We study the link between representation smoothness and spectrum by using weight, Jacobian and spectral regularization.
Our research aims to understand the interplay between geometry, spectral properties, robustness, and expressivity in neural representations.
arXiv Detail & Related papers (2024-02-05T12:06:00Z)
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolutional network assisted by knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Binding Dancers Into Attractors [0.5801044612920815]
Feature binding and perspective taking are crucial cognitive abilities.
We propose a recurrent neural network model that solves both challenges.
We first train an LSTM to predict 3D motion dynamics from a canonical perspective (a minimal training sketch follows this entry).
We then present similar motion dynamics from novel viewpoints and with novel feature arrangements.
arXiv Detail & Related papers (2022-06-01T22:01:29Z)
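
As a companion to the sketch after the main abstract, the following minimal PyTorch example illustrates the first step described in the "Binding Dancers Into Attractors" entry above: training an LSTM to predict motion dynamics seen from a canonical perspective. The synthetic harmonic trajectories and all hyperparameters are placeholders, not the paper's data or settings.

```python
# Minimal sketch: train an LSTM next-frame predictor on canonical-perspective
# 3D motion. Synthetic harmonic trajectories stand in for biological motion;
# sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

N_POINTS, DIM, HIDDEN = 3, 3, 64            # 3 tracked points in 3D (toy setting)

def canonical_motion(batch=16, steps=60):
    """Synthetic pendulum-like motion in the canonical frame."""
    t = torch.linspace(0, 4 * torch.pi, steps)
    phase = torch.rand(batch, N_POINTS, 1) * 2 * torch.pi
    xyz = torch.stack([torch.sin(t + phase),              # harmonic x
                       torch.cos(t + phase),              # harmonic y
                       0.1 * torch.sin(2 * t + phase)],   # small z oscillation
                      dim=-1)                             # (batch, N_POINTS, steps, 3)
    return xyz.permute(0, 2, 1, 3).reshape(batch, steps, N_POINTS * DIM)

class CanonicalLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_POINTS * DIM, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, N_POINTS * DIM)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h)

model = CanonicalLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                  # short demo training loop
    seq = canonical_motion()
    pred = model(seq[:, :-1])                         # predict frame t+1 from frames <= t
    loss = ((pred - seq[:, 1:]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# At test time, novel viewpoints and feature arrangements would be handled by
# retrospective inference, e.g. as in the sketch given after the main abstract.
```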
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Modeling Implicit Bias with Fuzzy Cognitive Maps [0.0]
This paper presents a Fuzzy Cognitive Map model to quantify implicit bias in structured datasets.
We introduce a new reasoning mechanism equipped with a normalization-like transfer function that prevents neurons from saturating.
arXiv Detail & Related papers (2021-12-23T17:04:12Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors produced by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)