On the Binding Problem in Artificial Neural Networks
- URL: http://arxiv.org/abs/2012.05208v1
- Date: Wed, 9 Dec 2020 18:02:49 GMT
- Title: On the Binding Problem in Artificial Neural Networks
- Authors: Klaus Greff, Sjoerd van Steenkiste, Jürgen Schmidhuber
- Abstract summary: We argue that the underlying cause for this shortcoming is their inability to dynamically and flexibly bind information.
We propose a unifying framework that revolves around forming meaningful entities from unstructured sensory inputs.
We believe that a compositional approach to AI, in terms of grounded symbol-like representations, is of fundamental importance for realizing human-level generalization.
- Score: 12.04468744445707
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Contemporary neural networks still fall short of human-level generalization,
which extends far beyond our direct experiences. In this paper, we argue that
the underlying cause for this shortcoming is their inability to dynamically and
flexibly bind information that is distributed throughout the network. This
binding problem affects their capacity to acquire a compositional understanding
of the world in terms of symbol-like entities (like objects), which is crucial
for generalizing in predictable and systematic ways. To address this issue, we
propose a unifying framework that revolves around forming meaningful entities
from unstructured sensory inputs (segregation), maintaining this separation of
information at a representational level (representation), and using these
entities to construct new inferences, predictions, and behaviors (composition).
Our analysis draws inspiration from a wealth of research in neuroscience and
cognitive psychology, and surveys relevant mechanisms from the machine learning
literature, to help identify a combination of inductive biases that allow
symbolic information processing to emerge naturally in neural networks. We
believe that a compositional approach to AI, in terms of grounded symbol-like
representations, is of fundamental importance for realizing human-level
generalization, and we hope that this paper may contribute towards that goal as
a reference and inspiration.
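To make the three aspects of the framework concrete, the following is a minimal, self-contained toy sketch (not the paper's method): soft grouping of unstructured feature vectors stands in for segregation, the resulting slot vectors for representation, and a simple pairwise relation over slots for composition. The slot count, dimensions, and update rule are illustrative assumptions.

```python
# Illustrative sketch only: a toy segregation -> representation -> composition
# pipeline using soft k-means-style grouping over feature vectors. This is NOT
# the paper's method; slot count, feature dimensions, and the relation function
# are arbitrary choices made for the example.
import numpy as np

rng = np.random.default_rng(0)

def segregate(features, num_slots=3, num_iters=10):
    """Softly assign each input feature vector to one of `num_slots` groups."""
    n, d = features.shape
    slots = rng.normal(size=(num_slots, d))          # random slot initialisation
    for _ in range(num_iters):
        logits = features @ slots.T                  # similarity of inputs to slots
        attn = np.exp(logits - logits.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)      # soft assignment per input
        slots = (attn.T @ features) / (attn.sum(axis=0)[:, None] + 1e-8)
    return slots, attn

def compose(slots):
    """Build a simple pairwise relation between every pair of slot vectors."""
    k = len(slots)
    return np.array([[slots[i] @ slots[j] for j in range(k)] for i in range(k)])

# Unstructured "sensory input": 30 feature vectors drawn from 3 clusters.
features = np.concatenate([rng.normal(loc=c, size=(10, 4)) for c in (-2.0, 0.0, 2.0)])
slots, assignment = segregate(features)              # segregation + representation
relations = compose(slots)                           # composition over entities
print(assignment.argmax(axis=1))                     # which slot each input binds to
print(relations.shape)                               # (3, 3) pairwise relations
```

The point of the sketch is only that the three stages are separable operations; any concrete model would replace each stage with a learned, differentiable counterpart.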
Related papers
- A Relational Inductive Bias for Dimensional Abstraction in Neural Networks [3.5063551678446494]
This paper investigates the impact of the relational bottleneck on the learning of factorized representations conducive to compositional coding.
We demonstrate that such a bottleneck not only improves generalization and learning efficiency, but also aligns network performance with human-like behavioral biases.
arXiv Detail & Related papers (2024-02-28T15:51:05Z)
- Binding Dynamics in Rotating Features [72.80071820194273]
We propose an alternative "cosine binding" mechanism, which explicitly computes the alignment between features and adjusts weights accordingly.
This allows us to draw direct connections to self-attention and biological neural processes, and to shed light on the fundamental dynamics by which object-centric representations emerge in Rotating Features (a toy sketch of alignment-weighted binding follows this list).
arXiv Detail & Related papers (2024-02-08T12:31:08Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Rotating Features for Object Discovery [74.1465486264609]
We present Rotating Features, a generalization of complex-valued features to higher dimensions, and a new evaluation procedure for extracting objects from distributed representations.
Together, these advancements enable us to scale distributed object-centric representations from simple toy data to real-world data.
arXiv Detail & Related papers (2023-06-01T12:16:26Z)
- Mapping Knowledge Representations to Concepts: A Review and New Perspectives [0.6875312133832078]
This review focuses on research that aims to associate internal representations with human understandable concepts.
We find this taxonomy, together with theories of causality, useful for understanding what can and cannot be expected from neural network explanations.
The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability.
arXiv Detail & Related papers (2022-12-31T12:56:12Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
- The Neural Race Reduction: Dynamics of Abstraction in Gated Networks [12.130628846129973]
We introduce the Gated Deep Linear Network framework that schematizes how pathways of information flow impact learning dynamics.
We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning.
Our work gives rise to general hypotheses relating neural architecture to learning and provides a mathematical approach towards understanding the design of more complex architectures.
arXiv Detail & Related papers (2022-07-21T12:01:03Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Learning Intermediate Features of Object Affordances with a Convolutional Neural Network [1.52292571922932]
We train a deep convolutional neural network (CNN) to recognize affordances from images and to learn the underlying features or the dimensionality of affordances.
We view this representational analysis as the first step towards a more formal account of how humans perceive and interact with the environment.
arXiv Detail & Related papers (2020-02-20T19:04:40Z)
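As referenced in the Binding Dynamics in Rotating Features entry above, the "cosine binding" idea weights interactions by the alignment between features. The sketch below illustrates, under loose assumptions and in plain NumPy, how such an alignment-weighted update can be written; it is an illustration in the spirit of that description, not the mechanism of the cited paper, and the temperature, mixing rate, and iteration count are arbitrary choices.

```python
# Minimal sketch of an alignment-based binding step, loosely in the spirit of the
# "cosine binding" idea above: weights between features are derived from their
# cosine alignment, so well-aligned features reinforce each other. This is an
# assumption-laden toy, not the mechanism from the cited paper.
import numpy as np

def cosine_binding_step(features, temperature=0.1):
    """One update in which each feature is pulled toward features it aligns with."""
    norms = np.linalg.norm(features, axis=1, keepdims=True) + 1e-8
    unit = features / norms                          # normalise to unit length
    alignment = unit @ unit.T                        # pairwise cosine similarity
    weights = np.exp(alignment / temperature)
    np.fill_diagonal(weights, 0.0)                   # ignore self-alignment
    weights /= weights.sum(axis=1, keepdims=True)    # row-normalise, as in attention
    return weights @ features                        # alignment-weighted aggregation

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 8))                          # six 8-dimensional features
for _ in range(5):
    x = 0.5 * x + 0.5 * cosine_binding_step(x)       # iterate toward bound groups
print(np.round((x / np.linalg.norm(x, axis=1, keepdims=True)) @
               (x / np.linalg.norm(x, axis=1, keepdims=True)).T, 2))
```

Row-normalising the alignment-derived weights is what makes the update read like a self-attention step, which is the connection the entry highlights.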
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.