On Binding Objects to Symbols: Learning Physical Concepts to Understand
Real from Fake
- URL: http://arxiv.org/abs/2207.12186v1
- Date: Mon, 25 Jul 2022 17:21:59 GMT
- Title: On Binding Objects to Symbols: Learning Physical Concepts to Understand
Real from Fake
- Authors: Alessandro Achille, Stefano Soatto
- Abstract summary: We revisit the classic signal-to-symbol barrier in light of the remarkable ability of deep neural networks to generate synthetic data.
We characterize physical objects as abstract concepts and use the previous analysis to show that physical objects can be encoded by finite architectures.
We conclude that binding physical entities to digital identities is possible in finite time with finite resources.
- Score: 155.6741526791004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit the classic signal-to-symbol barrier in light of the remarkable
ability of deep neural networks to generate realistic synthetic data. DeepFakes
and spoofing highlight the feebleness of the link between physical reality and
its abstract representation, whether learned by a digital computer or a
biological agent. Starting from a widely applicable definition of abstract
concept, we show that standard feed-forward architectures cannot capture but
trivial concepts, regardless of the number of weights and the amount of
training data, despite being extremely effective classifiers. On the other
hand, architectures that incorporate recursion can represent a significantly
larger class of concepts, but may still be unable to learn them from a finite
dataset. We qualitatively describe the class of concepts that can be
"understood" by modern architectures trained with variants of stochastic
gradient descent, using a (free energy) Lagrangian to measure information
complexity. Even if a concept has been understood, however, a network has no
means of communicating its understanding to an external agent, except through
continuous interaction and validation. We then characterize physical objects as
abstract concepts and use the previous analysis to show that physical objects
can be encoded by finite architectures. However, to understand physical
concepts, sensors must provide persistently exciting observations, for which
the ability to control the data acquisition process is essential (active
perception). The importance of control depends on the modality, benefiting
visual more than acoustic or chemical perception. Finally, we conclude that
binding physical entities to digital identities is possible in finite time with
finite resources, solving in principle the signal-to-symbol barrier problem,
but we highlight the need for continuous validation.
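As a schematic illustration of two technical notions invoked in the abstract, the formulas below give one common form of each: an information (free-energy) Lagrangian of the kind the authors have used in earlier work on information in the weights, and the standard persistency-of-excitation condition from system identification. Both are sketches of standard definitions, not the exact expressions of this paper.

% Free-energy (information) Lagrangian: data fit plus an information-complexity
% term on the weights, traded off by beta (a schematic form, assumed here, not
% necessarily the paper's exact Lagrangian).
\mathcal{L}(q) \;=\; \mathbb{E}_{w \sim q(w \mid \mathcal{D})}\!\left[ H_{p,q}(\mathcal{D} \mid w) \right] \;+\; \beta\, \mathrm{KL}\!\left( q(w \mid \mathcal{D}) \,\|\, p(w) \right)

% Persistently exciting observations: over every window of length T, the
% regressors \varphi_k excite all directions, so the unknowns remain identifiable.
\exists\, \alpha > 0,\; T \in \mathbb{N} \;:\quad \sum_{k=t}^{t+T} \varphi_k \varphi_k^{\top} \;\succeq\; \alpha I \quad \text{for all } t

The first term measures how well the weights fit the data; the second penalizes the information the weights store about the dataset, which is the sense in which the Lagrangian "measures information complexity."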
Related papers
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts at different levels of abstraction without supervision.
We show that object representations containing the discovered physical concept variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z)
- Robust and Controllable Object-Centric Learning through Energy-based Models [95.68748828339059]
Ours is a conceptually simple and general approach to learning object-centric representations through an energy-based model.
We show that it can be easily integrated into existing architectures and can effectively extract high-quality object-centric representations.
arXiv Detail & Related papers (2022-10-11T15:11:15Z)
- Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z)
- Emergent Symbols through Binding in External Memory [2.3562267625320352]
We introduce the Emergent Symbol Binding Network (ESBN), a recurrent network augmented with an external memory.
Its binding mechanism allows symbol-like representations to emerge through the learning process without the need to explicitly incorporate symbol-processing machinery (a minimal sketch of such key-value binding follows after this list).
Across a series of tasks, we show that this architecture displays nearly perfect generalization of learned rules to novel entities.
arXiv Detail & Related papers (2020-12-29T04:28:32Z)
- On the Binding Problem in Artificial Neural Networks [12.04468744445707]
We argue that the underlying cause of this shortcoming is the inability of neural networks to dynamically and flexibly bind information.
We propose a unifying framework that revolves around forming meaningful entities from unstructured sensory inputs.
We believe that a compositional approach to AI, in terms of grounded symbol-like representations, is of fundamental importance for realizing human-level generalization.
arXiv Detail & Related papers (2020-12-09T18:02:49Z)
- Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS) [13.141761152863868]
We show that our neuro-symbolic architecture is trained end-to-end to produce a succinct and effective discrete state transition model from images alone.
Our target representation is already in a form that off-the-shelf solvers can consume, which opens the door to the rich array of modern search capabilities.
arXiv Detail & Related papers (2020-04-27T15:01:54Z)
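To make the notion of "binding" in the ESBN and binding-problem entries above more concrete, here is a minimal, hypothetical sketch in Python (NumPy only). It is not the architecture of any paper listed; it only shows how storing (symbol key, entity embedding) pairs in an external memory and retrieving keys by similarity over values lets a rule expressed over the keys transfer to novel entities.

# Toy sketch of symbol-like binding via an external key-value memory
# (hypothetical simplification, not the ESBN or any listed paper's model).
import numpy as np

rng = np.random.default_rng(0)

class KeyValueMemory:
    def __init__(self):
        self.keys = []    # abstract, entity-independent vectors ("symbols")
        self.values = []  # perceptual embeddings of the entities seen so far

    def write(self, key, value):
        # Bind a symbol (key) to an entity embedding (value).
        self.keys.append(key)
        self.values.append(value)

    def read(self, query_value):
        # Soft retrieval: attend over stored values, return the matching key.
        values = np.stack(self.values)             # (n, d)
        scores = values @ query_value              # (n,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ np.stack(self.keys)       # weighted sum of keys

# Bind two novel entities (random embeddings) to two fixed "role" symbols.
role_a, role_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
entity_1, entity_2 = rng.normal(size=8), rng.normal(size=8)

memory = KeyValueMemory()
memory.write(role_a, entity_1)
memory.write(role_b, entity_2)

# Querying with an entity recovers the symbol it was bound to, regardless of
# what the entity embedding happens to be.
print(memory.read(entity_1))  # approximately role_a
print(memory.read(entity_2))  # approximately role_b

In the ESBN as summarized above, the keys are produced by a learned recurrent controller rather than fixed by hand; the fixed role vectors here are only stand-ins for those learned keys.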