Learning and Compositionality: a Unification Attempt via Connectionist
Probabilistic Programming
- URL: http://arxiv.org/abs/2208.12789v1
- Date: Fri, 26 Aug 2022 17:20:58 GMT
- Title: Learning and Compositionality: a Unification Attempt via Connectionist
Probabilistic Programming
- Authors: Ximing Qiao, Hai Li
- Abstract summary: We consider learning and compositionality as the key mechanisms towards simulating human-like intelligence.
We propose Connectionist Probabilistic Program (CPP), a framework that connects connectionist structures (for learning) and probabilistic program semantics (for compositionality).
- Score: 11.06543250284755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider learning and compositionality as the key mechanisms towards
simulating human-like intelligence. While each mechanism is successfully
achieved by neural networks and symbolic AIs, respectively, it is the
combination of the two mechanisms that makes human-like intelligence possible.
Despite the numerous attempts at building hybrid neural-symbolic systems, we
argue that our true goal should be unifying learning and compositionality, the
core mechanisms, rather than neural and symbolic methods, the surface approaches
used to achieve them. In this work, we review and analyze the strengths and
to achieve them. In this work, we review and analyze the strengths and
weaknesses of neural and symbolic methods by separating their forms and
meanings (structures and semantics), and propose the Connectionist Probabilistic
Program (CPP), a framework that connects connectionist structures (for
learning) and probabilistic program semantics (for compositionality). Under this
framework, we design a CPP extension for small-scale sequence modeling and
provide a learning algorithm based on Bayesian inference. Although challenges
exist in learning complex patterns without supervision, our early results
demonstrate CPP's successful extraction of concepts and relations from raw
sequential data, an initial step towards compositional learning.
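The abstract describes CPP only at a high level, so the following is a minimal, hypothetical sketch of what "probabilistic program semantics for sequence modeling learned by Bayesian inference" can look like in general: a toy generative program over token sequences whose parameters are Dirichlet pseudo-counts updated by conjugate Bayesian rules. All names (ToySequenceProgram, generate, update), the model structure, and the assumption of observed latent states are illustrative assumptions, not the paper's CPP framework or its learning algorithm.

```python
# Illustrative sketch only (assumed names and structure): a toy probabilistic
# program over token sequences with Dirichlet-categorical Bayesian updates.
# This is NOT the paper's CPP framework; it only illustrates the general idea
# of pairing a generative program (semantics) with learnable statistics.
import random
from collections import defaultdict


class ToySequenceProgram:
    """A minimal generative 'program' over token sequences.

    A latent state emits a token and transitions to a next state; Dirichlet
    pseudo-counts over emissions and transitions act as learnable parameters.
    """

    def __init__(self, n_states=2, vocab=("a", "b"), alpha=1.0):
        self.n_states = n_states
        self.vocab = list(vocab)
        # Dirichlet pseudo-counts, initialized to the prior alpha.
        self.emit = defaultdict(lambda: defaultdict(lambda: alpha))
        self.trans = defaultdict(lambda: defaultdict(lambda: alpha))

    def _sample(self, counts, keys):
        # Draw a key with probability proportional to its pseudo-count.
        total = sum(counts[k] for k in keys)
        r, acc = random.uniform(0.0, total), 0.0
        for k in keys:
            acc += counts[k]
            if r <= acc:
                return k
        return keys[-1]

    def generate(self, length):
        """Run the program forward: sample a token sequence of given length."""
        state, seq = 0, []
        for _ in range(length):
            seq.append(self._sample(self.emit[state], self.vocab))
            state = self._sample(self.trans[state], list(range(self.n_states)))
        return seq

    def update(self, tokens, states):
        """Conjugate Bayesian update, assuming the latent states are observed.

        Real compositional learning would require posterior inference over the
        latent states; that harder, unsupervised step is omitted here.
        """
        for t, (s, tok) in enumerate(zip(states, tokens)):
            self.emit[s][tok] += 1
            if t + 1 < len(states):
                self.trans[s][states[t + 1]] += 1


if __name__ == "__main__":
    prog = ToySequenceProgram()
    prog.update(["a", "a", "b", "b"], [0, 0, 1, 1])  # toy 'training' example
    print(prog.generate(6))
```

The sketch sidesteps the hard part by conditioning on assumed latent states; inferring such structure without supervision is exactly the challenge the abstract acknowledges.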
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured, rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins whose attractor states correspond to symbolic sequences; through unsupervised learning, rather than pre-defined primitives, these states come to reflect the semanticity and compositionality characteristic of symbolic systems.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- AI Centered on Scene Fitting and Dynamic Cognitive Network [4.228224431041357]
This paper briefly analyzes the advantages and problems of mainstream AI technology and argues that, to achieve stronger artificial intelligence, the end-to-end function calculation paradigm must be changed.
It also discusses a concrete scheme, the Dynamic Cognitive Network model (DC Net).
arXiv Detail & Related papers (2020-10-02T06:13:41Z)
- Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known SCAN benchmark demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.