Generating new concepts with hybrid neuro-symbolic models
- URL: http://arxiv.org/abs/2003.08978v3
- Date: Tue, 9 Jun 2020 01:31:57 GMT
- Title: Generating new concepts with hybrid neuro-symbolic models
- Authors: Reuben Feinman, Brenden M. Lake
- Abstract summary: Human conceptual knowledge supports the ability to generate novel yet highly structured concepts.
One tradition has emphasized structured knowledge, viewing concepts as embedded in intuitive theories or organized in complex symbolic knowledge structures.
A second tradition has emphasized statistical knowledge, viewing conceptual knowledge as emerging from the rich correlational structure captured by training neural networks and other statistical models.
- Score: 22.336243882030026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human conceptual knowledge supports the ability to generate novel yet highly
structured concepts, and the form of this conceptual knowledge is of great
interest to cognitive scientists. One tradition has emphasized structured
knowledge, viewing concepts as embedded in intuitive theories or organized in
complex symbolic knowledge structures. A second tradition has emphasized
statistical knowledge, viewing conceptual knowledge as emerging from the
rich correlational structure captured by training neural networks and other
statistical models. In this paper, we explore a synthesis of these two
traditions through a novel neuro-symbolic model for generating new concepts.
Using simple visual concepts as a testbed, we bring together neural networks
and symbolic probabilistic programs to learn a generative model of novel
handwritten characters. Two alternative models are explored with more generic
neural network architectures. We compare each of these three models for their
likelihoods on held-out character classes and for the quality of their
productions, finding that our hybrid model learns the most convincing
representation and generalizes further from the training observations.
Related papers
- Aligning Knowledge Graphs Provided by Humans and Generated from Neural Networks in Specific Tasks [5.791414814676125]
This paper develops an innovative method that enables neural networks to generate and utilize knowledge graphs.
Our approach eschews traditional dependencies on word embedding models, mining concepts from neural networks and directly aligning them with human knowledge.
Experiments show that our method consistently captures network-generated concepts that align closely with human knowledge and can even uncover new, useful concepts not previously identified by humans.
arXiv Detail & Related papers (2024-04-23T20:33:17Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in this form.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - A Recursive Bateson-Inspired Model for the Generation of Semantic Formal
Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z) - Formal Conceptual Views in Neural Networks [0.0]
We introduce two notions for conceptual views of a neural network, specifically a many-valued and a symbolic view.
We test the conceptual expressivity of our novel views through different experiments on the ImageNet and Fruit-360 data sets.
We demonstrate how conceptual views can be applied for abductive learning of human comprehensible rules from neurons.
arXiv Detail & Related papers (2022-09-27T16:38:24Z) - ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and
Acquisition at Inference Time [49.067846763204564]
Humans have the remarkable ability to recognize and acquire novel visual concepts in a zero-shot manner.
We introduce Zero-shot Concept Recognition and Acquisition (ZeroC), a neuro-symbolic architecture that can recognize and acquire novel concepts in a zero-shot way.
arXiv Detail & Related papers (2022-06-30T06:24:45Z) - Discovering Latent Concepts Learned in BERT [21.760620298330235]
We study what latent concepts exist in the pre-trained BERT model.
We also release a novel BERT ConceptNet dataset (BCN) consisting of 174 concept labels and 1M annotated instances.
arXiv Detail & Related papers (2022-05-15T09:45:34Z) - Towards a Predictive Processing Implementation of the Common Model of
Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z) - Learning Task-General Representations with Generative Neuro-Symbolic
Modeling [22.336243882030026]
We develop a generative neuro-symbolic (GNS) model of handwritten character concepts.
The correlations between parts are modeled with neural network subroutines, allowing the model to learn directly from raw data.
In a subsequent evaluation, our GNS model uses probabilistic inference to learn rich conceptual representations from a single training image.
arXiv Detail & Related papers (2020-06-25T14:41:27Z) - Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known benchmark SCAN demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z) - Revisit Systematic Generalization via Meaningful Learning [15.90288956294373]
Recent studies argue that neural networks appear inherently to lack this cognitive capacity.
We reassess the compositional skills of sequence-to-sequence models conditioned on the semantic links between new and old concepts.
arXiv Detail & Related papers (2020-03-14T15:27:29Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.