Learning Task-General Representations with Generative Neuro-Symbolic Modeling
- URL: http://arxiv.org/abs/2006.14448v2
- Date: Sat, 23 Jan 2021 16:40:59 GMT
- Title: Learning Task-General Representations with Generative Neuro-Symbolic Modeling
- Authors: Reuben Feinman, Brenden M. Lake
- Abstract summary: We develop a generative neuro-symbolic (GNS) model of handwritten character concepts.
The correlations between parts are modeled with neural network subroutines, allowing the model to learn directly from raw data.
In a subsequent evaluation, our GNS model uses probabilistic inference to learn rich conceptual representations from a single training image.
- Score: 22.336243882030026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People can learn rich, general-purpose conceptual representations from only
raw perceptual inputs. Current machine learning approaches fall well short of
these human standards, although different modeling traditions often have
complementary strengths. Symbolic models can capture the compositional and
causal knowledge that enables flexible generalization, but they struggle to
learn from raw inputs, relying on strong abstractions and simplifying
assumptions. Neural network models can learn directly from raw data, but they
struggle to capture compositional and causal structure and typically must
retrain to tackle new tasks. We bring together these two traditions to learn
generative models of concepts that capture rich compositional and causal
structure, while learning from raw data. We develop a generative neuro-symbolic
(GNS) model of handwritten character concepts that uses the control flow of a
probabilistic program, coupled with symbolic stroke primitives and a symbolic
image renderer, to represent the causal and compositional processes by which
characters are formed. The distributions of parts (strokes), and correlations
between parts, are modeled with neural network subroutines, allowing the model
to learn directly from raw data and express nonparametric statistical
relationships. We apply our model to the Omniglot challenge of human-level
concept learning, using a background set of alphabets to learn an expressive
prior distribution over character drawings. In a subsequent evaluation, our GNS
model uses probabilistic inference to learn rich conceptual representations
from a single training image that generalize to 4 unique tasks, succeeding
where previous work has fallen short.
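To make the generative story concrete, below is a minimal, hypothetical Python sketch of the kind of control flow the abstract describes: a probabilistic program samples the number of strokes, a stand-in for a neural subroutine proposes each stroke conditioned on the canvas so far, and a symbolic renderer rasterizes the strokes onto an image. The function names, distributions, and canvas size are illustrative placeholders, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a GNS-style generative process for characters.
# The neural "subroutines" are stubbed with simple random samplers; in the actual
# model they would be learned networks conditioned on the partial drawing.
import numpy as np

rng = np.random.default_rng(0)
IMG_SIZE = 28  # placeholder canvas resolution


def sample_num_strokes():
    """Symbolic control flow: sample how many parts (strokes) the character has."""
    return rng.integers(1, 4)  # placeholder prior over stroke counts


def sample_stroke(canvas):
    """Stand-in for a neural subroutine: propose a stroke as a short polyline,
    nominally conditioned on the partial canvas to capture part correlations
    (the conditioning is ignored in this stub)."""
    start = rng.uniform(4, IMG_SIZE - 4, size=2)
    n_points = rng.integers(3, 8)
    steps = rng.normal(0.0, 2.0, size=(n_points, 2))
    return np.clip(start + np.cumsum(steps, axis=0), 0, IMG_SIZE - 1)


def render_stroke(canvas, stroke):
    """Symbolic renderer: rasterize the polyline onto the binary canvas."""
    for x, y in stroke:
        canvas[int(round(y)), int(round(x))] = 1.0
    return canvas


def generate_character():
    """Full generative program: strokes are sampled sequentially and rendered."""
    canvas = np.zeros((IMG_SIZE, IMG_SIZE))
    for _ in range(sample_num_strokes()):
        canvas = render_stroke(canvas, sample_stroke(canvas))
    return canvas


if __name__ == "__main__":
    img = generate_character()
    print("rendered character with", int(img.sum()), "ink pixels")
```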
Related papers
- On the Transition from Neural Representation to Symbolic Knowledge [2.2528422603742304]
We propose a Neural-Symbolic Transitional Dictionary Learning (TDL) framework that employs an EM algorithm to learn a transitional representation of data.
We implement the framework with a diffusion model by regarding the decomposition of input as a cooperative game.
We additionally use RL, enabled by the Markovian property of diffusion models, to further tune the learned prototypes.
arXiv Detail & Related papers (2023-08-03T19:29:35Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
arXiv Detail & Related papers (2022-03-19T06:28:30Z)
- Towards Learning a Vocabulary of Visual Concepts and Operators using Deep Neural Networks [0.0]
We analyze the learned feature maps of trained models using MNIST images to achieve more explainable predictions.
We illustrate the idea by generating visual concepts from a Variational Autoencoder trained using MNIST images.
We were able to reduce the reconstruction loss (mean square error) from an initial value of 120 without augmentation to 60 with augmentation.
arXiv Detail & Related papers (2021-09-01T16:34:57Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Learning Evolved Combinatorial Symbols with a Neuro-symbolic Generative Model [35.341634678764066]
Humans have the ability to rapidly understand rich concepts from limited data.
We propose a neuro-symbolic generative model which combines the strengths of previous approaches to concept learning.
arXiv Detail & Related papers (2021-04-16T17:57:51Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with only a marginal drop in performance on the original classification task. A minimal illustrative sketch of this shape/texture/background composition appears after this list.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known benchmark SCAN demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z)
- Generating new concepts with hybrid neuro-symbolic models [22.336243882030026]
Human conceptual knowledge supports the ability to generate novel yet highly structured concepts.
One tradition has emphasized structured knowledge, viewing concepts as embedded in intuitive theories or organized in complex symbolic knowledge structures.
A second tradition has emphasized statistical knowledge, viewing conceptual knowledge as emerging from the rich correlational structure captured by training neural networks and other statistical models.
arXiv Detail & Related papers (2020-03-19T18:45:56Z)
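As referenced in the Counterfactual Generative Networks entry above, the following is a minimal sketch of how independently generated shape, texture, and background components could be composited into a single image. It assumes a simple alpha-style composition and replaces the learned mechanism networks with random arrays; all names and sizes are illustrative, not the paper's implementation.

```python
# Minimal, hypothetical sketch of composing independent mechanisms into an image:
# a shape mask m, a foreground texture f, and a background b combined as
# m * f + (1 - m) * b. Learned generators are replaced with random arrays here.
import numpy as np

rng = np.random.default_rng(0)
H = W = 32  # placeholder image size

mask = (rng.uniform(size=(H, W, 1)) > 0.5).astype(float)  # stand-in shape mechanism
foreground = rng.uniform(size=(H, W, 3))                  # stand-in texture mechanism
background = rng.uniform(size=(H, W, 3))                  # stand-in background mechanism

# Counterfactual images arise from re-combining independently sampled mechanisms.
image = mask * foreground + (1.0 - mask) * background
print(image.shape, float(image.min()) >= 0.0, float(image.max()) <= 1.0)
```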
This list is automatically generated from the titles and abstracts of the papers on this site.