Closed Loop Neural-Symbolic Learning via Integrating Neural Perception,
Grammar Parsing, and Symbolic Reasoning
- URL: http://arxiv.org/abs/2006.06649v2
- Date: Mon, 27 Jul 2020 22:17:10 GMT
- Title: Closed Loop Neural-Symbolic Learning via Integrating Neural Perception,
Grammar Parsing, and Symbolic Reasoning
- Authors: Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu,
Song-Chun Zhu
- Abstract summary: Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
- Score: 134.77207192945053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of neural-symbolic computation is to integrate the connectionist and symbolist paradigms. Prior methods learn neural-symbolic models using reinforcement learning (RL) approaches, which ignore the error propagation in the symbolic reasoning module and thus converge slowly with sparse rewards. In this paper, we address these issues and close the loop of neural-symbolic learning by (1) introducing the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning, and (2) proposing a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error through the symbolic reasoning module efficiently. We further interpret the proposed learning framework as maximum likelihood estimation using Markov chain Monte Carlo sampling, and the back-search algorithm as a Metropolis-Hastings sampler. The experiments are conducted on two weakly-supervised neural-symbolic tasks: (1) handwritten formula recognition on the newly introduced HWF dataset; (2) visual question answering on the CLEVR dataset. The results show that our approach significantly outperforms the RL methods in terms of performance, convergence speed, and data efficiency. Our code and data are released at https://liqing-ustc.github.io/NGS.
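To make the back-search idea concrete, here is a minimal Python sketch on a toy arithmetic stand-in for the handwritten-formula task. The vocabulary, the brute-force single-token search, and the greedy first-fix rule are illustrative assumptions, not the authors' implementation (their released code is at https://liqing-ustc.github.io/NGS):

```python
# Toy sketch of one-step back-search: when the predicted formula does not
# evaluate to the supervised answer, search for a single-token correction
# that does, and use it as a pseudo-label for the perception module.
VOCAB = list("0123456789+-*")

def safe_eval(tokens):
    """Parse-and-reason step: evaluate the tokens, or None if ill-formed."""
    try:
        return eval("".join(tokens))
    except (SyntaxError, ZeroDivisionError):
        return None

def back_search(pred_tokens, target):
    if safe_eval(pred_tokens) == target:
        return pred_tokens                       # already consistent
    for i in range(len(pred_tokens)):
        for tok in VOCAB:
            fix = pred_tokens[:i] + [tok] + pred_tokens[i + 1:]
            if safe_eval(fix) == target:
                return fix                       # first consistent 1-step edit
    return None                                  # no single-token fix exists

# The perception net misreads "3*4" as "3+4"; only the answer 12 is given.
print(back_search(list("3+4"), 12))  # -> ['8', '+', '4'], the first fix found
```

Note that this sketch accepts corrections greedily; under the abstract's Metropolis-Hastings interpretation, the actual algorithm proposes and accepts corrections stochastically, so higher-probability fixes such as ['3', '*', '4'] can be recovered.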
Related papers
- Learning a Neural Association Network for Self-supervised Multi-Object Tracking [34.07776597698471]
This paper introduces a novel framework to learn data association for multi-object tracking in a self-supervised manner.
Motivated by the fact that in real-world scenarios object motion can usually be represented by a Markov process, we present a novel expectation-maximization (EM) algorithm that trains a neural network to associate detections for tracking.
We evaluate our approach on the challenging MOT17 and MOT20 datasets and achieve state-of-the-art results in comparison to self-supervised trackers.
arXiv Detail & Related papers (2024-11-18T12:22:29Z)
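As a hedged illustration of the EM scheme sketched in the entry above (the linear "appearance model", the motion prior, and all dimensions are invented stand-ins, not the paper's architecture): the E-step combines appearance affinities with a Markov motion prior into soft association responsibilities, and the M-step takes a gradient-ascent step on the expected log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

# Stand-ins: track/detection features and a linear "appearance network" W.
feats_prev = rng.normal(size=(4, 8))            # 4 tracks at frame t-1
feats_next = rng.normal(size=(4, 8))            # 4 detections at frame t
W = np.eye(8)                                   # toy appearance model
motion_logp = np.log(np.eye(4) * 0.9 + 0.025)   # Markov motion prior

for _ in range(100):
    appearance = feats_prev @ W @ feats_next.T
    # E-step: posterior over associations, combining appearance and motion.
    resp = softmax(appearance + motion_logp, axis=1)
    # M-step: gradient ascent on the expected log-likelihood; for a softmax
    # likelihood the score gradient is (responsibilities - model probs).
    probs = softmax(appearance, axis=1)
    W += 0.05 * feats_prev.T @ (resp - probs) @ feats_next

print(softmax(feats_prev @ W @ feats_next.T, axis=1).round(2))
```

The motion prior is what breaks the circularity: the appearance model is trained toward associations that the Markov assumption already favors, which is the self-supervision signal the entry describes.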
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Deep Generative Symbolic Regression with Monte-Carlo-Tree-Search [29.392036559507755]
Symbolic regression is the problem of learning a symbolic expression from numerical data.
Deep neural models trained on procedurally generated synthetic datasets have shown competitive performance.
We propose a novel method which provides the best of both worlds, based on a Monte-Carlo Tree Search procedure.
arXiv Detail & Related papers (2023-02-22T09:10:20Z)
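For readers unfamiliar with the search component in the entry above, here is a sketch of the standard UCT rule that MCTS uses to pick which partial expression to expand next; the node statistics and candidate productions are invented for illustration and this is not the paper's method.

```python
import math

def uct_score(value_sum, visits, parent_visits, c=1.4):
    """Mean reward plus an exploration bonus (the UCB1 formula)."""
    if visits == 0:
        return float("inf")                 # always try unvisited children once
    exploit = value_sum / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

# Toy (value_sum, visits) stats for three expansions of a partial expression.
children = {"x": (3.2, 5), "sin(x)": (1.1, 2), "c*x": (0.0, 0)}
parent_visits = sum(v for _, v in children.values())
best = max(children, key=lambda k: uct_score(*children[k], parent_visits))
print(best)  # "c*x": unvisited nodes get infinite score
```

In a symbolic-regression setting, the reward backed up through these statistics would typically be a fit measure, such as the negative mean-squared error of a completed expression on the numerical data.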
- Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks [89.28881869440433]
This paper provides the first theoretical characterization of joint edge-model sparse learning for graph neural networks (GNNs).
It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising test accuracy.
arXiv Detail & Related papers (2023-02-06T16:54:20Z)
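The two operations covered by the theory in the entry above can be illustrated with a toy sketch (degree-based node importance and the 50% sparsity level are assumptions for illustration, not the paper's setup):

```python
import numpy as np

def sample_important_nodes(adj, k):
    """Keep the k highest-degree nodes (one simple importance score);
    return the induced subgraph and the kept indices."""
    degree = adj.sum(axis=1)
    keep = np.argsort(degree)[-k:]
    return adj[np.ix_(keep, keep)], keep

def prune_lowest_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

adj = (np.random.rand(6, 6) > 0.5).astype(float)   # toy graph
sub, keep = sample_important_nodes(adj, k=4)       # edge sparsification
W_sparse = prune_lowest_magnitude(np.random.randn(8, 8))  # model pruning
print(keep, (W_sparse == 0).mean())
```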
- Learning Signal Temporal Logic through Neural Network for Interpretable Classification [13.829082181692872]
We propose an explainable neural-symbolic framework for the classification of time-series behaviors.
We demonstrate the computational efficiency, compactness, and interpretability of the proposed method through driving scenarios and naval surveillance case studies.
arXiv Detail & Related papers (2022-10-04T21:11:54Z)
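As background for the entry above, this sketch shows the standard quantitative (robustness) semantics of two STL operators, which such neural-symbolic classifiers build on; the trace and thresholds are invented examples.

```python
import numpy as np

def robustness_gt(x, c):
    """Predicate x > c: pointwise margin of satisfaction."""
    return x - c

def always(rho):
    """G(phi): worst-case margin over the time horizon."""
    return rho.min()

def eventually(rho):
    """F(phi): best-case margin over the time horizon."""
    return rho.max()

trace = np.array([0.2, 0.5, 0.9, 0.4])
print(always(robustness_gt(trace, 0.1)))      # ~0.1 > 0: G(x > 0.1) holds
print(eventually(robustness_gt(trace, 0.8)))  # ~0.1 > 0: F(x > 0.8) holds
```

Frameworks of this kind typically replace the hard min/max with soft, differentiable approximations so that formula thresholds and structure can be learned by gradient descent.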
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
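To unpack the prior in the entry above, here is a sketch of the generic convolutional dictionary model (the standard synthesis formulation with invented sizes, not ACDNet itself): an image is approximated as a sum of small kernels convolved with sparse coefficient maps.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
kernels = rng.normal(size=(3, 5, 5))   # dictionary of 3 learned 5x5 atoms
maps = np.zeros((3, 32, 32))           # sparse coefficient maps
maps[(rng.integers(0, 3, 10),          # place 10 nonzero activations
      rng.integers(0, 32, 10),
      rng.integers(0, 32, 10))] = 1.0

# Reconstruction: X ~ sum_k D_k * M_k (convolutional synthesis).
recon = sum(convolve2d(m, k, mode="same") for m, k in zip(maps, kernels))
print(recon.shape)  # (32, 32)
```

Per the summary, ACDNet's contribution is to adapt these kernels to each input CT image rather than keeping a single fixed dictionary.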
- Learning with Holographic Reduced Representations [28.462635977110413]
Holographic Reduced Representations (HRR) are a method for performing symbolic AI on top of real-valued vectors.
This paper revisits this approach to understand if it is viable for enabling a hybrid neural-symbolic approach to learning.
arXiv Detail & Related papers (2021-09-05T19:37:34Z)
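For context on the entry above, this sketch shows the two core HRR operations from Plate's original formulation (standard definitions; the dimensionality and random vectors are illustrative): binding by circular convolution, and approximate unbinding by binding with an involution inverse.

```python
import numpy as np

def bind(a, b):
    """Circular convolution via FFT: composes two symbols into one vector."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def inverse(a):
    """Approximate inverse (involution): reverse all but the first element."""
    return np.concatenate([a[:1], a[1:][::-1]])

d = 1024
rng = np.random.default_rng(0)
role, filler = rng.normal(0, 1 / np.sqrt(d), size=(2, d))
trace = bind(role, filler)               # store a role/filler pair
retrieved = bind(trace, inverse(role))   # unbind: a noisy copy of filler
print(np.corrcoef(retrieved, filler)[0, 1])  # well above chance for large d
```

Because binding and unbinding are fixed, differentiable vector operations, they can sit on top of neural feature extractors, which is the hybrid viability question the paper examines.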
- Neural Unsupervised Semantic Role Labeling [48.69930912510414]
We present the first neural unsupervised model for semantic role labeling.
We decompose the task into two argument-related subtasks: identification and clustering.
Experiments on the CoNLL-2009 English dataset demonstrate that our model outperforms the previous state-of-the-art baseline.
arXiv Detail & Related papers (2021-04-19T04:50:16Z)