Neuro-Symbolic Forward Reasoning
- URL: http://arxiv.org/abs/2110.09383v1
- Date: Mon, 18 Oct 2021 15:14:58 GMT
- Title: Neuro-Symbolic Forward Reasoning
- Authors: Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting
- Abstract summary: Neuro-Symbolic Forward Reasoner (NSFR) is a new approach for reasoning tasks taking advantage of differentiable forward-chaining using first-order logic.
The key idea is to combine differentiable forward-chaining reasoning with object-centric (deep) learning.
- Score: 19.417231973682366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning is an essential part of human intelligence and thus has been a
long-standing goal in artificial intelligence research. With the recent success
of deep learning, incorporating reasoning with deep learning systems, i.e.,
neuro-symbolic AI has become a major field of interest. We propose the
Neuro-Symbolic Forward Reasoner (NSFR), a new approach for reasoning tasks
taking advantage of differentiable forward-chaining using first-order logic.
The key idea is to combine differentiable forward-chaining reasoning with
object-centric (deep) learning. Differentiable forward-chaining reasoning
computes logical entailments smoothly, i.e., it deduces new facts from given
facts and rules in a differentiable manner. The object-centric learning
approach factorizes raw inputs into representations in terms of objects. Thus,
it allows us to provide a consistent framework to perform the forward-chaining
inference from raw inputs. NSFR factorizes the raw inputs into the
object-centric representations, converts them into probabilistic ground atoms,
and finally performs differentiable forward-chaining inference using weighted
rules. Our comprehensive experimental evaluations on
object-centric reasoning data sets, 2D Kandinsky patterns and 3D CLEVR-Hans,
and a variety of tasks show the effectiveness and advantage of our approach.
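To make the three-stage pipeline concrete (object-centric factorization, probabilistic ground atoms, differentiable forward chaining), here is a minimal sketch in Python; the atoms, the single weighted rule, and the product/max fuzzy operators are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of NSFR-style differentiable forward chaining. The atoms,
# the single weighted rule, and the fuzzy operators are assumptions made for
# this example only.
import torch

# 1) Object-centric perception (stand-in): a valuation vector assigning each
#    probabilistic ground atom a value in [0, 1].
atoms = ["red(obj1)", "red(obj2)", "pair_red(obj1,obj2)"]
valuation = torch.tensor([0.9, 0.8, 0.0])   # pair_red is not yet derived

# 2) One weighted rule: pair_red(X,Y) :- red(X), red(Y).
rule_body = [0, 1]                  # indices of the body atoms
rule_head = 2                       # index of the head atom
rule_weight = torch.tensor(0.95)    # learnable in the full model

# 3) Differentiable forward chaining: soft deduction of new facts.
def forward_chain(v, steps=2):
    v = v.clone()
    for _ in range(steps):
        body = torch.prod(v[rule_body]) * rule_weight     # fuzzy conjunction
        v[rule_head] = torch.maximum(v[rule_head], body)  # keep the best proof
        # (the real model batches this over all ground rules and time steps)
    return v

print(forward_chain(valuation))     # pair_red(obj1,obj2) is entailed with ~0.68
```

In the full model, the rule weights would be learned by backpropagating a loss on the entailed target atoms through the chaining steps.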
Related papers
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess the performance of the models.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Statistical relational learning and neuro-symbolic AI: what does first-order logic offer? [12.47276164048813]
Our aim is to briefly survey and articulate the logical and philosophical foundations of using (first-order) logic to represent (probabilistic) knowledge in a non-technical fashion.
For machine learning researchers unaware of why the research community cares about relational representations, this article can serve as a gentle introduction.
For logical experts who are newcomers to the learning area, such an article can help in navigating the differences between finite vs infinite, and subjective probabilities vs random-world semantics.
arXiv Detail & Related papers (2023-06-08T12:34:31Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base and large size language models pre-trained with LogiGAN demonstrate obvious performance improvement on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures [15.946511512356878]
We show that a fully-learned neural network with the right inductive biases can perform substantially better than all previous neural-symbolic models.
Our model makes critical use of both self-attention and learned "soft" object-centric representations.
arXiv Detail & Related papers (2020-12-15T18:57:40Z)
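The learned "soft" object-centric representations mentioned in the entry above can be pictured as a cross-attention read-out in which a few learned slot queries pool a frame's feature map into per-object vectors; the sketch below is an illustrative assumption (module name, shapes, single attention head), not the paper's architecture.

```python
import torch
import torch.nn as nn

class SoftObjectSlots(nn.Module):
    """Illustrative: pool a flattened feature map into K 'soft' object slots."""

    def __init__(self, num_slots=4, dim=32):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim))  # learned queries
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, cells):                # cells: (N, dim) feature-map cells
        k, v = self.to_k(cells), self.to_v(cells)
        attn = torch.softmax(self.slots @ k.T / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v                      # (num_slots, dim) object vectors

cells = torch.randn(64, 32)                  # an 8x8 feature map, flattened
print(SoftObjectSlots()(cells).shape)        # torch.Size([4, 32])
```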
- Neural Logic Reasoning [47.622957656745356]
We propose a Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
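The idea of learning logical operations as neural modules, mentioned in the Neural Logic Reasoning entry above, can be pictured with the following minimal PyTorch sketch; the module sizes, the example expression, and the variable embeddings are illustrative assumptions, not LINN's actual design.

```python
import torch
import torch.nn as nn

DIM = 16  # assumed embedding size for propositional variables

def logic_module(in_dim):
    """A tiny two-layer MLP standing in for a learnable logic operator."""
    return nn.Sequential(nn.Linear(in_dim, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

AND = logic_module(2 * DIM)   # binary operators read the concatenated operands
OR = logic_module(2 * DIM)
NOT = logic_module(DIM)

a, b, c = (torch.randn(DIM) for _ in range(3))   # embeddings of variables

# Evaluate (a AND b) OR (NOT c) by composing the neural modules.
out = OR(torch.cat([AND(torch.cat([a, b])), NOT(c)]))
print(out.shape)   # torch.Size([16])
```

In LINN-style training, additional logical regularizers would push these modules toward behaving like the corresponding Boolean operations.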
- Logical Neural Networks [51.46602187496816]
We propose a novel framework that seamlessly provides key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
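As a worked illustration of a neuron acting as a weighted real-valued logic gate, the sketch below evaluates a Łukasiewicz-style weighted AND over truth values in [0, 1]; the specific weights, bias, and clamping are assumptions for this example rather than the paper's exact formulation.

```python
# Assumed Lukasiewicz-style weighted AND over truth values in [0, 1].
def weighted_and(truths, weights, beta=1.0):
    slack = sum(w * (1.0 - t) for w, t in zip(weights, truths))
    return max(0.0, min(1.0, beta - slack))

# Two fairly-true inputs; the second premise is weighted more heavily.
print(weighted_and([0.9, 0.7], [1.0, 1.5]))   # -> 0.45
```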
- Neural Collaborative Reasoning [31.03627817834551]
We propose to advance Collaborative Filtering (CF) to Collaborative Reasoning (CR).
CR means that each user knows part of the reasoning space, and they collaborate for reasoning in the space to estimate preferences for each other.
We integrate the power of representation learning and logical reasoning, where representations capture similarity patterns in data from perceptual perspectives.
arXiv Detail & Related papers (2020-05-16T23:29:31Z)
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework that allows jointly training the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning from supervised data in the case of pure sub-symbolic learning, and Markov Logic Networks in the case of pure symbolic reasoning.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.