A Neural-Symbolic Approach to Natural Language Understanding
- URL: http://arxiv.org/abs/2203.10557v1
- Date: Sun, 20 Mar 2022 14:12:44 GMT
- Title: A Neural-Symbolic Approach to Natural Language Understanding
- Authors: Zhixuan Liu, Zihao Wang, Yuan Lin, Hang Li
- Abstract summary: We present a novel framework for NLU called Neural-Symbolic Processor (NSP).
NSP performs analogical reasoning based on neural processing and performs logical reasoning based on both neural and symbolic processing.
As a case study, we conduct experiments on two NLU tasks: question answering (QA) and natural language inference (NLI).
- Score: 12.752124450670602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks, empowered by pre-trained language models, have achieved
remarkable results in natural language understanding (NLU) tasks. However,
their performance can deteriorate drastically when logical reasoning is needed
in the process. This is because, ideally, NLU needs to depend on not only
analogical reasoning, which deep neural networks are good at, but also logical
reasoning. According to the dual-process theory, analogical reasoning and
logical reasoning are respectively carried out by System 1 and System 2 in the
human brain. Inspired by the theory, we present a novel framework for NLU
called Neural-Symbolic Processor (NSP), which performs analogical reasoning
based on neural processing and performs logical reasoning based on both neural
and symbolic processing. As a case study, we conduct experiments on two NLU
tasks, question answering (QA) and natural language inference (NLI), when
numerical reasoning (a type of logical reasoning) is necessary. The
experimental results show that our method significantly outperforms
state-of-the-art methods in both tasks.
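The abstract describes a division of labour rather than an implementation, so the following is only a minimal, hypothetical sketch of that dual-process pattern for numerical QA: a neural model (System 1) either answers a question directly or emits a short arithmetic program, and a deterministic symbolic evaluator (System 2) executes the program. Every name here (neural_model, prediction.kind, evaluate, the span/program routing) is an illustrative assumption, not the authors' NSP interface.

```python
# Hypothetical sketch of a dual-process (neural + symbolic) QA pipeline.
# The function names and the routing logic are illustrative assumptions;
# they do not reproduce the NSP implementation.
import ast
import operator

# Symbolic processor (System 2): a tiny, safe arithmetic evaluator.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expression: str) -> float:
    """Deterministically evaluate an arithmetic expression such as '120 - 45'."""
    def _eval(node):
        if isinstance(node, ast.Constant):   # number literal
            return node.value
        if isinstance(node, ast.BinOp):      # binary arithmetic
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError(f"unsupported expression node: {node!r}")
    return _eval(ast.parse(expression, mode="eval").body)

def answer(question: str, context: str, neural_model) -> str:
    """Neural model (System 1) either answers directly or emits a symbolic
    program, which the symbolic processor executes (System 2)."""
    prediction = neural_model(question, context)    # assumed interface
    if prediction.kind == "span":                   # analogical reasoning suffices
        return prediction.text
    if prediction.kind == "program":                # numerical reasoning needed
        return str(evaluate(prediction.text))       # e.g. "25 + 17" -> "42"
    raise ValueError(prediction.kind)
```

The point of the split is that the neural component only has to generate a small symbolic expression such as 25 + 17, while exact arithmetic is delegated to the symbolic side, which is where purely neural models tend to fail.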
Related papers
- Formal Explanations for Neuro-Symbolic AI [28.358183683756028]
This paper proposes a formal approach to explaining the decisions of neuro-symbolic systems.
It first computes a formal explanation for the symbolic component of the system, which identifies the subset of neural inputs that needs to be explained.
Only those individual neural inputs are then explained, independently of each other, which keeps the resulting hierarchical formal explanations succinct.
arXiv Detail & Related papers (2024-10-18T07:08:31Z)
- Continual Reasoning: Non-Monotonic Reasoning in Neurosymbolic AI using Continual Learning [2.912595438026074]
We show that by combining a neural-symbolic system with methods from continual learning, Logic Tensor Networks (LTNs) can obtain a higher level of accuracy.
Continual learning is added to LTNs by adopting a curriculum of learning from knowledge and data with recall.
Results indicate significant improvement on the non-monotonic reasoning problem.
arXiv Detail & Related papers (2023-05-03T15:11:34Z)
- Deep Learning Models to Study Sentence Comprehension in the Human Brain [0.1503974529275767]
Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding.
We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension.
arXiv Detail & Related papers (2023-01-16T10:31:25Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
- Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning [73.00049753292316]
We propose a semantic parsing and reasoning-based Neuro-Symbolic Question Answering (NSQA) system.
NSQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0.
arXiv Detail & Related papers (2020-12-03T05:17:55Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference (a toy differentiable-logic sketch follows after this list).
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
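The Neural Logic Reasoning entry above mentions learning AND, OR and NOT as neural modules. As a rough illustration of that differentiable-logic idea, and not of LINN's actual learned modules, the sketch below uses fixed product-t-norm surrogates for those operators so that truth values in [0, 1] can be composed and differentiated; all names are assumptions made for the example.

```python
# Toy differentiable propositional reasoning with fixed fuzzy-logic operators.
# LINN learns AND/OR/NOT as trainable neural modules; the product-t-norm
# versions below are simple stand-ins used only to show how a logical formula
# over soft truth values can be evaluated and back-propagated through.
import torch

def f_and(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a * b                      # product t-norm

def f_or(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return a + b - a * b              # probabilistic sum

def f_not(a: torch.Tensor) -> torch.Tensor:
    return 1.0 - a

# Soft truth values of atomic propositions, e.g. emitted by a neural encoder.
p = torch.tensor(0.9, requires_grad=True)
q = torch.tensor(0.2, requires_grad=True)

# Evaluate (p AND NOT q) OR q and back-propagate to the atoms; in a trained
# system this gradient is what lets logical constraints shape the encoder.
truth = f_or(f_and(p, f_not(q)), q)
truth.backward()
print(float(truth), float(p.grad), float(q.grad))
```

Replacing these fixed operators with small trainable networks that are constrained to behave like the corresponding logical gates recovers the flavour of the LINN and LTN approaches listed above.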