Is Neuro-Symbolic AI Meeting its Promise in Natural Language Processing?
A Structured Review
- URL: http://arxiv.org/abs/2202.12205v1
- Date: Thu, 24 Feb 2022 17:13:33 GMT
- Title: Is Neuro-Symbolic AI Meeting its Promise in Natural Language Processing? A Structured Review
- Authors: Kyle Hamilton, Aparna Nayak, Bojan Božić, Luca Longo
- Abstract summary: Advocates for Neuro-Symbolic AI (NeSy) assert that combining deep learning with symbolic reasoning will lead to stronger AI.
We conduct a structured review of studies implementing NeSy for NLP, covering challenges and future directions.
We aim to answer the question of whether NeSy is indeed meeting its promises: reasoning, out-of-distribution generalization, interpretability, learning and reasoning from small data, and transferability to new domains.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advocates for Neuro-Symbolic AI (NeSy) assert that combining deep learning
with symbolic reasoning will lead to stronger AI than either paradigm on its
own. As successful as deep learning has been, it is generally accepted that
even our best deep learning systems are not very good at abstract reasoning.
And since reasoning is inextricably linked to language, it makes intuitive
sense that Natural Language Processing (NLP) would be a particularly
well-suited candidate for NeSy. We conduct a structured review of studies
implementing NeSy for NLP, along with their challenges and future directions, and aim to answer
the question of whether NeSy is indeed meeting its promises: reasoning,
out-of-distribution generalization, interpretability, learning and reasoning
from small data, and transferability to new domains. We examine the impact of
knowledge representation, such as rules and semantic networks, language
structure and relational structure, and whether implicit or explicit reasoning
contributes to higher promise scores. We find that knowledge encoded in
relational structures and explicit reasoning tend to lead to more NeSy goals
being satisfied. We also advocate for a more methodical approach to the
application of theories of reasoning, which we hope can reduce some of the
friction between the symbolic and sub-symbolic schools of AI.
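The hybrid pattern the abstract describes, in which sub-symbolic components propose knowledge and explicit symbolic rules reason over it, can be illustrated with a minimal sketch. Everything below (the facts, the confidence scores, and the 0.5 threshold) is invented for illustration and is not taken from any reviewed system.

```python
# Toy sketch (not from the paper): a neuro-symbolic pipeline in which a
# sub-symbolic scorer filters candidate facts and an explicit symbolic rule
# derives new knowledge from the survivors.

def neural_scorer(fact):
    """Stand-in for a learned model: returns a confidence in [0, 1]."""
    scores = {
        ("socrates", "is_a", "human"): 0.97,
        ("human", "subclass_of", "mortal"): 0.92,
    }
    return scores.get(fact, 0.0)

def symbolic_step(facts):
    """Explicit rule: is_a(X, Y) and subclass_of(Y, Z) => is_a(X, Z)."""
    derived = set(facts)
    for (x, r1, y) in facts:
        for (y2, r2, z) in facts:
            if r1 == "is_a" and r2 == "subclass_of" and y == y2:
                derived.add((x, "is_a", z))
    return derived

candidates = [("socrates", "is_a", "human"),
              ("human", "subclass_of", "mortal"),
              ("socrates", "is_a", "teapot")]
accepted = {f for f in candidates if neural_scorer(f) > 0.5}
closure = symbolic_step(accepted)
print(("socrates", "is_a", "mortal") in closure)  # True
```

Because the reasoning step is an explicit rule over a relational structure rather than an opaque forward pass, every derived fact can be traced back to its premises, which is the kind of interpretability the review's promise scores measure.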
Related papers
- ULLER: A Unified Language for Learning and Reasoning [7.689000976615671]
We propose a unified language for neuro-symbolic artificial intelligence (NeSy).
We call it ULLER, a Unified Language for LEarning and Reasoning.
arXiv Detail & Related papers (2024-05-01T14:05:52Z)
- Weakly Supervised Reasoning by Neuro-Symbolic Approaches [28.98845133698169]
We will introduce our progress on neuro-symbolic approaches to NLP.
We will design a neural system with symbolic latent structures for an NLP task.
We will apply reinforcement learning or its relaxation to perform weakly supervised reasoning in the downstream task.
arXiv Detail & Related papers (2023-09-19T06:10:51Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities still remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- A Neural-Symbolic Approach to Natural Language Understanding [12.752124450670602]
We present a novel framework for NLU called Neural-Symbolic Processor (NSP).
NSP performs analogical reasoning based on neural processing and performs logical reasoning based on both neural and symbolic processing.
As a case study, we conduct experiments on two NLU tasks: question answering (QA) and natural language inference (NLI).
arXiv Detail & Related papers (2022-03-20T14:12:44Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Target Languages (vs. Inductive Biases) for Learning to Act and Plan [13.820550902006078]
I articulate a different learning approach where representations do not emerge from biases in a neural architecture but are learned over a given target language with a known semantics.
The goals of the paper and talk are to make these ideas explicit, to place them in a broader context where the design of the target language is crucial, and to illustrate them in the context of learning to act and plan.
arXiv Detail & Related papers (2021-09-15T10:24:13Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
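The differentiable-logic idea behind this "uniform language" can be sketched at toy scale: atoms take truth values in [0, 1] and logical connectives become smooth real-valued operators, so a rule's truth degree can serve as a training signal. The specific operators below (product t-norm, Reichenbach implication, mean aggregation for the universal quantifier) are common choices but assumptions here, and the atom values are invented.

```python
# Minimal sketch of differentiable fuzzy logic in the spirit of Logic
# Tensor Networks; all groundings and operator choices are illustrative.

def t_and(a, b):
    """Conjunction as a product t-norm."""
    return a * b

def t_implies(a, b):
    """Reichenbach implication: 1 - a + a*b."""
    return 1.0 - a + a * b

# Grounded atoms, e.g. outputs of a neural predicate; values are invented.
smokes = {"anna": 0.9, "bob": 0.2}
cancer = {"anna": 0.7, "bob": 0.1}

# Truth degree of the rule: forall x, smokes(x) -> cancer(x),
# aggregating the universal quantifier with the mean.
degrees = [t_implies(smokes[x], cancer[x]) for x in smokes]
rule_truth = sum(degrees) / len(degrees)
print(round(rule_truth, 3))  # 0.775
```

In a full LTN the atom values would come from trainable networks, and 1 minus the rule's truth degree would be minimized as a loss, which is how the formalism unifies learning and reasoning in one objective.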
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
- Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning [73.00049753292316]
We propose a semantic parsing and reasoning-based Neuro-Symbolic Question Answering (NSQA) system.
NSQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0.
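The parse-then-reason pipeline this summary describes can be reduced to a toy sketch: map a question to a structured query, then answer it over a knowledge base. The template grammar, triple store, and question below are all invented for illustration and are not NSQA's actual components.

```python
# Illustrative only: semantic parsing to a structured query, then
# symbolic lookup over a tiny hand-written triple store.

KB = {("douglas_adams", "author_of", "hitchhikers_guide"),
      ("douglas_adams", "born_in", "cambridge")}

def parse(question):
    """Map one fixed question template to a (subject, relation, ?x) query."""
    if question.startswith("Where was ") and question.endswith(" born?"):
        subject = question[len("Where was "):-len(" born?")]
        return (subject.lower().replace(" ", "_"), "born_in", None)
    raise ValueError("unsupported question template")

def answer(query):
    """Return every object in the KB matching the (subject, relation) pair."""
    s, r, _ = query
    return [o for (s2, r2, o) in KB if (s2, r2) == (s, r)]

print(answer(parse("Where was Douglas Adams born?")))  # ['cambridge']
```

Real systems of this kind replace the template parser with a learned semantic parser producing logical forms, and the dictionary lookup with reasoning over a large knowledge graph such as DBpedia; the division of labor, neural parsing feeding symbolic reasoning, is the same.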
arXiv Detail & Related papers (2020-12-03T05:17:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.