Knowledge-driven Natural Language Understanding of English Text and its Applications
- URL: http://arxiv.org/abs/2101.11707v1
- Date: Wed, 27 Jan 2021 22:02:50 GMT
- Title: Knowledge-driven Natural Language Understanding of English Text and its Applications
- Authors: Kinjal Basu, Sarat Varanasi, Farhad Shakerin, Joaquin Arias, Gopal Gupta
- Abstract summary: We introduce a knowledge-driven semantic representation approach for English text.
We present two NLU applications: SQuARE (Semantic-based Question Answering and Reasoning Engine) and StaCACK (Stateful Conversational Agent using Commonsense Knowledge).
- Score: 8.417188296231059
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the meaning of a text is a fundamental challenge of
natural language understanding (NLU) research. An ideal NLU system should
process a language in a way that is not exclusive to a single task or a
dataset. Keeping this in mind, we have introduced a novel knowledge-driven
semantic representation approach for English text. By leveraging the VerbNet
lexicon, we are able to map the syntax tree of a text to its commonsense
meaning, represented using basic knowledge primitives. The general-purpose
knowledge produced by our approach can be used to build any reasoning-based
NLU system that can also provide justification. We applied this approach to
construct two NLU applications that we present here: SQuARE (Semantic-based
Question Answering and Reasoning Engine) and StaCACK (Stateful Conversational
Agent using Commonsense Knowledge). Both systems work by "truly understanding"
the natural language text they process, and both provide natural language
explanations for their responses while maintaining high accuracy.
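As a rough illustration of the pipeline the abstract describes, here is a
minimal Python sketch, assuming syntactic parsing has already produced a verb
and its ordered arguments: a hypothetical VerbNet-style lexicon entry maps the
clause to knowledge primitives (an event fact plus thematic-role facts), and a
question is answered by matching those primitives, returning the matched facts
as a justification. The lexicon entry, verb class, role names, and function
names are illustrative assumptions, not the paper's actual representation or
API.

```python
# Illustrative only: a toy mapping from a parsed clause to knowledge
# primitives, loosely in the spirit of the VerbNet-based approach the
# abstract describes. The lexicon entry, verb class, and role names are
# hypothetical stand-ins, not the paper's actual representation.

from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    predicate: str
    args: tuple

    def __str__(self) -> str:
        return f"{self.predicate}({', '.join(self.args)})"


# Hypothetical VerbNet-style lexicon entry: a verb maps its ordered
# syntactic arguments to thematic roles (the knowledge primitives).
LEXICON = {
    "gave": {"class": "give-13.1", "roles": ("agent", "recipient", "theme")},
}


def clause_to_primitives(event_id: str, verb: str, args: tuple) -> list:
    """Map a parsed clause (verb + ordered arguments) to primitive facts."""
    entry = LEXICON[verb]
    facts = [Fact("event", (event_id, entry["class"]))]
    facts += [Fact(role, (event_id, arg)) for role, arg in zip(entry["roles"], args)]
    return facts


def answer(role: str, verb_class: str, facts: list):
    """Answer 'who/what fills <role> in a <verb_class> event?' with a justification."""
    for event in (f for f in facts if f.predicate == "event" and f.args[1] == verb_class):
        for fact in facts:
            if fact.predicate == role and fact.args[0] == event.args[0]:
                return fact.args[1], [str(event), str(fact)]
    return None, []


if __name__ == "__main__":
    # "John gave Mary a book."  (parsing is assumed to happen upstream)
    kb = clause_to_primitives("e1", "gave", ("john", "mary", "book"))
    who, why = answer("recipient", "give-13.1", kb)
    print("Answer:", who)              # -> mary
    print("Because:", "; ".join(why))  # -> event(e1, give-13.1); recipient(e1, mary)
```

In the actual system the primitives come from the full VerbNet lexicon and the
reasoning is considerably richer, but the answer-plus-justification pattern is
the same one the abstract describes.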
Related papers
- Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning [66.79173000135717]
We apply this work to teaching two Indian languages, Kannada and Marathi, which do not have well-developed resources for second language learning.
We extract descriptions from a natural text corpus that answer questions about morphosyntax (learning of word order, agreement, case marking, or word formation) and semantics (learning of vocabulary).
We enlist language educators from schools in North America to perform a manual evaluation; they find that the materials have the potential to be used for lesson preparation and learner evaluation.
arXiv Detail & Related papers (2023-10-27T18:17:29Z)
- Reasoning about Ambiguous Definite Descriptions [2.5398014196797605]
Natural language reasoning plays an important role in improving language models' ability to solve complex language understanding tasks.
No resources exist to evaluate how well Large Language Models can use explicit reasoning to resolve ambiguity in language.
We propose to use ambiguous definite descriptions for this purpose and create and publish the first benchmark dataset consisting of such phrases.
arXiv Detail & Related papers (2023-10-23T07:52:38Z)
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty in bridging perception, language understanding and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)
- Reliable Natural Language Understanding with Large Language Models and Answer Set Programming [0.0]
Large language models (LLMs) are able to leverage patterns in text to solve a variety of NLP tasks, but fall short on problems that require reasoning.
We propose STAR, a framework that combines LLMs with Answer Set Programming (ASP): the LLMs extract knowledge from the text, and goal-directed ASP is then employed to reliably reason over this knowledge (a minimal backward-chaining sketch of this style of reasoning appears after this list).
arXiv Detail & Related papers (2023-02-07T22:37:21Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) knowledge, and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- An ASP-based Approach to Answering Natural Language Questions for Texts [8.417188296231059]
An approach based on answer set programming (ASP) is proposed in this paper for representing knowledge generated from natural language texts.
ASP can facilitate many natural language tasks such as automated question answering, text summarization, and automated question generation.
In this paper, we describe the CASPR system that we have developed to automate the task of answering natural language questions given English text.
arXiv Detail & Related papers (2021-12-21T14:13:06Z)
- Towards More Robust Natural Language Understanding [0.0]
Natural Language Understanding (NLU) is a branch of Natural Language Processing (NLP).
Recent years have witnessed notable progress across various NLU tasks with deep learning techniques.
It's worth noting that the human ability to understand natural language is flexible and robust.
arXiv Detail & Related papers (2021-12-01T17:27:19Z)
- Question Answering over Knowledge Bases by Leveraging Semantic Parsing and Neuro-Symbolic Reasoning [73.00049753292316]
We propose a semantic parsing and reasoning-based Neuro-Symbolic Question Answering (NSQA) system.
NSQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0.
arXiv Detail & Related papers (2020-12-03T05:17:55Z)
- SQuARE: Semantics-based Question Answering and Reasoning Engine [9.902883278247726]
We introduce a general semantics-based framework for natural language QA.
We also describe the SQuARE system, an application of this framework.
SQuARE achieves 100% accuracy on all five of the datasets that we have tested.
arXiv Detail & Related papers (2020-09-22T00:48:18Z)
- Unsupervised Commonsense Question Answering with Self-Talk [71.63983121558843]
We propose an unsupervised framework based on self-talk as a novel alternative approach to commonsense tasks.
Inspired by inquiry-based discovery learning, our approach queries language models with a number of information-seeking questions.
Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines.
arXiv Detail & Related papers (2020-04-11T20:43:37Z)
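Several entries above (STAR, the ASP-based CASPR system, and the SQuARE paper
itself) share the theme of goal-directed reasoning over extracted knowledge
that yields an answer together with a justification; the sketch referenced in
the STAR summary follows. It is a minimal, illustrative backward-chaining
prover in Python over ground facts and Horn-style rules that records the proof
steps it uses. The predicates, rules, and prover are illustrative assumptions;
the actual systems use goal-directed Answer Set Programming, which additionally
handles variables, negation, and constraints and is not replicated here.

```python
# Illustrative only: a toy backward-chaining prover that returns a proof
# trace, echoing the goal-directed, justification-producing reasoning the
# STAR and CASPR entries describe. Ground facts and ground Horn-style
# rules only; this is NOT goal-directed ASP (no variables, negation, or
# constraints). All predicates and rules below are hypothetical examples.

FACTS = {
    ("person", "john"),
    ("likes", "john", "logic"),
}

# Each rule is (head, body): the head holds if every body goal can be proved.
RULES = [
    (("enjoys_reasoning", "john"),
     (("person", "john"), ("likes", "john", "logic"))),
]


def prove(goal, trace):
    """Try to prove `goal` top-down; record the facts and rules used in `trace`."""
    if goal in FACTS:
        trace.append(f"fact: {goal}")
        return True
    for head, body in RULES:
        if head == goal and all(prove(subgoal, trace) for subgoal in body):
            trace.append(f"rule: {head} :- {body}")
            return True
    return False


if __name__ == "__main__":
    justification = []
    if prove(("enjoys_reasoning", "john"), justification):
        print("Query holds. Justification:")
        for step in justification:
            print("  ", step)
```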
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.