Knowledge Authoring with Factual English
- URL: http://arxiv.org/abs/2208.03094v1
- Date: Fri, 5 Aug 2022 10:49:41 GMT
- Title: Knowledge Authoring with Factual English
- Authors: Yuheng Wang (Department of Computer Science, Stony Brook University),
Giorgian Borca-Tasciuc (Department of Computer Science, Stony Brook
University), Nikhil Goel (Department of Computer Science, Stony Brook
University), Paul Fodor (Department of Computer Science, Stony Brook
University), Michael Kifer (Department of Computer Science, Stony Brook
University)
- Abstract summary: Knowledge representation and reasoning (KRR) systems represent knowledge as collections of facts and rules.
One solution could be to extract knowledge from English text, and a number of works have attempted to do so.
Unfortunately, extraction of logical facts from unrestricted natural language is still too inaccurate to be used for reasoning.
Recent CNL-based approaches, such as the Knowledge Authoring Logic Machine (KALM), have been shown to achieve very high accuracy compared to other approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge representation and reasoning (KRR) systems represent knowledge as
collections of facts and rules. Like databases, KRR systems contain information
about domains of human activities like industrial enterprises, science, and
business. KRRs can represent complex concepts and relations, and they can query
and manipulate information in sophisticated ways. Unfortunately, the KRR
technology has been hindered by the fact that specifying the requisite
knowledge requires skills that most domain experts do not have, and
professional knowledge engineers are hard to find. One solution could be to
extract knowledge from English text, and a number of works have attempted to do
so (OpenSesame, Google's Sling, etc.). Unfortunately, at present, extraction of
logical facts from unrestricted natural language is still too inaccurate to be
used for reasoning, while restricting the grammar of the language (so-called
controlled natural language, or CNL) is hard for the users to learn and use.
Nevertheless, some recent CNL-based approaches, such as the Knowledge Authoring
Logic Machine (KALM), have been shown to achieve very high accuracy compared to others,
and a natural question is to what extent the CNL restrictions can be lifted. In
this paper, we address this issue by transplanting the KALM framework to a
neural natural language parser, mStanza. Here we limit our attention to
authoring facts and queries and therefore our focus is what we call factual
English statements. Authoring other types of knowledge, such as rules, will be
considered in our follow-up work. As it turns out, neural-network-based parsers
have problems of their own, and the mistakes they make range from part-of-speech
tagging errors to lemmatization and dependency errors. We present a number of
techniques for combating these problems and test the new system, KALMFL (i.e.,
KALM for factual language), on a number of benchmarks, which show that KALMFL
achieves correctness in excess of 95%.
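
For illustration, the following is a minimal sketch (not the authors' code) of the kind of pipeline the abstract describes: parse a factual English statement with a neural dependency parser and map the parse into a Prolog-style logical fact. The sketch uses the off-the-shelf Stanza parser rather than the paper's modified mStanza, and the single extraction rule (root verb plus its subject and object) is a hypothetical simplification of KALM's frame-based semantic analysis.

    # Minimal sketch: factual English sentence -> Prolog-style fact.
    # Uses stock Stanza (not the paper's mStanza); the extraction rule below
    # is a simplified stand-in for KALM's frame-based analysis.
    import stanza

    stanza.download("en")  # fetch English models on first run
    nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

    def sentence_to_fact(text):
        """Turn a simple factual statement into a Prolog-style fact string."""
        sent = nlp(text).sentences[0]
        # The root of the dependency parse is the main verb of the statement.
        root = next(w for w in sent.words if w.deprel == "root")
        # Collect the verb's subject and object dependents (one word each here).
        args = {}
        for w in sent.words:
            if w.head == root.id and w.deprel in ("nsubj", "obj"):
                args[w.deprel] = w.lemma.lower()
        return f'{root.lemma.lower()}({args.get("nsubj", "_")}, {args.get("obj", "_")}).'

    print(sentence_to_fact("Mary bought a car."))  # -> buy(mary, car).

A wrong lemma, part-of-speech tag, or dependency label from the neural parser propagates directly into the extracted fact, which is exactly the class of errors the paper's correction techniques target.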
Related papers
- General Reasoning Requires Learning to Reason from the Get-go [19.90997698310839]
Large Language Models (LLMs) have demonstrated impressive real-world utility.
But their ability to reason adaptively and robustly remains fragile.
We propose disentangling knowledge and reasoning through three key directions.
arXiv Detail & Related papers (2025-02-26T18:51:12Z)
- Knowledge Authoring with Factual English, Rules, and Actions [1.2110885481490308]
CNL-based approaches have been shown to have very high accuracy compared to others.
We propose KALM for Rules and Actions (KALMR) to represent and reason with rules and actions.
When used for authoring and reasoning with actions, our approach achieves more than 99.3% correctness.
arXiv Detail & Related papers (2024-11-09T19:01:34Z)
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements [59.71218039095155]
The task of reading comprehension (RC) provides a primary means to assess language models' natural language understanding (NLU) capabilities.
If the context aligns with the models' internal knowledge, it is hard to discern whether the models' answers stem from context comprehension or from internal information.
To address this issue, we suggest using RC on imaginary data, based on fictitious facts and entities.
arXiv Detail & Related papers (2024-04-09T13:08:56Z)
- KnowTuning: Knowledge-aware Fine-tuning for Large Language Models [83.5849717262019]
We propose a knowledge-aware fine-tuning (KnowTuning) method to improve fine-grained and coarse-grained knowledge awareness of LLMs.
KnowTuning generates more facts with a lower factual error rate under fine-grained fact evaluation.
arXiv Detail & Related papers (2024-02-17T02:54:32Z)
- Is Knowledge All Large Language Models Needed for Causal Reasoning? [11.476877330365664]
This paper explores the causal reasoning of large language models (LLMs) to enhance their interpretability and reliability in advancing artificial intelligence.
We propose a novel causal attribution model that utilizes "do-operators" for constructing counterfactual scenarios.
arXiv Detail & Related papers (2023-12-30T04:51:46Z)
- Knowledge Authoring for Rules and Actions [1.942275677807562]
We propose KALMRA to enable authoring of rules and actions.
Our evaluation shows that KALMRA achieves a high level of correctness (100%) on rule authoring.
We illustrate the logical reasoning capabilities of KALMRA by drawing attention to the problems faced by the recently popularized AI, ChatGPT.
arXiv Detail & Related papers (2023-05-12T21:08:35Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) knowledge, and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering? [22.536777694218593]
Masked neural language models (MNLMs) consist of huge neural network structures and are trained to restore masked text.
This paper provides new insights and empirical analyses on commonsense knowledge included in pretrained MNLMs.
arXiv Detail & Related papers (2022-09-01T17:15:02Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)