Knowledge Authoring with Factual English, Rules, and Actions
- URL: http://arxiv.org/abs/2411.06253v1
- Date: Sat, 09 Nov 2024 19:01:34 GMT
- Title: Knowledge Authoring with Factual English, Rules, and Actions
- Authors: Yuheng Wang
- Abstract summary: CNL-based approaches have been shown to achieve very high accuracy compared to others.
We propose KALM for Rules and Actions (KALMR) to represent and reason with rules and actions.
When used for authoring and reasoning with actions, our approach achieves more than 99.3% correctness.
- Score: 1.2110885481490308
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge representation and reasoning (KRR) systems represent knowledge as collections of facts and rules. KRRs can represent complex concepts and relations, and they can query and manipulate information in sophisticated ways. Unfortunately, KRR technology has been hindered by the fact that specifying the requisite knowledge requires skills that most domain experts do not have, and professional knowledge engineers are hard to find. Some recent approaches based on controlled natural language (CNL), such as the Knowledge Authoring Logic Machine (KALM), have been shown to achieve very high accuracy compared to others, and a natural question is to what extent the CNL restrictions can be lifted. Besides the CNL restrictions, KALM is limited in the types of knowledge it can represent. To address these issues, we propose an extension of KALM called KALM for Factual Language (KALMF). KALMF uses a neural parser for natural language, MS, to parse what we call factual English sentences, which require little grammar training to use. Building upon KALMF, we propose KALM for Rules and Actions (KALMR) to represent and reason with rules and actions. Furthermore, we identify the reasons behind the slow speed of KALM and make optimizations to address this issue. Our evaluation using multiple benchmarks shows that our approaches achieve a high level of correctness on fact and query authoring (95%) and on rule authoring (100%). When used for authoring and reasoning with actions, our approach achieves more than 99.3% correctness, demonstrating its effectiveness in enabling more sophisticated knowledge representation and reasoning. We also illustrate the logical reasoning capabilities of our approach by drawing attention to the problems faced by the famous AI, ChatGPT. Finally, the evaluation of the newly proposed speed optimization shows not only a 68% runtime improvement but also better accuracy of the overall system.
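To make concrete the kind of knowledge such a system authors and reasons over, here is a minimal sketch in plain Python of facts, a rule, and a query answered by naive forward chaining. This is not the KALM/KALMR implementation (which translates factual English into a formal logical representation and uses a dedicated reasoner); the predicate and entity names are purely illustrative.

```python
# Toy illustration of fact/rule authoring and reasoning.
# This is NOT the KALM/KALMR pipeline; it only mimics the kind of
# facts, rules, and queries such a system produces, in plain Python.

# Facts, e.g. "Mary is a parent of John." -> ("parent", "Mary", "John")
facts = {
    ("parent", "Mary", "John"),
    ("parent", "John", "Ann"),
}

# Rule: "If X is a parent of Y and Y is a parent of Z, then X is a grandparent of Z."
def grandparent_rule(kb):
    derived = set()
    for (p1, x, y1) in kb:
        if p1 != "parent":
            continue
        for (p2, y2, z) in kb:
            if p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

def forward_chain(kb, rules):
    """Apply rules until no new facts are derived (naive fixpoint)."""
    kb = set(kb)
    while True:
        new = set()
        for rule in rules:
            new |= rule(kb) - kb
        if not new:
            return kb
        kb |= new

kb = forward_chain(facts, [grandparent_rule])
# Query: "Who is a grandparent of Ann?"
print([x for (p, x, z) in kb if p == "grandparent" and z == "Ann"])  # ['Mary']
```

The real systems compile authored sentences into a logical representation and delegate query answering to a logic engine; the sketch above only approximates the observable behavior.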
Related papers
- KnowLogic: A Benchmark for Commonsense Reasoning via Knowledge-Driven Data Synthesis [33.72114830484246]
We introduce KnowLogic, a benchmark generated through a knowledge-driven synthetic data strategy.
KnowLogic integrates diverse commonsense knowledge, plausible scenarios, and various types of logical reasoning.
Our benchmark consists of 3,000 bilingual (Chinese and English) questions across various domains.
arXiv Detail & Related papers (2025-03-08T13:40:10Z)
- LINKED: Eliciting, Filtering and Integrating Knowledge in Large Language Model for Commonsense Reasoning [21.12539851761666]
Large language models (LLMs) sometimes demonstrate poor performance on knowledge-intensive tasks.
We propose a novel method named eliciting, filtering and integrating knowledge in large language models (LINKED).
In comprehensive experiments on two complex commonsense reasoning benchmarks, our method outperforms SOTA baselines (up to a 9.0% improvement in accuracy).
arXiv Detail & Related papers (2024-10-12T14:12:22Z)
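The LINKED summary above describes an elicit, filter, and integrate pipeline for knowledge-augmented question answering. The sketch below is only a schematic rendering of that general pattern, not the paper's actual method or API; `ask_llm`, the prompts, and the self-check filter are hypothetical stand-ins.

```python
# Illustrative elicit -> filter -> integrate loop for knowledge-augmented QA.
# `ask_llm` is a hypothetical stand-in for any chat-completion call.
from typing import Callable, List

def elicit_knowledge(ask_llm: Callable[[str], str], question: str, n: int = 5) -> List[str]:
    """Elicit candidate knowledge statements relevant to the question."""
    prompt = f"State one commonsense fact relevant to answering: {question}"
    return [ask_llm(prompt) for _ in range(n)]

def filter_knowledge(ask_llm: Callable[[str], str], question: str, statements: List[str]) -> List[str]:
    """Keep only statements the model itself judges true and relevant (a crude self-check filter)."""
    kept = []
    for s in statements:
        verdict = ask_llm(f"Is this statement true and relevant to '{question}'? Answer yes or no.\n{s}")
        if verdict.strip().lower().startswith("yes"):
            kept.append(s)
    return kept

def integrate_and_answer(ask_llm: Callable[[str], str], question: str, knowledge: List[str]) -> str:
    """Answer the question with the filtered knowledge prepended to the prompt."""
    context = "\n".join(f"- {k}" for k in knowledge)
    return ask_llm(f"Knowledge:\n{context}\n\nQuestion: {question}\nAnswer:")
```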
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with the resulting KnowReason dataset.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating a KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus with the LM equipped with an adapter, while keeping its original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora of increasing reasoning difficulty, and further trains the LM from easy to hard in a curriculum manner.
arXiv Detail & Related papers (2024-03-11T03:42:03Z)
- DeepEdit: Knowledge Editing as Decoding with Constraints [118.78008395850888]
How to edit the knowledge used in multi-step reasoning has become the major challenge in the knowledge editing (KE) of large language models (LLMs).
We propose a new KE framework, DEEPEDIT, which enhances LLMs' ability to generate coherent reasoning chains with new knowledge through depth-first search.
In addition to DEEPEDIT, we propose two new KE benchmarks: MQUAKE-2002 and MQUAKE-HARD, which provide more precise and challenging assessments of KE approaches.
arXiv Detail & Related papers (2024-01-19T03:48:27Z)
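The DEEPEDIT summary mentions depth-first search over reasoning chains constrained by edited knowledge, but the abstract does not give the algorithm. The following is only a schematic sketch of constrained DFS decoding under that reading; `propose_next_steps`, `satisfies_constraints`, and `is_final_answer` are hypothetical placeholders for an LLM step generator and the knowledge-editing constraint checks.

```python
# Schematic DFS over candidate reasoning steps, keeping only chains that
# satisfy constraints derived from the edited (new) knowledge.
from typing import Callable, List, Optional

def dfs_reasoning(question: str,
                  propose_next_steps: Callable[[str, List[str]], List[str]],
                  satisfies_constraints: Callable[[List[str]], bool],
                  is_final_answer: Callable[[str], bool],
                  max_depth: int = 6) -> Optional[List[str]]:
    """Return the first constraint-respecting reasoning chain ending in an answer."""
    def dfs(chain: List[str]) -> Optional[List[str]]:
        if chain and is_final_answer(chain[-1]):
            return chain
        if len(chain) >= max_depth:
            return None
        for step in propose_next_steps(question, chain):
            new_chain = chain + [step]
            if not satisfies_constraints(new_chain):
                continue  # prune chains that contradict the edited knowledge
            found = dfs(new_chain)
            if found is not None:
                return found
        return None

    return dfs([])
```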
- KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning over Knowledge Graph [11.808990571175269]
Large language models (LLMs) have achieved outstanding performance on various downstream tasks thanks to their powerful natural language understanding and zero-shot capabilities, but they still suffer from knowledge limitations.
We propose a novel framework, KnowledgeNavigator, to address these challenges by efficiently and accurately retrieving external knowledge from knowledge graphs.
We evaluate KnowledgeNavigator on multiple public KGQA benchmarks; the experiments show that the framework is highly effective and generalizes well.
arXiv Detail & Related papers (2023-12-26T04:22:56Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
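The ChatRule summary above says a ranking module estimates rule quality from existing KG facts. A standard way to do that is with support and confidence over rule groundings; the sketch below implements those two statistics for simple two-hop rules and is an assumption about the general idea, not the paper's exact scoring function.

```python
# Support/confidence scoring of a two-hop logical rule against KG triples.
from collections import defaultdict
from typing import Iterable, Tuple

Triple = Tuple[str, str, str]  # (head_entity, relation, tail_entity)

def score_rule(triples: Iterable[Triple], body: Tuple[str, str], head: str) -> Tuple[int, float]:
    """Rule: (x, body[0], y) and (y, body[1], z)  =>  (x, head, z).
    Returns (support, confidence): support counts body groundings whose
    conclusion is already a KG fact; confidence = support / #body groundings."""
    triples = set(triples)
    by_relation = defaultdict(list)
    for h, r, t in triples:
        by_relation[r].append((h, t))

    body_groundings = 0
    support = 0
    for x, y in by_relation[body[0]]:
        for y2, z in by_relation[body[1]]:
            if y != y2:
                continue
            body_groundings += 1
            if (x, head, z) in triples:
                support += 1
    confidence = support / body_groundings if body_groundings else 0.0
    return support, confidence

# Example: does "brother_of o parent_of => uncle_of" hold in a tiny KG?
kg = [("Bob", "brother_of", "Mary"), ("Mary", "parent_of", "Ann"), ("Bob", "uncle_of", "Ann")]
print(score_rule(kg, ("brother_of", "parent_of"), "uncle_of"))  # (1, 1.0)
```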
- Knowledge Authoring for Rules and Actions [1.942275677807562]
We propose KALMRA to enable authoring of rules and actions.
Our evaluation shows that KALMRA achieves a high level of correctness (100%) on rule authoring.
We illustrate the logical reasoning capabilities of KALMRA by drawing attention to the problems faced by the recently made famous AI, ChatGPT.
arXiv Detail & Related papers (2023-05-12T21:08:35Z)
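The KALMRA entry above concerns authoring and reasoning with actions, i.e. statements whose execution changes the set of known facts. The abstract does not spell out the action formalism, so the following is a deliberately simple add/delete-effect sketch in the spirit of STRIPS-style updates, not KALMRA's actual representation; all names are illustrative.

```python
# Toy model of an action as add/delete effects on a fact base.
from dataclasses import dataclass
from typing import FrozenSet, Set, Tuple

Fact = Tuple[str, ...]  # e.g. ("at", "book", "shelf")

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: FrozenSet[Fact]
    adds: FrozenSet[Fact]
    deletes: FrozenSet[Fact]

def apply_action(state: Set[Fact], action: Action) -> Set[Fact]:
    """Apply an action if its preconditions hold; otherwise leave the state unchanged."""
    if not action.preconditions <= state:
        return set(state)
    return (set(state) - set(action.deletes)) | set(action.adds)

# "Mary moves the book from the shelf to the desk."
move_book = Action(
    name="move_book",
    preconditions=frozenset({("at", "book", "shelf")}),
    adds=frozenset({("at", "book", "desk")}),
    deletes=frozenset({("at", "book", "shelf")}),
)

state = {("at", "book", "shelf")}
state = apply_action(state, move_book)
print(("at", "book", "desk") in state)  # True
```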
- Knowledge Authoring with Factual English [0.0]
Knowledge representation and reasoning (KRR) systems represent knowledge as collections of facts and rules.
One solution could be to extract knowledge from English text, and a number of works have attempted to do so.
Unfortunately, extraction of logical facts from unrestricted natural language is still too inaccurate to be used for reasoning.
Recent CNL-based approaches, such as the Knowledge Authoring Logic Machine (KALM), have been shown to achieve very high accuracy compared to others.
arXiv Detail & Related papers (2022-08-05T10:49:41Z)