Extending Automated Deduction for Commonsense Reasoning
- URL: http://arxiv.org/abs/2003.13159v1
- Date: Sun, 29 Mar 2020 23:17:16 GMT
- Title: Extending Automated Deduction for Commonsense Reasoning
- Authors: Tanel Tammet
- Abstract summary: The paper argues that the methods and algorithms used by automated reasoners for classical first-order logic can be extended towards commonsense reasoning.
The proposed extensions mostly rely on operating on ordinary proof trees and are devised to handle commonsense knowledge bases containing inconsistencies, default rules, taxonomies, topics, relevance, confidence and similarity measures.
We claim that machine learning is best suited for the construction of commonsense knowledge bases while the extended logic-based methods would be well-suited for actually answering queries from these knowledge bases.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonsense reasoning has long been considered one of the holy grails of
artificial intelligence. Most of the recent progress in the field has been
achieved by novel machine learning algorithms for natural language processing.
However, without incorporating logical reasoning, these algorithms remain
arguably shallow. With some notable exceptions, developers of practical
automated logic-based reasoners have mostly avoided focusing on the problem.
The paper argues that the methods and algorithms used by existing automated
reasoners for classical first-order logic can be extended towards commonsense
reasoning. Instead of devising new specialized logics we propose a framework of
extensions to the mainstream resolution-based search methods to make these
capable of performing search tasks for practical commonsense reasoning with
reasonable efficiency. The proposed extensions mostly rely on operating on
ordinary proof trees and are devised to handle commonsense knowledge bases
containing inconsistencies, default rules, taxonomies, topics, relevance,
confidence and similarity measures. We claim that machine learning is best
suited for the construction of commonsense knowledge bases while the extended
logic-based methods would be well-suited for actually answering queries from
these knowledge bases.
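The core proposal of the abstract, attaching confidence measures to clauses and combining them along ordinary resolution proof trees, can be made concrete with a toy sketch. Everything below (the integer literal encoding, the min-combination rule, the example knowledge base) is an illustrative assumption for this summary, not the paper's actual algorithm:

```python
# Toy propositional resolution where each clause carries a confidence
# in (0, 1]; a resolvent inherits the minimum of its parents' scores.
# Positive integers are atoms, negative integers their negations.

def resolve(c1, c2):
    """Resolve on the first complementary literal pair, if any."""
    for lit in c1[0]:
        if -lit in c2[0]:
            merged = (c1[0] - {lit}) | (c2[0] - {-lit})
            return frozenset(merged), min(c1[1], c2[1])
    return None

def prove(clauses, goal):
    """Saturate clauses + negated goal; return the confidence of a
    refutation (i.e. of the derived empty clause), or None."""
    pool = {(frozenset(c), w) for c, w in clauses}
    pool.add((frozenset({-goal}), 1.0))
    changed = True
    while changed:
        changed = False
        for c1 in list(pool):
            for c2 in list(pool):
                r = resolve(c1, c2)
                if r is None:
                    continue
                if not r[0]:          # empty clause: refutation found
                    return r[1]
                if r not in pool:
                    pool.add(r)
                    changed = True
    return None

# 1 = "bird(tweety)", 2 = "flies(tweety)";
# the default "birds fly" holds only with confidence 0.9.
kb = [({-1, 2}, 0.9), ({1}, 1.0)]
```

Here `prove(kb, 2)` yields 0.9: the answer's confidence is bounded by the weakest clause used in its proof tree, which is one simple way such measures could propagate.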
Related papers
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
We augment the arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT has substantial improvements (an average +5.5% MRR gain) over advanced methods.
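The binary tree decomposition mentioned above can be pictured with a small sketch: a nested logical query is split into a tree whose leaves are the atomic sub-queries a model would be prompted with. The `Node` type and the query syntax here are assumptions made for illustration, not LACT's actual schema:

```python
# Illustrative binary tree decomposition of a nested logical query.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: str                        # "AND", "OR", or "ATOM"
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    atom: Optional[str] = None

def decompose(query):
    """Turn a nested tuple query like ("AND", a, ("OR", b, c))
    into a binary tree of Node objects."""
    if isinstance(query, str):
        return Node(op="ATOM", atom=query)
    op, lhs, rhs = query
    return Node(op=op, left=decompose(lhs), right=decompose(rhs))

def leaves(node):
    """Collect the atomic sub-queries in left-to-right order."""
    if node.op == "ATOM":
        return [node.atom]
    return leaves(node.left) + leaves(node.right)

tree = decompose(("AND", "bornIn(x, Tallinn)",
                  ("OR", "logician(x)", "programmer(x)")))
```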
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language?
This question has been receiving significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- Soft Reasoning on Uncertain Knowledge Graphs [85.1968214421899]
We study the setting of soft queries on uncertain knowledge, which is motivated by the establishment of soft constraint programming.
We propose an ML-based approach with both forward inference and backward calibration to answer soft queries on large-scale, incomplete, and uncertain knowledge graphs.
arXiv Detail & Related papers (2024-03-03T13:13:53Z)
- Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic [19.476840373850653]
Large language models can hallucinate because their reasoning procedures are unconstrained by logical principles.
We propose LoT (Logical Thoughts), a self-improvement prompting framework that leverages principles rooted in symbolic logic.
Experimental evaluations conducted on language tasks in diverse domains, including arithmetic, commonsense, symbolic, causal inference, and social problems, demonstrate the efficacy of enhanced reasoning by logic.
arXiv Detail & Related papers (2023-09-23T11:21:12Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose complexity-impacted reasoning score (CIRS) to measure correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
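The idea of encoding structural information via the abstract syntax tree can be illustrated with a few lines of Python. The three measures below (node count, control-flow count, nesting depth) are a simple assumed flavor of such structural statistics; the actual CIRS formula is defined in the paper and is not reproduced here:

```python
# Minimal AST-based structural measures for a piece of Python code.
import ast

def structural_stats(source):
    tree = ast.parse(source)
    nodes = sum(1 for _ in ast.walk(tree))
    control = sum(isinstance(n, (ast.If, ast.For, ast.While))
                  for n in ast.walk(tree))

    def depth(node):
        children = list(ast.iter_child_nodes(node))
        return 1 + max(map(depth, children), default=0)

    return {"nodes": nodes, "control": control, "depth": depth(tree)}

stats = structural_stats("for i in range(3):\n    if i % 2:\n        print(i)")
```

A loop containing a conditional yields two control-flow nodes and a fairly deep tree, so nesting and branching show up directly in the statistics.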
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- Some Preliminary Steps Towards Metaverse Logic [0.8594140167290096]
In the present work we look for a logic powerful enough to handle the situations arising in both the real and the fictional underlying application domains.
The discussion was kept at a rather informal level, always trying to convey the intuition behind the theoretical notions in natural language terms.
arXiv Detail & Related papers (2023-07-10T09:13:22Z)
- Connecting Proof Theory and Knowledge Representation: Sequent Calculi and the Chase with Existential Rules [1.8275108630751844]
We show that the chase mechanism in the context of existential rules is in essence the same as proof-search in an extension of Gentzen's sequent calculus for first-order logic.
We formally connect a central tool for establishing decidability proof-theoretically with a central decidability tool in the context of knowledge representation.
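For readers unfamiliar with the chase, one step of it can be sketched as follows: an existential rule fires on matching facts and invents a labelled null as a witness. The rule person(x) -> exists y. parent(x, y), the predicate names, and the fresh-null scheme are all assumptions made for this sketch:

```python
# Toy illustration of one chase step for a single existential rule:
# person(x) -> exists y. parent(x, y).
import itertools

_fresh = itertools.count()

def chase_step(facts):
    """If some person x has no known parent fact, add parent(x, n)
    for a fresh labelled null n; otherwise leave the facts alone."""
    new = set(facts)
    for pred, *args in facts:
        if pred == "person":
            x = args[0]
            if not any(p == "parent" and a[0] == x for p, *a in facts):
                new.add(("parent", x, f"_null{next(_fresh)}"))
    return new

db = {("person", "ann")}
db = chase_step(db)
```

Repeating such steps until nothing changes is essentially a saturation process, which is what makes the analogy to proof-search in a sequent calculus plausible.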
arXiv Detail & Related papers (2023-06-05T01:10:23Z)
- LAMBADA: Backward Chaining for Automated Reasoning in Natural Language [11.096348678079574]
A backward chaining algorithm, called LAMBADA, decomposes reasoning into four sub-modules.
We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods.
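The goal-directed style described above can be sketched as generic backward chaining over Horn rules; this is only the textbook algorithm, and LAMBADA's four sub-modules operating over natural language are not modeled here:

```python
# Generic backward chaining: prove a goal by working backwards
# from it through rules until known facts are reached.

def backward_chain(goal, facts, rules, depth=10):
    """`rules` maps a head atom to a list of bodies, where each
    body is a list of atoms that jointly imply the head."""
    if depth == 0:            # crude loop/recursion guard
        return False
    if goal in facts:
        return True
    for body in rules.get(goal, []):
        if all(backward_chain(g, facts, rules, depth - 1) for g in body):
            return True
    return False

facts = {"bird(tweety)"}
rules = {"flies(tweety)": [["bird(tweety)"]]}
```

Unlike forward chaining, which derives every consequence of the facts, this explores only the rules relevant to the query, which is the source of the accuracy and efficiency gains claimed for goal-directed methods.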
arXiv Detail & Related papers (2022-12-20T18:06:03Z)
- An Extensible Logic Embedding Tool for Lightweight Non-Classical Reasoning [91.3755431537592]
The logic embedding tool provides a procedural encoding for non-classical reasoning problems into classical higher-order logic.
It can support an increasing number of different non-classical logics as reasoning targets.
arXiv Detail & Related papers (2022-03-23T12:08:51Z)
- Modeling and Automating Public Announcement Logic with Relativized Common Knowledge as a Fragment of HOL in LogiKEy [0.0]
This article presents a semantical embedding for public announcement logic with relativized common knowledge.
It enables the first-time automation of this logic with off-the-shelf theorem provers for classical higher-order logic.
The work constitutes an important addition to the pluralist LogiKEy knowledge engineering methodology.
arXiv Detail & Related papers (2021-11-02T15:14:52Z) - Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.