Fact-driven Logical Reasoning for Machine Reading Comprehension
- URL: http://arxiv.org/abs/2105.10334v2
- Date: Fri, 26 May 2023 05:42:15 GMT
- Title: Fact-driven Logical Reasoning for Machine Reading Comprehension
- Authors: Siru Ouyang, Zhuosheng Zhang and Hai Zhao
- Abstract summary: We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
- Score: 82.58857437343974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed an increasing interest in training machines with
reasoning ability, which deeply relies on accurately and clearly presented clue
forms. The clues are usually modeled as entity-aware knowledge in existing
studies. However, those entity-aware clues are primarily focused on
commonsense, making them insufficient for tasks that require knowledge of
temporary facts or events, particularly in logical reasoning for reading
comprehension. To address this challenge, we are motivated to cover both
commonsense and temporary knowledge clues hierarchically. Specifically, we
propose a general formalism of knowledge units by extracting backbone
constituents of the sentence, such as the subject-verb-object formed ``facts''.
We then construct a supergraph on top of the fact units, allowing for the
benefit of sentence-level (relations among fact groups) and entity-level
interactions (concepts or actions inside a fact). Experimental results on
logical reasoning benchmarks and dialogue modeling datasets show that our
approach improves the baselines substantially, and it is general across
backbone models. Code is available at
\url{https://github.com/ozyyshr/FocalReasoner}.
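To make the fact-unit idea concrete, here is a minimal sketch of extracting subject-verb-object "facts" and linking them into a supergraph. It uses spaCy dependency parses and networkx; the dependency labels chosen and the shared-argument linking rule are illustrative assumptions, not the FocalReasoner implementation (see the repository above for the actual code).

```python
# Minimal sketch: extract SVO "fact" units, then connect facts that
# share an entity argument into a supergraph. Illustrative only.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def extract_facts(text):
    """Return (subject, verb, object) backbone triples, one per clause."""
    facts = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subj = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in token.children if c.dep_ in ("dobj", "attr")]
                if subj and obj:
                    facts.append((subj[0].lemma_, token.lemma_, obj[0].lemma_))
    return facts

def build_supergraph(facts):
    """Fact nodes, with edges between facts sharing an argument (concept)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(facts)))
    for i, fi in enumerate(facts):
        for j in range(i + 1, len(facts)):
            if {fi[0], fi[2]} & {facts[j][0], facts[j][2]}:  # shared concept
                g.add_edge(i, j)
    return g

facts = extract_facts("Alice signed the contract. The contract binds Bob.")
print(facts)                          # [('Alice', 'sign', 'contract'), ...]
print(build_supergraph(facts).edges)  # [(0, 1)] via the shared 'contract'
```

Sentence-level interactions then correspond to edges between fact nodes, while entity-level interactions live inside each triple.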
Related papers
- SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge [60.76719375410635]
We propose a new benchmark (SOK-Bench) consisting of 44K questions and 10K situations with instance-level annotations depicted in the videos.
Answering these questions requires understanding and applying situated knowledge as well as general knowledge for problem-solving.
We generate associated question-answer pairs and reasoning processes, followed by manual review for quality assurance.
arXiv Detail & Related papers (2024-05-15T21:55:31Z)
- EventGround: Narrative Reasoning by Grounding to Eventuality-centric Knowledge Graphs [41.928535719157054]
We propose an initial comprehensive framework called EventGround to tackle the problem of grounding free-texts to eventuality-centric knowledge graphs.
We provide simple yet effective parsing and partial information extraction methods to tackle the problems that arise in this grounding process.
Our framework, incorporating grounded knowledge, achieves state-of-the-art performance while providing interpretable evidence.
arXiv Detail & Related papers (2024-03-30T01:16:37Z)
- An Overview Of Temporal Commonsense Reasoning and Acquisition [20.108317515225504]
Temporal commonsense reasoning refers to the ability to understand the typical temporal context of phrases, actions, and events.
Recent research on the performance of large language models suggests that they often take shortcuts in their reasoning and fall prey to simple linguistic traps.
arXiv Detail & Related papers (2023-07-28T01:30:15Z)
- Object Topological Character Acquisition by Inductive Learning [0.0]
In this paper, a formal representation of topological structure based on an object's skeleton (RTS) is proposed, and an induction process of "seeking common ground" is realized.
Object recognition is shown to rely not on simple physical features such as color, edges, and texture, but on shared geometric properties such as topology.
arXiv Detail & Related papers (2023-06-19T01:19:37Z)
- RECKONING: Reasoning through Dynamic Knowledge Encoding [51.076603338764706]
We show that language models can answer questions by reasoning over knowledge provided as part of the context.
However, when the context also contains irrelevant information, the model can fail to distinguish the knowledge that is necessary to answer the question.
We propose teaching the model to reason more robustly by folding the provided contextual knowledge into the model's parameters.
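A minimal sketch of what folding contextual knowledge into parameters can look like, assuming a short inner loop of language-modeling gradient steps on the provided facts before answering; the model choice, loss, and step count are illustrative, not the paper's exact bi-level procedure.

```python
# Sketch: take a few gradient steps on the contextual facts (LM loss),
# then answer from the updated parameters. Illustrative assumptions:
# gpt2 as the backbone, plain SGD, 3 inner steps.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def encode_knowledge(model, facts, steps=3, lr=1e-4):
    """Fold `facts` into a copy of the model via a short inner loop."""
    m = copy.deepcopy(model)
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    ids = tok(" ".join(facts), return_tensors="pt").input_ids
    for _ in range(steps):
        loss = m(ids, labels=ids).loss  # LM loss on the knowledge
        loss.backward()
        opt.step()
        opt.zero_grad()
    return m

facts = ["The meeting was moved to Friday.", "Friday is a holiday."]
m = encode_knowledge(model, facts)
q = tok("Q: When is the meeting? A:", return_tensors="pt").input_ids
out = m.generate(q, max_new_tokens=8, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
```

The question is then answered without the facts in the prompt, since they now live in the updated weights.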
arXiv Detail & Related papers (2023-05-10T17:54:51Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Generated Knowledge Prompting for Commonsense Reasoning [53.88983683513114]
We propose generating knowledge statements directly from a language model with a generic prompt format.
This approach improves the performance of both off-the-shelf and fine-tuned language models on four commonsense reasoning tasks.
Notably, we find that a model's predictions can improve when using its own generated knowledge.
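The two-stage flow can be sketched as follows; the prompt template, the demonstration, and the generic `generate` callable are illustrative assumptions, not the paper's exact prompts.

```python
# Sketch of generated knowledge prompting: a generic few-shot prompt
# elicits knowledge statements, which are then prepended to the question
# for answer prediction. `generate` stands in for any LM call.
KNOWLEDGE_PROMPT = """Generate some knowledge about the input.

Input: greenhouses are great for plants
Knowledge: A greenhouse traps heat, keeping plants warm in winter.

Input: {question}
Knowledge:"""

def answer_with_generated_knowledge(question, generate):
    knowledge = generate(KNOWLEDGE_PROMPT.format(question=question))
    # Condition the answer on both the generated knowledge and the question.
    return generate(f"{knowledge}\n{question}\nAnswer:")
```

The same model can serve as both the knowledge generator and the answer predictor, which is what makes the self-improvement finding above possible.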
arXiv Detail & Related papers (2021-10-15T21:58:03Z)
- A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation [98.25464306634758]
We propose to utilize commonsense knowledge from external knowledge bases to generate reasonable stories.
We employ multi-task learning, which combines the story generation objective with a discriminative objective that distinguishes true from fake stories.
Our model can generate more reasonable stories than state-of-the-art baselines, particularly in terms of logic and global coherence.
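A minimal sketch of such a combined objective, assuming a weighted sum of the language-modeling loss and a cross-entropy discriminative loss; the weight `lam` is an illustrative assumption.

```python
# Sketch of a multi-task objective: generation (LM) loss plus a
# discriminative loss for telling true stories from corrupted ones.
import torch.nn.functional as F

def multitask_loss(lm_loss, disc_logits, disc_labels, lam=0.5):
    disc_loss = F.cross_entropy(disc_logits, disc_labels)
    return lm_loss + lam * disc_loss
```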
arXiv Detail & Related papers (2020-01-15T05:42:27Z)