Think about it! Improving defeasible reasoning by first modeling the
question scenario
- URL: http://arxiv.org/abs/2110.12349v1
- Date: Sun, 24 Oct 2021 04:13:52 GMT
- Title: Think about it! Improving defeasible reasoning by first modeling the
question scenario
- Authors: Aman Madaan, Niket Tandon, Dheeraj Rajagopal, Peter Clark, Yiming
Yang, Eduard Hovy
- Abstract summary: Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence.
Our research goal asks whether neural models can similarly benefit from envisioning the question scenario before answering a defeasible query.
Our system, CURIOUS, achieves a new state-of-the-art on three different defeasible reasoning datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Defeasible reasoning is the mode of reasoning where conclusions can be
overturned by taking into account new evidence. Existing cognitive science
literature on defeasible reasoning suggests that a person forms a mental model
of the problem scenario before answering questions. Our research goal asks
whether neural models can similarly benefit from envisioning the question
scenario before answering a defeasible query. Our approach is, given a
question, to have a model first create a graph of relevant influences, and then
leverage that graph as an additional input when answering the question. Our
system, CURIOUS, achieves a new state-of-the-art on three different defeasible
reasoning datasets. This result is significant as it illustrates that
performance can be improved by guiding a system to "think about" a question and
explicitly model the scenario, rather than answering reflexively. Code, data,
and pre-trained models are located at https://github.com/madaan/thinkaboutit.
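The two-stage "think, then answer" pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the data flow only: the function names, the toy influence graph, and the answer logic are hypothetical stand-ins, not the actual CURIOUS models, which are trained neural generators.

```python
# Stage 1 builds an influence graph for the scenario; stage 2 answers the
# defeasible query conditioned on that graph. Both stages are toy stubs.

def build_influence_graph(premise, hypothesis, update):
    # Stage 1: envision the scenario as a small graph of influences.
    # A real model would generate nodes and edges; we return a fixed toy graph.
    return {
        "nodes": [premise, update, hypothesis],
        "edges": [(update, "weakens", hypothesis)],
    }

def answer_with_graph(premise, hypothesis, update, graph):
    # Stage 2: answer the query, using the graph as additional input.
    # Here we simply read off the edge label pointing at the hypothesis.
    for _, relation, target in graph["edges"]:
        if target == hypothesis:
            return "weakener" if relation == "weakens" else "strengthener"
    return "strengthener"

def curious_pipeline(premise, hypothesis, update):
    graph = build_influence_graph(premise, hypothesis, update)
    return answer_with_graph(premise, hypothesis, update, graph)

answer = curious_pipeline(
    premise="A tweet is offensive.",
    hypothesis="The tweet should be taken down.",
    update="The tweet is sarcastic.",
)
print(answer)  # -> weakener
```

The point of the sketch is the ordering: the graph is produced before, and consumed by, the answering step, rather than the model answering reflexively from the question alone.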
Related papers
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm for KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it contains graph constructing, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
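The construct-prune-rank pipeline attributed to GATHER above can be sketched as a toy example. The graph schema, pruning rule, and scoring function below are illustrative assumptions, not the paper's actual components.

```python
# Toy retriever-ranker sketch: build a graph over knowledge-base facts,
# prune it to question-relevant entities, then rank candidate answer paths.

def construct_graph(facts):
    # Each fact is a (head, relation, tail) triple; build an adjacency list.
    graph = {}
    for head, relation, tail in facts:
        graph.setdefault(head, []).append((relation, tail))
    return graph

def prune(graph, question_entities):
    # Keep only edges starting from entities mentioned in the question.
    return {e: edges for e, edges in graph.items() if e in question_entities}

def rank_paths(graph, start, score):
    # Enumerate one-hop paths and rank them by a path-level score.
    paths = [(start, rel, tail) for rel, tail in graph.get(start, [])]
    return sorted(paths, key=score, reverse=True)

facts = [("Eiffel Tower", "located_in", "Paris"),
         ("Eiffel Tower", "made_of", "iron"),
         ("Louvre", "located_in", "Paris")]
graph = prune(construct_graph(facts), {"Eiffel Tower"})
best = rank_paths(graph, "Eiffel Tower",
                  score=lambda p: 1.0 if p[1] == "located_in" else 0.0)
print(best[0])  # top-ranked path doubles as an inference explanation
```

The top-ranked path serves both as the retrieved answer and as an explanation of the reasoning, mirroring the paradigm described in the summary.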
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- RECKONING: Reasoning through Dynamic Knowledge Encoding [51.076603338764706]
We show that language models can answer questions by reasoning over knowledge provided as part of the context.
However, when the context also contains irrelevant information, the model can fail to distinguish the knowledge that is necessary to answer the question.
We propose teaching the model to reason more robustly by folding the provided contextual knowledge into the model's parameters.
arXiv Detail & Related papers (2023-05-10T17:54:51Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning [26.715242799194908]
We show how a question-answering system can demonstrate that its answers are implied by its own internal beliefs via a systematic chain of reasoning.
Our approach is to combine a trained backward-chaining model, capable of generating a set of premises entailing an answer hypothesis, with a verifier that checks that the model itself believes those premises.
To our knowledge, this is the first system to generate multistep chains that are both faithful (the answer follows from the reasoning) and truthful (the chain reflects the system's own internal beliefs).
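The backward-chaining-plus-verifier recipe described above can be sketched with lookup-table stand-ins. The candidate premise sets and the belief table below are invented illustrations, not the paper's trained generator and verifier models.

```python
# A generator proposes premise sets that would entail the hypothesis;
# a verifier keeps a chain only if the "model" believes every premise.

# Hypothetical generator output: hypothesis -> candidate premise sets.
CANDIDATE_PREMISES = {
    "metal conducts heat": [
        ["metal is shiny", "shiny things conduct heat"],  # unsupported chain
        ["metal is a thermal conductor", "thermal conductors conduct heat"],
    ],
}

# Hypothetical internal beliefs consulted by the verifier.
BELIEFS = {
    "metal is shiny": True,
    "shiny things conduct heat": False,
    "metal is a thermal conductor": True,
    "thermal conductors conduct heat": True,
}

def prove(hypothesis):
    # Return the first premise set whose members the model all believes.
    for premises in CANDIDATE_PREMISES.get(hypothesis, []):
        if all(BELIEFS.get(p, False) for p in premises):
            return premises  # faithful (entails answer) and truthful (believed)
    return None

chain = prove("metal conducts heat")
print(chain)
```

The verifier rejects the first candidate chain because one of its premises is not believed, which is what makes the surviving chain truthful as well as faithful.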
arXiv Detail & Related papers (2022-10-21T19:51:56Z)
- PQA: Perceptual Question Answering [35.051664704756995]
Perceptual organization remains one of the very few established theories on the human visual system.
In this paper, we rejuvenate the study of perceptual organization, by advocating two positional changes.
We examine purposefully generated synthetic data, instead of complex real imagery.
We then borrow insights from human psychology to design an agent that casts perceptual organization as a self-attention problem.
arXiv Detail & Related papers (2021-04-08T08:06:21Z)
- Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction [46.38201136570501]
We present a model that aggregates and combines evidence from multiple passages to adaptively predict a single answer or a set of question-answer pairs for ambiguous questions.
Our model, named Refuel, achieves a new state-of-the-art performance on the AmbigQA dataset, and shows competitive performance on NQ-Open and TriviaQA.
arXiv Detail & Related papers (2020-11-26T05:48:55Z)
- Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering [41.73026155036886]
This paper proposes an explainable, evidence-based memory network architecture.
It learns to summarize the dataset and extract supporting evidence to make its decision.
Our model achieves state-of-the-art performance on two popular question answering datasets.
arXiv Detail & Related papers (2020-11-05T21:18:21Z)
- Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" [49.76230210108583]
We propose a framework to isolate and evaluate the reasoning aspect of visual question answering (VQA) separately from its perception.
We also propose a novel top-down calibration technique that allows the model to answer reasoning questions even with imperfect perception.
On the challenging GQA dataset, this framework is used to perform in-depth, disentangled comparisons between well-known VQA models.
arXiv Detail & Related papers (2020-06-20T08:48:29Z)
- SQuINTing at VQA Models: Introspecting VQA Models with Sub-Questions [66.86887670416193]
We show that state-of-the-art VQA models have comparable performance in answering perception and reasoning questions, but suffer from consistency problems.
To address this shortcoming, we propose an approach called Sub-Question-aware Network Tuning (SQuINT).
We show that SQuINT improves model consistency by 5%, marginally improves performance on the Reasoning questions in VQA, and produces better attention maps.
arXiv Detail & Related papers (2020-01-20T01:02:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.