Visual Abductive Reasoning
- URL: http://arxiv.org/abs/2203.14040v1
- Date: Sat, 26 Mar 2022 10:17:03 GMT
- Title: Visual Abductive Reasoning
- Authors: Chen Liang, Wenguan Wang, Tianfei Zhou and Yi Yang
- Abstract summary: Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
- Score: 85.17040703205608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abductive reasoning seeks the likeliest possible explanation for partial
observations. Although abduction is frequently employed in human daily
reasoning, it is rarely explored in computer vision literature. In this paper,
we propose a new task and dataset, Visual Abductive Reasoning (VAR), for
examining the abductive reasoning ability of machine intelligence in everyday
visual situations. Given an incomplete set of visual events, AI systems are
required to not only describe what is observed, but also infer the hypothesis
that can best explain the visual premise. Based on our large-scale VAR dataset,
we devise a strong baseline model, Reasoner (causal-and-cascaded reasoning
Transformer). First, to capture the causal structure of the observations, a
contextualized directional position embedding strategy is adopted in the
encoder, which yields discriminative representations for the premise and
hypothesis. Then, multiple decoders are cascaded to generate and progressively
refine the premise and hypothesis sentences. The prediction scores of the
sentences are used to guide cross-sentence information flow in the cascaded
reasoning procedure. Our VAR benchmarking results show that Reasoner surpasses
many well-known video-language models, while still falling far behind human
performance. This work is expected to foster future efforts in the
reasoning-beyond-observation paradigm.
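The two mechanisms the abstract sketches, direction-aware position codes around the masked hypothesis and confidence-guided cascaded refinement, can be illustrated with a minimal toy sketch. This is not the authors' implementation: the function names, the plain-vector representation of events and sentences, and the simple weighted-average mixing rule are all assumptions made for illustration.

```python
from typing import List

def directional_position_embedding(
    n_events: int, hypothesis_idx: int, dim: int
) -> List[List[float]]:
    """Toy position codes whose sign encodes direction relative to the
    masked (hypothesis) event: negative offsets for premise events that
    occur before it, positive for those after, zero at the hypothesis."""
    embeddings = []
    for i in range(n_events):
        offset = i - hypothesis_idx  # signed distance to the hypothesis slot
        embeddings.append([float(offset)] * dim)
    return embeddings

def cascaded_refine(
    sentences: List[List[float]],
    scores: List[float],
    n_stages: int = 2,
) -> List[List[float]]:
    """Each cascade stage mixes every sentence representation with its
    neighbours, weighting incoming information by the neighbours'
    prediction scores, so confident sentences guide uncertain ones."""
    for _ in range(n_stages):
        refined = []
        for i, sent in enumerate(sentences):
            mixed = list(sent)
            total_weight = 1.0
            for j, other in enumerate(sentences):
                if j == i:
                    continue
                w = scores[j]  # confidence-guided cross-sentence flow
                mixed = [m + w * o for m, o in zip(mixed, other)]
                total_weight += w
            refined.append([m / total_weight for m in mixed])
        sentences = refined
    return sentences
```

In the real model these roles are played by learned Transformer components; the sketch only shows the information flow: direction-aware positions distinguish premise-before from premise-after, and refinement pulls each sentence toward its higher-confidence neighbours.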
Related papers
- UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations [62.71847873326847]
We investigate the ability to model unusual, unexpected, and unlikely situations.
Given a piece of context with an unexpected outcome, this task requires reasoning abductively to generate an explanation.
We release a new English language corpus called UNcommonsense.
arXiv Detail & Related papers (2023-11-14T19:00:55Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- Parallel Sentence-Level Explanation Generation for Real-World Low-Resource Scenarios [18.5713713816771]
This paper is the first to explore the problem across settings ranging from weakly supervised to unsupervised learning.
We propose a non-autoregressive interpretable model to facilitate parallel explanation generation and simultaneous prediction.
arXiv Detail & Related papers (2023-02-21T14:52:21Z)
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind: Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Explain and Predict, and then Predict Again [6.865156063241553]
We propose ExPred, which uses multi-task learning in the explanation-generation phase, effectively trading off explanation and prediction losses.
We conduct an extensive evaluation of our approach on three diverse language datasets.
arXiv Detail & Related papers (2021-01-11T19:36:52Z)
- On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning [15.965337956587373]
PlausIble Exceptionality-based Contrastive Explanations (PIECE) modifies all exceptional features in a test image to be normal from the perspective of the counterfactual class.
Two controlled experiments compare PIECE to others in the literature, showing that PIECE not only generates the most plausible counterfactuals on several measures, but also the best semi-factuals.
arXiv Detail & Related papers (2020-09-10T14:48:12Z)
- Explanations of Black-Box Model Predictions by Contextual Importance and Utility [1.7188280334580195]
We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations in a car-selection example and Iris flower classification by presenting complete (i.e., the causes of an individual prediction) and contrastive explanations.
arXiv Detail & Related papers (2020-05-30T06:49:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.