ExplaGraphs: An Explanation Graph Generation Task for Structured
Commonsense Reasoning
- URL: http://arxiv.org/abs/2104.07644v2
- Date: Sat, 17 Apr 2021 23:34:27 GMT
- Title: ExplaGraphs: An Explanation Graph Generation Task for Structured
Commonsense Reasoning
- Authors: Swarnadeep Saha, Prateek Yadav, Lisa Bauer, Mohit Bansal
- Abstract summary: We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
- Score: 65.15423587105472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent commonsense-reasoning tasks are typically discriminative in nature,
where a model answers a multiple-choice question for a certain context.
Discriminative tasks are limiting because they fail to adequately evaluate the
model's ability to reason and explain predictions with underlying commonsense
knowledge. They also allow such models to use reasoning shortcuts and not be
"right for the right reasons". In this work, we present ExplaGraphs, a new
generative and structured commonsense-reasoning task (and an associated
dataset) of explanation graph generation for stance prediction. Specifically,
given a belief and an argument, a model has to predict whether the argument
supports or counters the belief and also generate a commonsense-augmented graph
that serves as a non-trivial, complete, and unambiguous explanation for the
predicted stance. The explanation graphs for our dataset are collected via
crowdsourcing through a novel Collect-Judge-And-Refine graph collection
framework that improves the graph quality via multiple rounds of verification
and refinement. A significant 83% of our graphs contain external commonsense
nodes with diverse structures and reasoning depths. We also propose a
multi-level evaluation framework that checks for the structural and semantic
correctness of the generated graphs and their plausibility with human-written
graphs. We experiment with state-of-the-art text generation models like BART
and T5 to generate explanation graphs and observe that there is a large gap
with human performance, thereby encouraging useful future work for this new
commonsense graph-based explanation generation task.
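The task format described above — a (belief, argument) pair mapped to a stance label plus an explanation graph of commonsense triples — can be sketched as a small data structure with one structural check. This is a hypothetical illustration: the field names, the example belief/argument, and the connectedness check are assumptions for exposition, not the authors' actual data format or evaluation code.

```python
# Hypothetical sketch of an ExplaGraphs-style task instance.
# Field names and contents are illustrative, not the released schema.
from collections import defaultdict

instance = {
    "belief": "Factory farming should be banned.",
    "argument": "Factory farming causes animal suffering.",
    "stance": "support",
    # Explanation graph as (head, relation, tail) triples; some nodes may be
    # external commonsense concepts not mentioned in the belief or argument.
    "graph": [
        ("factory farming", "causes", "animal suffering"),
        ("animal suffering", "is a", "harm"),
        ("harm", "justifies", "ban"),
    ],
}

def is_connected(triples):
    """Check that the explanation graph forms a single connected component
    (treating edges as undirected) -- one plausible structural constraint
    of the kind a multi-level evaluation framework might enforce."""
    adj = defaultdict(set)
    for head, _, tail in triples:
        adj[head].add(tail)
        adj[tail].add(head)
    nodes = list(adj)
    if not nodes:
        return False
    seen, stack = set(), [nodes[0]]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj[node] - seen)
    return len(seen) == len(nodes)

print(is_connected(instance["graph"]))  # True: the toy graph is one chain
```

A full structural check would also verify edge-label validity and acyclicity; connectedness is shown here only as the simplest representative constraint.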
Related papers
- Motif-Consistent Counterfactuals with Adversarial Refinement for Graph-Level Anomaly Detection [30.618065157205507]
We propose a novel approach, Motif-consistent Counterfactuals with Adversarial Refinement (MotifCAR) for graph-level anomaly detection.
The model combines the motif of one graph, the core subgraph containing the identification (category) information, and the contextual subgraph of another graph to produce a raw counterfactual graph.
MotifCAR can generate high-quality counterfactual graphs.
arXiv Detail & Related papers (2024-07-18T08:04:57Z)
- CLEAR: Generative Counterfactual Explanations on Graphs [60.30009215290265]
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
arXiv Detail & Related papers (2022-10-16T04:35:32Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
- Out-of-Sample Representation Learning for Multi-Relational Graphs [8.956321788625894]
We study the out-of-sample representation learning problem for non-attributed knowledge graphs.
We create benchmark datasets for this task, develop several models and baselines, and provide empirical analyses and comparisons of the proposed models and baselines.
arXiv Detail & Related papers (2020-04-28T00:53:01Z)
- A Survey of Adversarial Learning on Graphs [59.21341359399431]
We investigate and summarize the existing works on graph adversarial learning tasks.
Specifically, we survey and unify the existing works w.r.t. attack and defense in graph analysis tasks.
We emphasize the importance of related evaluation metrics, which we investigate and summarize comprehensively.
arXiv Detail & Related papers (2020-03-10T12:48:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.