Generating Commonsense Explanation by Extracting Bridge Concepts from
Reasoning Paths
- URL: http://arxiv.org/abs/2009.11753v1
- Date: Thu, 24 Sep 2020 15:27:20 GMT
- Title: Generating Commonsense Explanation by Extracting Bridge Concepts from
Reasoning Paths
- Authors: Haozhe Ji, Pei Ke, Shaohan Huang, Furu Wei, Minlie Huang
- Abstract summary: We propose a method that first extracts the underlying concepts that serve as bridges in the reasoning chain.
To facilitate the reasoning process, we utilize external commonsense knowledge to build the connection between a statement and the bridge concepts.
We design a bridge concept extraction model that first scores the triples, routes the paths in the subgraph, and further selects bridge concepts with weak supervision.
- Score: 128.13034600968257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonsense explanation generation aims to empower the machine's sense-making
capability by generating plausible explanations for statements that violate
commonsense. While this task is easy for humans, machines still struggle to
generate reasonable and informative explanations. In this work, we propose a
method that first extracts the underlying concepts that serve as
\textit{bridges} in the reasoning chain and then integrates these concepts to
generate the final explanation. To facilitate the reasoning process, we utilize
external commonsense knowledge to build the connection between a statement and
the bridge concepts by extracting and pruning multi-hop paths to build a
subgraph. We design a bridge concept extraction model that first scores the
triples, routes the paths in the subgraph, and further selects bridge concepts
with weak supervision at both the triple level and the concept level. We
conduct experiments on the commonsense explanation generation task and our
model outperforms the state-of-the-art baselines in both automatic and human
evaluation.
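The pipeline described in the abstract (building a subgraph from extracted and pruned multi-hop paths, scoring triples, routing paths, and selecting bridge concepts) can be sketched roughly as follows. This is a minimal illustration only: the ConceptNet-style graph, the `triple_scorer` placeholder, and every function and parameter name are assumptions made for exposition, not the authors' released implementation.

```python
# Hypothetical sketch of bridge-concept extraction over a commonsense knowledge
# graph (assumed here to be a ConceptNet-style networkx.DiGraph with a
# "relation" edge attribute). All names and defaults are illustrative.
import networkx as nx


def build_subgraph(kg: nx.DiGraph, statement_concepts, max_hops: int = 2) -> nx.DiGraph:
    """Extract multi-hop paths between concepts mentioned in the statement and
    keep their edges, yielding a pruned subgraph of candidate bridge concepts."""
    sub = nx.DiGraph()
    for src in statement_concepts:
        for dst in statement_concepts:
            if src == dst or src not in kg or dst not in kg:
                continue
            for path in nx.all_simple_paths(kg, src, dst, cutoff=max_hops):
                for u, v in zip(path, path[1:]):
                    sub.add_edge(u, v, **(kg.get_edge_data(u, v) or {}))
    return sub


def score_triples(sub: nx.DiGraph, triple_scorer) -> dict:
    """Score every (head, relation, tail) triple in the subgraph; triple_scorer
    stands in for a learned triple-level scoring model."""
    return {
        (u, v): triple_scorer(u, sub[u][v].get("relation", "RelatedTo"), v)
        for u, v in sub.edges
    }


def select_bridge_concepts(sub, triple_scores, statement_concepts, top_k: int = 3):
    """Aggregate triple scores over each non-statement concept's incident edges
    (a crude stand-in for path routing) and keep the top-k concepts as bridges."""
    concept_score = {}
    for node in sub.nodes:
        if node in statement_concepts:
            continue
        incident = [s for (u, v), s in triple_scores.items() if node in (u, v)]
        if incident:
            concept_score[node] = sum(incident) / len(incident)
    return sorted(concept_score, key=concept_score.get, reverse=True)[:top_k]
```

In the paper itself, weak supervision at both the triple level and the concept level trains the scoring and selection components that the placeholders above only gesture at, and the selected bridge concepts are then integrated with the statement by a generation model to produce the final explanation.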
Related papers
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z)
- DiConStruct: Causal Concept-based Explanations through Black-Box Distillation [9.735426765564474]
We present DiConStruct, an explanation method that is both concept-based and causal.
Our explainer works as a distillation model to any black-box machine learning model by approximating its predictions while producing the respective explanations.
arXiv Detail & Related papers (2024-01-16T17:54:02Z)
- Interpretable Neural-Symbolic Concept Reasoning [7.1904050674791185]
Concept-based models aim to address this issue by learning tasks based on a set of human-understandable concepts.
We propose the Deep Concept Reasoner (DCR), the first interpretable concept-based model that builds upon concept embeddings.
arXiv Detail & Related papers (2023-04-27T09:58:15Z)
- Automatic Concept Extraction for Concept Bottleneck-based Video Classification [58.11884357803544]
We present an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification.
Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
arXiv Detail & Related papers (2022-06-21T06:22:35Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAV).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Abstract Reasoning via Logic-guided Generation [65.92805601327649]
Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.
This paper aims to design a framework for the latter approach and bridge the gap between artificial and human intelligence.
We propose logic-guided generation (LoGe), a novel generative DNN framework that reduces abstract reasoning to an optimization problem in propositional logic.
arXiv Detail & Related papers (2021-07-22T07:28:24Z)
- Rationalization through Concepts [27.207067974031805]
We present a novel self-interpretable model called ConRAT.
Inspired by how human explanations for high-level decisions are often based on key concepts, ConRAT infers which ones are described in the document.
Two regularizers drive ConRAT to build interpretable concepts.
arXiv Detail & Related papers (2021-05-11T07:46:48Z)
- What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes [30.056732656973637]
We present a novel form of explanation for Reinforcement Learning, based around the notion of intended outcome.
These explanations describe the outcome an agent is trying to achieve by its actions.
We provide a simple proof that general methods for post-hoc explanations of this nature are impossible in traditional reinforcement learning.
arXiv Detail & Related papers (2020-11-10T12:05:08Z)
- Commonsense Evidence Generation and Injection in Reading Comprehension [57.31927095547153]
We propose a Commonsense Evidence Generation and Injection framework in reading comprehension, named CEGI.
The framework injects two kinds of auxiliary commonsense evidence into the reading comprehension process to equip the machine with the ability to reason rationally.
Experiments on the CosmosQA dataset demonstrate that the proposed CEGI model outperforms the current state-of-the-art approaches.
arXiv Detail & Related papers (2020-05-11T16:31:08Z)