e-CARE: a New Dataset for Exploring Explainable Causal Reasoning
- URL: http://arxiv.org/abs/2205.05849v1
- Date: Thu, 12 May 2022 02:41:48 GMT
- Title: e-CARE: a New Dataset for Exploring Explainable Causal Reasoning
- Authors: Li Du, Xiao Ding, Kai Xiong, Ting Liu, and Bing Qin
- Abstract summary: We present a human-annotated explainable CAusal REasoning dataset (e-CARE) with over 21K causal reasoning questions.
We show that generating valid explanations for causal facts still remains especially challenging for the state-of-the-art models.
- Score: 28.412572027774573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding causality has vital importance for various Natural Language
Processing (NLP) applications. Beyond the labeled instances, conceptual
explanations of the causality can provide deep understanding of the causal
facts to facilitate the causal reasoning process. However, such explanation
information still remains absent in existing causal reasoning resources. In
this paper, we fill this gap by presenting a human-annotated explainable CAusal
REasoning dataset (e-CARE), which contains over 21K causal reasoning questions,
together with natural language formed explanations of the causal questions.
Experimental results show that generating valid explanations for causal facts
still remains especially challenging for the state-of-the-art models, and the
explanation information can be helpful for promoting the accuracy and stability
of causal reasoning models.
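As a rough illustration of what one such question looks like, the sketch below represents a single causal reasoning item as a premise, an ask-for direction, two candidate hypotheses, a label, and a free-text conceptual explanation. The field names and the example content are illustrative assumptions, not the official e-CARE schema.

```python
# Illustrative sketch of one e-CARE-style item (field names and content are
# assumptions, not the dataset's official schema): a premise, two candidate
# hypotheses, the index of the causally related hypothesis, and a free-text
# conceptual explanation of why the causal fact holds.
from dataclasses import dataclass

@dataclass
class CausalQuestion:
    premise: str                 # the stated cause or effect
    ask_for: str                 # "cause" or "effect" -- which side must be picked
    hypotheses: tuple[str, str]  # two candidate completions of the causal fact
    label: int                   # index (0 or 1) of the correct hypothesis
    explanation: str             # conceptual explanation supporting the causal fact

example = CausalQuestion(
    premise="Tom put a piece of copper into nitric acid.",
    ask_for="effect",
    hypotheses=(
        "The copper dissolved and the solution turned blue.",
        "The copper stayed completely unchanged.",
    ),
    label=0,
    explanation="Copper reacts with nitric acid, producing a soluble copper salt.",
)

def is_correct(pred: int, item: CausalQuestion) -> bool:
    """Causal reasoning is scored as a binary choice between the two hypotheses."""
    return pred == item.label

print(is_correct(0, example))  # True
```

Under this framing, causal reasoning is evaluated as a binary choice between the two hypotheses, while the explanation generation task asks a model to produce the explanation text itself.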
Related papers
- Do Large Language Models Show Biases in Causal Learning? Insights from Contingency Judgment [0.1547863211792184]
Causal learning is the cognitive process of developing the capability of making causal inferences.
This process is prone to errors and biases, such as the illusion of causality.
This cognitive bias has been proposed to underlie many societal problems.
arXiv Detail & Related papers (2025-10-15T18:09:00Z)
- Reasoning-Grounded Natural Language Explanations for Language Models [2.7855886538423182]
We propose a large language model explainability technique for obtaining faithful natural language explanations.
When converted to a sequence of tokens, the outputs of the reasoning process can become part of the model context.
We show that the proposed use of reasoning can also improve the quality of the answers.
arXiv Detail & Related papers (2025-03-14T10:00:03Z)
- Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the interaction between world knowledge and logical reasoning.
We find that state-of-the-art large language models (LLMs) often rely on superficial generalizations.
We show that simple reformulations of the task can elicit more robust reasoning behavior.
arXiv Detail & Related papers (2024-10-31T12:48:58Z)
- CELLO: Causal Evaluation of Large Vision-Language Models [9.928321287432365]
Causal reasoning is fundamental to human intelligence and crucial for effective decision-making in real-world environments.
We introduce a fine-grained and unified definition of causality involving interactions between humans and objects.
We construct a novel dataset, CELLO, consisting of 14,094 causal questions across all four levels of causality.
arXiv Detail & Related papers (2024-06-27T12:34:52Z)
- Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
This research proposes a novel framework called Context Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA).
The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through Large Language Models.
The knowledge from ConceptNet enhances the performance of multiple causal reasoning tasks such as causal discovery, causal identification and counterfactual reasoning.
arXiv Detail & Related papers (2024-02-28T08:02:14Z)
- Fundamental Properties of Causal Entropy and Information Gain [0.22252684361733285]
Recent developments enable the quantification of causal control given a structural causal model (SCM).
These measures, named causal entropy and causal information gain, aim to address limitations of existing information-theoretic approaches for machine learning tasks in which causality plays a crucial role (a toy numerical sketch follows this entry).
arXiv Detail & Related papers (2024-02-02T11:55:57Z)
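To make the two quantities concrete, the toy computation below evaluates them for a two-variable SCM under a uniform intervention policy. The SCM, the policy, and the exact formulas are illustrative assumptions and only approximate the definitions studied in that paper.

```python
# Toy sketch (not the paper's implementation) of causal entropy and causal
# information gain, assuming a uniform intervention policy pi over do(X=x):
#   H_c(Y | X)  = E_{x ~ pi}[ H(Y | do(X = x)) ]
#   IG_c(Y | X) = H(Y) - H_c(Y | X)
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution given as a list."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Toy SCM: X ~ Bernoulli(0.5); Y copies X but is flipped with probability 0.1.
flip = 0.1
p_x1 = 0.5

def p_y_given_do_x(x):
    """Interventional distribution P(Y | do(X = x)) as [P(Y=0), P(Y=1)]."""
    p_y1 = (1 - flip) if x == 1 else flip
    return [1 - p_y1, p_y1]

# Marginal P(Y) under the observational distribution of X.
p_y1 = p_x1 * (1 - flip) + (1 - p_x1) * flip
h_y = entropy([1 - p_y1, p_y1])

# Causal entropy: average interventional entropy under a uniform policy over x.
h_c = 0.5 * entropy(p_y_given_do_x(0)) + 0.5 * entropy(p_y_given_do_x(1))
ig_c = h_y - h_c

print(f"H(Y) = {h_y:.3f} bits, H_c(Y|X) = {h_c:.3f} bits, IG_c = {ig_c:.3f} bits")
```

A positive causal information gain here reflects that intervening on X gives genuine control over Y, which is the kind of property such measures are designed to capture.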
- Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA [7.141288053123662]
Natural language explanation in visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences, increasing users' trust in black-box systems.
Existing post-hoc explanations are not always aligned with human logical inference, suffering from three issues: 1) deductive unsatisfiability, where the generated explanations do not logically lead to the answer; 2) factual inconsistency, where the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) semantic perturbation insensitivity, where the model cannot recognize the semantic changes caused by small perturbations.
arXiv Detail & Related papers (2023-12-21T05:51:55Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph (a schematic sketch of such an active loop follows this entry).
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
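The loop below is a schematic illustration, not the ABCI algorithm: it keeps a posterior over just two candidate causal models (X causes Y, or Y causes X) and repeatedly performs the intervention with the highest expected information gain. The hypothesis space, the likelihood model, and the acquisition criterion are simplifying assumptions.

```python
# Schematic sketch of active Bayesian causal discovery (assumptions throughout,
# not the ABCI implementation): maintain a posterior over two candidate causal
# models and pick the intervention whose outcome is expected to shrink
# posterior entropy the most.
import math, random

random.seed(0)
FLIP = 0.1  # noise: the child copies its parent, flipped with probability FLIP
MODELS = ("X->Y", "Y->X")

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def likelihood(model, action, outcome):
    """P(observed value of the other variable | do(action), model)."""
    var, val = action
    intervened_on_cause = (model == "X->Y" and var == "X") or (model == "Y->X" and var == "Y")
    if intervened_on_cause:           # the other variable follows the intervention
        return 1 - FLIP if outcome == val else FLIP
    return 0.5                        # otherwise the other variable is exogenous

def posterior_update(prior, action, outcome):
    joint = [prior[m] * likelihood(m, action, outcome) for m in MODELS]
    z = sum(joint)
    return dict(zip(MODELS, (j / z for j in joint)))

def expected_info_gain(prior, action):
    """Expected reduction in posterior entropy from performing the action."""
    h0 = entropy(list(prior.values()))
    gain = 0.0
    for outcome in (0, 1):
        p_outcome = sum(prior[m] * likelihood(m, action, outcome) for m in MODELS)
        post = posterior_update(prior, action, outcome)
        gain += p_outcome * (h0 - entropy(list(post.values())))
    return gain

def run_experiment(action, true_model="X->Y"):
    """Simulate the unknown ground-truth SCM under the chosen intervention."""
    var, val = action
    if (true_model == "X->Y") == (var == "X"):   # we intervened on the cause
        return val if random.random() > FLIP else 1 - val
    return random.randint(0, 1)                   # the other variable is exogenous

posterior = {m: 0.5 for m in MODELS}
actions = [("X", 1), ("Y", 1)]
for step in range(5):
    best = max(actions, key=lambda a: expected_info_gain(posterior, a))
    outcome = run_experiment(best)
    posterior = posterior_update(posterior, best, outcome)
    print(step, best, outcome, {m: round(p, 3) for m, p in posterior.items()})
```

After a few targeted interventions the posterior typically concentrates on the true model, which illustrates why actively chosen experiments can be more data-efficient than purely observational learning.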
- Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement [15.947501347927687]
We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models.
The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds.
We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.
arXiv Detail & Related papers (2022-05-23T19:39:51Z)
- Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI [2.7920304852537536]
This paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation.
Through a mixture of quantitative and qualitative methodologies, the presented study derives several main conclusions about the notion of a scientific explanation.
arXiv Detail & Related papers (2022-05-03T22:31:42Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Everything Has a Cause: Leveraging Causal Inference in Legal Text Analysis [62.44432226563088]
Causal inference is the process of capturing the cause-effect relationships among variables.
We propose a novel Graph-based Causal Inference framework, which builds causal graphs from fact descriptions without much human involvement.
We observe that the causal knowledge contained in GCI can be effectively injected into powerful neural networks for better performance and interpretability.
arXiv Detail & Related papers (2021-04-19T16:13:10Z)
- Thinking About Causation: A Causal Language with Epistemic Operators [58.720142291102135]
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.