CRAB: Assessing the Strength of Causal Relationships Between Real-world
Events
- URL: http://arxiv.org/abs/2311.04284v1
- Date: Tue, 7 Nov 2023 19:00:44 GMT
- Authors: Angelika Romanou, Syrielle Montariol, Debjit Paul, Leo Laugier, Karl
Aberer, Antoine Bosselut
- Abstract summary: We present CRAB, a new Causal Reasoning Assessment Benchmark designed to evaluate causal understanding of events in real-world narratives.
We measure the performance of several large language models, demonstrating that most systems achieve poor performance on the task.
Motivated by classical causal principles, we analyze the causal structures of groups of events in CRAB, and find that models perform worse on causal reasoning when events are derived from complex causal structures.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding narratives requires reasoning about the cause-and-effect
relationships between events mentioned in the text. While existing foundation
models yield impressive results in many NLP tasks requiring reasoning, it is
unclear whether they understand the complexity of the underlying network of
causal relationships of events in narratives. In this work, we present CRAB, a
new Causal Reasoning Assessment Benchmark designed to evaluate causal
understanding of events in real-world narratives. CRAB contains fine-grained,
contextual causality annotations for ~2.7K pairs of real-world events that
describe various newsworthy event timelines (e.g., the acquisition of Twitter
by Elon Musk). Using CRAB, we measure the performance of several large language
models, demonstrating that most systems achieve poor performance on the task.
Motivated by classical causal principles, we also analyze the causal structures
of groups of events in CRAB, and find that models perform worse on causal
reasoning when events are derived from complex causal structures compared to
simple linear causal chains. We make our dataset and code available to the
research community.
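To illustrate the kind of evaluation CRAB enables, the sketch below scores a model's pairwise causal-strength predictions against gold annotations. The four-way label set, the prompt wording, and the trivial baseline are illustrative assumptions, not the benchmark's actual interface:

```python
# Minimal sketch of pairwise causal-strength evaluation in the style of CRAB.
# The label set and the toy data below are assumptions for illustration only.
LABELS = ["no", "weak", "moderate", "strong"]  # assumed causality classes

def build_prompt(cause: str, effect: str) -> str:
    """Format one event pair as a causal-strength classification query."""
    return (
        f"Event A: {cause}\nEvent B: {effect}\n"
        f"How strongly did Event A cause Event B? "
        f"Answer with one of: {', '.join(LABELS)}."
    )

def evaluate(pairs, predict):
    """Accuracy of `predict(prompt) -> label` over gold-annotated event pairs."""
    correct = 0
    for cause, effect, gold in pairs:
        if predict(build_prompt(cause, effect)) == gold:
            correct += 1
    return correct / len(pairs)

# Toy event pairs from a newsworthy timeline, with hypothetical gold labels.
pairs = [
    ("Musk acquires a 9% stake in Twitter", "Musk is offered a board seat", "strong"),
    ("Twitter adopts a poison pill", "Musk tweets a poll", "no"),
]
baseline = lambda prompt: "no"  # majority-class-style baseline
print(evaluate(pairs, baseline))  # 0.5
```

In practice `predict` would wrap a large language model call; the point of the benchmark is that such models fall well short of perfect accuracy on these pairs, especially when the events sit in complex causal structures.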
Related papers
- Failure Modes of LLMs for Causal Reasoning on Narratives [51.19592551510628]
We investigate the causal reasoning abilities of large language models (LLMs) through the representative problem of inferring causal relationships from narratives.
We find that even state-of-the-art language models rely on unreliable shortcuts, both in terms of the narrative presentation and their parametric knowledge.
arXiv Detail & Related papers (2024-10-31T12:48:58Z)
- CELLO: Causal Evaluation of Large Vision-Language Models [9.928321287432365]
Causal reasoning is fundamental to human intelligence and crucial for effective decision-making in real-world environments.
We introduce a fine-grained and unified definition of causality involving interactions between humans and objects.
We construct a novel dataset, CELLO, consisting of 14,094 causal questions across all four levels of causality.
arXiv Detail & Related papers (2024-06-27T12:34:52Z)
- Enhancing Event Causality Identification with Rationale and Structure-Aware Causal Question Answering [30.000134835133522]
Document-level Event Causality Identification (DECI) aims to identify causal relations between two events in documents.
Recent research tends to use pre-trained language models to generate the event causal relations.
We propose a multi-task learning framework to enhance event causality identification with rationale and structure-aware causal question answering.
arXiv Detail & Related papers (2024-03-17T07:41:58Z)
- Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs [61.796960984541464]
We present COM2 (COMplex COMmonsense), a new dataset created by sampling logical queries.
We verbalize them using handcrafted rules and large language models into multiple-choice and text generation questions.
Experiments show that language models trained on COM2 exhibit significant improvements in complex reasoning ability.
arXiv Detail & Related papers (2024-03-12T08:13:52Z)
- Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
This research proposes a novel architecture, the Context-Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA) framework.
The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through Large Language Models.
The knowledge from ConceptNet enhances the performance of multiple causal reasoning tasks such as causal discovery, causal identification and counterfactual reasoning.
arXiv Detail & Related papers (2024-02-28T08:02:14Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- COLA: Contextualized Commonsense Causal Reasoning from the Causal Inference Perspective [38.49046289133713]
This paper proposes a new task: detecting commonsense causation between two events in an event sequence (i.e., context).
We also design a zero-shot framework: COLA (Contextualized Commonsense Causality Reasoner) to solve the task from the causal inference perspective.
Our extensive experiments show that COLA can detect commonsense causality more accurately than baselines.
arXiv Detail & Related papers (2023-05-09T05:56:58Z)
- Causal schema induction for knowledge discovery [21.295680010103602]
We present Torquestra, a dataset of text-graph-schema units integrating temporal, event, and causal structures.
We benchmark our dataset on three knowledge discovery tasks, building and evaluating models for each.
Results show that systems that harness causal structure are effective at identifying texts sharing similar causal meaning components.
arXiv Detail & Related papers (2023-03-27T16:55:49Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Causal Inference Principles for Reasoning about Commonsense Causality [93.19149325083968]
Commonsense causality reasoning aims at identifying plausible causes and effects in natural language descriptions that are deemed reasonable by an average person.
Existing work usually relies heavily on deep language models and is potentially susceptible to confounding co-occurrences.
Motivated by classical causal principles, we articulate the central question of CCR and draw parallels between human subjects in observational studies and natural languages.
We propose a novel framework, ROCK, to Reason O(A)bout Commonsense K(C)ausality, which utilizes temporal signals as incidental supervision.
arXiv Detail & Related papers (2022-01-31T06:12:39Z)
- Everything Has a Cause: Leveraging Causal Inference in Legal Text Analysis [62.44432226563088]
Causal inference is the process of capturing cause-effect relationship among variables.
We propose a novel Graph-based Causal Inference framework, which builds causal graphs from fact descriptions without much human involvement.
We observe that the causal knowledge contained in GCI can be effectively injected into powerful neural networks for better performance and interpretability.
arXiv Detail & Related papers (2021-04-19T16:13:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.