CELLO: Causal Evaluation of Large Vision-Language Models
- URL: http://arxiv.org/abs/2406.19131v1
- Date: Thu, 27 Jun 2024 12:34:52 GMT
- Title: CELLO: Causal Evaluation of Large Vision-Language Models
- Authors: Meiqi Chen, Bo Peng, Yan Zhang, Chaochao Lu
- Abstract summary: Causal reasoning is fundamental to human intelligence and crucial for effective decision-making in real-world environments.
We introduce a fine-grained and unified definition of causality involving interactions between humans and objects.
We construct a novel dataset, CELLO, consisting of 14,094 causal questions across all four levels of causality.
- Score: 9.928321287432365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal reasoning is fundamental to human intelligence and crucial for effective decision-making in real-world environments. Despite recent advancements in large vision-language models (LVLMs), their ability to comprehend causality remains unclear. Previous work typically focuses on commonsense causality between events and/or actions, which is insufficient for applications like embodied agents and lacks the explicitly defined causal graphs required for formal causal reasoning. To overcome these limitations, we introduce a fine-grained and unified definition of causality involving interactions between humans and/or objects. Building on the definition, we construct a novel dataset, CELLO, consisting of 14,094 causal questions across all four levels of causality: discovery, association, intervention, and counterfactual. This dataset surpasses traditional commonsense causality by including explicit causal graphs that detail the interactions between humans and objects. Extensive experiments on CELLO reveal that current LVLMs still struggle with causal reasoning tasks, but they can benefit significantly from our proposed CELLO-CoT, a causally inspired chain-of-thought prompting strategy. Both quantitative and qualitative analyses from this study provide valuable insights for future research. Our project page is at https://github.com/OpenCausaLab/CELLO.
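To make the setup concrete, here is a minimal sketch of how a CELLO-style item (an image, an explicit causal graph over human-object interactions, and a question at one of the four levels) and a causally inspired chain-of-thought prompt might be represented. The field names, graph encoding, and prompt steps are illustrative assumptions, not the dataset's actual schema or the exact CELLO-CoT prompt.

```python
from dataclasses import dataclass

# Hypothetical representation of a CELLO-style item. Field names and prompt
# wording are illustrative assumptions, not the dataset's actual schema or
# the paper's exact CELLO-CoT prompt.
@dataclass
class CausalQuestion:
    image_path: str
    causal_graph: list[tuple[str, str]]  # directed edges (cause -> effect)
    level: str                           # "discovery" | "association" | "intervention" | "counterfactual"
    question: str
    options: list[str]

def build_cot_prompt(item: CausalQuestion) -> str:
    """Assemble a causally inspired chain-of-thought prompt: first extract the
    relevant entities and causal edges, then reason at the requested level."""
    edges = "; ".join(f"{c} -> {e}" for c, e in item.causal_graph)
    steps = [
        "1. List the humans and objects visible in the image.",
        "2. Identify the causal relations among them (who/what acts on what).",
        f"3. Using the causal graph ({edges}), answer the {item.level}-level question step by step.",
        "4. Choose the best option and state it on the last line.",
    ]
    options = "\n".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(item.options))
    return f"Question: {item.question}\n{options}\n" + "\n".join(steps)

example = CausalQuestion(
    image_path="scene.jpg",
    causal_graph=[("person pulls leash", "dog moves forward")],
    level="counterfactual",
    question="If the person had not pulled the leash, would the dog have moved forward?",
    options=["Yes", "No"],
)
print(build_cot_prompt(example))
```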
Related papers
- Hypothesizing Missing Causal Variables with LLMs [55.28678224020973]
We formulate a novel task where the input is a partial causal graph with missing variables, and the output is a hypothesis about the missing variables to complete the partial graph.
We show the strong ability of LLMs to hypothesize the mediation variables between a cause and its effect.
We also observe the surprising result that some open-source models outperform the closed GPT-4 model (a minimal sketch of this task setup follows this entry).
arXiv Detail & Related papers (2024-09-04T10:37:44Z) - Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
- Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
This research proposes a novel framework called Context-Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA).
The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through Large Language Models.
The knowledge from ConceptNet enhances the performance of multiple causal reasoning tasks such as causal discovery, causal identification and counterfactual reasoning.
arXiv Detail & Related papers (2024-02-28T08:02:14Z) - Emergence and Causality in Complex Systems: A Survey on Causal Emergence
and Related Quantitative Studies [12.78006421209864]
Causal emergence theory employs measures of causality to quantify emergence.
Two key problems are addressed: quantifying causal emergence and identifying it in data.
We highlight that the architectures used for identifying causal emergence are shared by causal representation learning, causal model abstraction, and world model-based reinforcement learning.
arXiv Detail & Related papers (2023-12-28T04:20:46Z) - CausalDialogue: Modeling Utterance-level Causality in Conversations [83.03604651485327]
We have compiled and expanded upon a new dataset called CausalDialogue through crowd-sourcing.
This dataset includes multiple cause-effect pairs within a directed acyclic graph (DAG) structure.
We propose a causality-enhanced method called Exponential Maximum Average Treatment Effect (ExMATE) to enhance the impact of causality at the utterance level when training neural conversation models (a hedged sketch of the underlying idea follows this entry).
arXiv Detail & Related papers (2022-12-20T18:31:50Z) - Causal Discovery for Fairness [3.3861246056563616]
- Causal Discovery for Fairness [3.3861246056563616]
We show how different causal discovery approaches may result in different causal models and how even slight differences between causal models can have a significant impact on fairness/discrimination conclusions.
The main goal of this study is to highlight the importance of the causal discovery step in appropriately addressing fairness through causality.
arXiv Detail & Related papers (2022-06-14T08:40:40Z) - Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph (a toy sketch of the posterior-over-graphs idea follows this entry).
arXiv Detail & Related papers (2022-06-04T22:38:57Z) - Systematic Evaluation of Causal Discovery in Visual Model Based
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z) - To do or not to do: finding causal relations in smart homes [2.064612766965483]
This paper introduces a new way to learn causal models from a mixture of experiments on the environment and observational data.
The core of our method is the use of selected interventions; in particular, the learning procedure accounts for variables on which it is impossible to intervene.
We use our method on a smart home simulation, a use case where knowing causal relations paves the way towards explainable systems (a minimal sketch of the intervention-selection constraint follows this entry).
arXiv Detail & Related papers (2021-05-20T22:36:04Z) - ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
- ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with the following four types of questions in either an independent scenario or an interventional scenario.
We notice that pure neural models tend towards an associative strategy under their chance-level performance, whereas neuro-symbolic combinations struggle in backward-blocking reasoning (a toy Blicket-style induction sketch follows this entry).
arXiv Detail & Related papers (2021-03-26T02:42:38Z) - Towards Causal Representation Learning [96.110881654479]
- Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z)