Causal reasoning in typical computer vision tasks
- URL: http://arxiv.org/abs/2307.13992v2
- Date: Mon, 31 Jul 2023 02:53:44 GMT
- Title: Causal reasoning in typical computer vision tasks
- Authors: Kexuan Zhang, Qiyu Sun, Chaoqiang Zhao and Yang Tang
- Abstract summary: Causal theory models the intrinsic causal structure unaffected by data bias and is effective in avoiding spurious correlations.
This paper aims to comprehensively review the existing causal methods in typical vision and vision-language tasks such as semantic segmentation, object detection, and image captioning.
Future roadmaps are also proposed, including facilitating the development of causal theory and its application in other complex scenes and systems.
- Score: 11.95181390654463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has revolutionized the field of artificial intelligence. Based
on the statistical correlations uncovered by deep learning-based methods,
computer vision has contributed to tremendous growth in areas like autonomous
driving and robotics. However, despite being the basis of deep learning, such
correlations are not stable and are susceptible to uncontrolled factors. Without
the guidance of prior knowledge, statistical correlations can easily degenerate
into spurious correlations induced by confounders. As a result, researchers
are now trying to enhance deep learning methods with causal theory. Causal
theory models the intrinsic causal structure unaffected by data bias and is
effective in avoiding spurious correlations. This paper aims to comprehensively
review the existing causal methods in typical vision and vision-language tasks
such as semantic segmentation, object detection, and image captioning. The
advantages of causality and the approaches for building causal paradigms will
be summarized. Future roadmaps are also proposed, including facilitating the
development of causal theory and its application in other complex scenes and
systems.
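To make the confounding problem concrete, here is a minimal, self-contained sketch (not taken from the paper; the variables and coefficients are invented): a hidden confounder Z drives both a feature X and a label Y, so X looks predictive of Y even though it has no causal effect on Y, and a simple backdoor adjustment that controls for Z removes the spurious slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z (e.g., scene context) drives both the feature X and the label Y.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)   # X is caused by Z only
y = 2.0 * z + rng.normal(size=n)   # Y is caused by Z only; X has no causal effect on Y

# Naive statistical association: X appears strongly predictive of Y (spurious).
naive_slope = np.cov(x, y)[0, 1] / np.var(x)

# Backdoor adjustment: regress Y on X and Z jointly; the coefficient on X collapses.
design = np.column_stack([x, z, np.ones(n)])
adjusted_slope = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive slope:    {naive_slope:+.3f}")    # roughly +0.8 (spurious association)
print(f"adjusted slope: {adjusted_slope:+.3f}")  # roughly 0.0 (no causal effect)
```

The causal methods reviewed in the paper aim to build this kind of adjustment, or its interventional analogue, into vision models instead of relying on the raw association.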
Related papers
- Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks [14.407025310553225]
Interpretability research takes counterfactual theories of causality for granted.
Counterfactual theories have problems that bias our findings in specific and predictable ways.
We discuss the implications of these challenges for interpretability researchers.
arXiv Detail & Related papers (2024-07-05T17:53:03Z) - Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z) - Emergence and Causality in Complex Systems: A Survey on Causal Emergence
- Emergence and Causality in Complex Systems: A Survey on Causal Emergence and Related Quantitative Studies [12.78006421209864]
Causal emergence theory employs measures of causality to quantify emergence.
Two key problems are addressed: quantifying causal emergence and identifying it in data.
We highlight that the architectures used for identifying causal emergence are shared by causal representation learning, causal model abstraction, and world model-based reinforcement learning.
arXiv Detail & Related papers (2023-12-28T04:20:46Z) - Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z) - Hierarchical Graph Neural Networks for Causal Discovery and Root Cause
- Hierarchical Graph Neural Networks for Causal Discovery and Root Cause Localization [52.72490784720227]
REASON consists of Topological Causal Discovery and Individual Causal Discovery.
The Topological Causal Discovery component aims to model the fault propagation in order to trace back to the root causes.
The Individual Causal Discovery component focuses on capturing abrupt change patterns of a single system entity.
arXiv Detail & Related papers (2023-02-03T20:17:45Z) - Systematic Evaluation of Causal Discovery in Visual Model Based
Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z) - To do or not to do: finding causal relations in smart homes [2.064612766965483]
This paper introduces a new way to learn causal models from a mixture of experiments on the environment and observational data.
The core of our method is the use of selected interventions; in particular, the learning takes into account variables on which it is impossible to intervene.
We use our method on a smart home simulation, a use case where knowing causal relations paves the way towards explainable systems.
arXiv Detail & Related papers (2021-05-20T22:36:04Z) - ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
- ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with four types of questions in either an independent scenario or an interventional scenario.
We notice that pure neural models fall back on an associative strategy, performing at chance level, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
arXiv Detail & Related papers (2021-03-26T02:42:38Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.