Deep Causal Learning: Representation, Discovery and Inference
- URL: http://arxiv.org/abs/2211.03374v1
- Date: Mon, 7 Nov 2022 09:00:33 GMT
- Title: Deep Causal Learning: Representation, Discovery and Inference
- Authors: Zizhen Deng, Xiaolong Zheng, Hu Tian, and Daniel Dajun Zeng
- Abstract summary: This article comprehensively reviews how deep learning can contribute to causal learning.
We point out that deep causal learning is important for the theoretical extension and application expansion of causal science.
- Score: 4.667493820893912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal learning has attracted much attention in recent years because
causality reveals the essential relationships between things and indicates how
the world evolves. However, traditional causal learning methods face many
problems and bottlenecks, such as high-dimensional unstructured
variables, combinatorial optimization problems, unknown interventions,
unobserved confounders, selection bias and estimation bias. Deep causal
learning, that is, causal learning based on deep neural networks, brings new
insights for addressing these problems. While many deep learning-based causal
discovery and causal inference methods have been proposed, there is a lack of
reviews exploring the internal mechanism of deep learning to improve causal
learning. In this article, we comprehensively review how deep learning can
contribute to causal learning by addressing conventional challenges from three
aspects: representation, discovery, and inference. We point out that deep
causal learning is important for the theoretical extension and application
expansion of causal science and is also an indispensable part of general
artificial intelligence. We conclude the article with a summary of open issues
and potential directions for future work.
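The combinatorial-optimization bottleneck named in the abstract is a good illustration of what deep causal learning reframes: continuous-optimization discovery methods such as NOTEARS replace the discrete search over DAGs with a smooth acyclicity penalty h(W) = tr(e^(W∘W)) − d, which is zero exactly when the weighted adjacency matrix W contains no directed cycles. A minimal NumPy sketch of that penalty follows; it is an illustrative assumption drawn from the NOTEARS line of work, not code from this paper.

```python
import numpy as np

def acyclicity(W: np.ndarray) -> float:
    """NOTEARS acyclicity measure h(W) = tr(exp(W * W)) - d.

    h(W) == 0 iff the weighted directed graph W is acyclic;
    h(W) > 0 otherwise, and it is differentiable in W.
    """
    d = W.shape[0]
    M = W * W  # elementwise square makes every cycle contribution non-negative
    # Matrix exponential via a truncated power series (adequate for small d).
    E = np.eye(d)
    term = np.eye(d)
    for k in range(1, 20):
        term = term @ M / k
        E = E + term
    return float(np.trace(E) - d)

# A DAG (single edge 0 -> 1) scores zero; a 2-cycle scores positive.
dag = np.array([[0.0, 1.0], [0.0, 0.0]])
cyc = np.array([[0.0, 1.0], [1.0, 0.0]])
```

In NOTEARS-style methods this penalty is added to a least-squares or neural fitting loss and driven to zero with an augmented Lagrangian, turning structure learning into ordinary gradient-based training rather than a combinatorial search.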
Related papers
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- Causal reasoning in typical computer vision tasks [11.95181390654463]
Causal theory models the intrinsic causal structure unaffected by data bias and is effective in avoiding spurious correlations.
This paper aims to comprehensively review the existing causal methods in typical vision and vision-language tasks such as semantic segmentation, object detection, and image captioning.
Future roadmaps are also proposed, including facilitating the development of causal theory and its application in other complex scenes and systems.
arXiv Detail & Related papers (2023-07-26T07:01:57Z) - Causal Reinforcement Learning: A Survey [57.368108154871]
Reinforcement learning is an essential paradigm for solving sequential decision problems under uncertainty.
One of the main obstacles is that reinforcement learning agents lack a fundamental understanding of the world.
Causality offers a notable advantage as it can formalize knowledge in a systematic manner.
arXiv Detail & Related papers (2023-07-04T03:00:43Z) - Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm makes the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z) - Optimism in the Face of Adversity: Understanding and Improving Deep
Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z) - Off-the-shelf deep learning is not enough: parsimony, Bayes and
causality [0.8602553195689513]
We discuss opportunities and roadblocks to implementation of deep learning within materials science.
We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known.
arXiv Detail & Related papers (2020-05-04T15:16:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.