Emerging Synergies in Causality and Deep Generative Models: A Survey
- URL: http://arxiv.org/abs/2301.12351v3
- Date: Thu, 14 Sep 2023 18:41:03 GMT
- Title: Emerging Synergies in Causality and Deep Generative Models: A Survey
- Authors: Guanglin Zhou and Shaoan Xie and Guangyuan Hao and Shiming Chen and
Biwei Huang and Xiwei Xu and Chen Wang and Liming Zhu and Lina Yao and Kun
Zhang
- Abstract summary: Deep generative models (DGMs) have proven adept in capturing complex data distributions but often fall short in generalization and interpretability.
Causality offers a structured lens to comprehend the mechanisms driving data generation and highlights the causal-effect dynamics inherent in these processes.
We elucidate the integration of causal principles within DGMs, investigate causal identification using DGMs, and navigate an emerging research frontier of causality in large-scale generative models.
- Score: 35.62192474181619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of artificial intelligence (AI), the quest to understand and
model data-generating processes (DGPs) is of paramount importance. Deep
generative models (DGMs) have proven adept in capturing complex data
distributions but often fall short in generalization and interpretability. On
the other hand, causality offers a structured lens to comprehend the mechanisms
driving data generation and highlights the causal-effect dynamics inherent in
these processes. While causality excels in interpretability and the ability to
extrapolate, it grapples with intricacies of high-dimensional spaces.
Recognizing the synergistic potential, we delve into the confluence of
causality and DGMs. We elucidate the integration of causal principles within
DGMs, investigate causal identification using DGMs, and navigate an emerging
research frontier of causality in large-scale generative models, particularly
generative large language models (LLMs). We offer insights into methodologies,
highlight open challenges, and suggest future directions, positioning our
comprehensive review as an essential guide in this swiftly emerging and
evolving area.
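To make the abstract's contrast concrete, the following is a minimal, hypothetical sketch (not taken from the survey) of a two-variable structural causal model X -> Y. It contrasts observational sampling of the data-generating process with sampling under an intervention do(X = x); the variable names, linear mechanism, and noise scales are illustrative assumptions only.

```python
import random

def sample_observational(n, seed=0):
    """Draw (x, y) pairs from an illustrative SCM: X ~ N(0,1), Y = 2X + noise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)             # exogenous cause X
        y = 2.0 * x + rng.gauss(0.0, 0.1)   # causal mechanism f_Y(X, U_Y)
        data.append((x, y))
    return data

def sample_interventional(n, x_fixed, seed=0):
    """Draw Y under do(X = x_fixed): the mechanism for Y is unchanged,
    but X is set by the intervention rather than its own distribution."""
    rng = random.Random(seed)
    return [2.0 * x_fixed + rng.gauss(0.0, 0.1) for _ in range(n)]

obs = sample_observational(1000)
interv = sample_interventional(1000, x_fixed=1.0)
mean_y_do = sum(interv) / len(interv)
print(f"mean Y under do(X=1): {mean_y_do:.2f}")  # close to 2.0
```

A purely distribution-matching generative model fits only the observational data; an SCM additionally supports the interventional query, which is the extrapolation ability the survey attributes to causality.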
Related papers
- A Survey of Out-of-distribution Generalization for Graph Machine Learning from a Causal View [5.651037052334014]
Graph machine learning (GML) has been successfully applied across a wide range of tasks.
GML faces significant challenges in generalizing over out-of-distribution (OOD) data.
Recent advancements have underscored the crucial role of causality-driven approaches in overcoming these generalization challenges.
arXiv Detail & Related papers (2024-09-15T20:41:18Z)
- Deep Generative Models through the Lens of the Manifold Hypothesis: A Survey and New Connections [15.191007332508198]
We show that numerical instability of likelihoods in high ambient dimensions is unavoidable when modelling data with low intrinsic dimension.
We then show that DGMs on learned representations of autoencoders can be interpreted as approximately minimizing Wasserstein distance.
arXiv Detail & Related papers (2024-04-03T18:00:00Z)
- On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z)
- The Essential Role of Causality in Foundation World Models for Embodied AI [102.75402420915965]
Embodied AI agents will require the ability to perform new tasks in many different real-world environments.
Current foundation models fail to accurately model physical interactions and are therefore insufficient for Embodied AI.
The study of causality lends itself to the construction of veridical world models.
arXiv Detail & Related papers (2024-02-06T17:15:33Z)
- Targeted Reduction of Causal Models [55.11778726095353]
Causal Representation Learning offers a promising avenue to uncover interpretable causal patterns in simulations.
We introduce Targeted Causal Reduction (TCR), a method for condensing complex intervenable models into a concise set of causal factors.
Its ability to generate interpretable high-level explanations from complex models is demonstrated on toy and mechanical systems.
arXiv Detail & Related papers (2023-11-30T15:46:22Z)
- Causal machine learning for single-cell genomics [94.28105176231739]
We discuss the application of machine learning techniques to single-cell genomics and their challenges.
We first present the model that underlies most current causal approaches to single-cell biology.
We then identify open problems in the application of causal approaches to single-cell data.
arXiv Detail & Related papers (2023-10-23T13:35:24Z)
- From Identifiable Causal Representations to Controllable Counterfactual Generation: A Survey on Causal Generative Modeling [17.074858228123706]
We focus on fundamental theory, methodology, drawbacks, datasets, and metrics.
We cover applications of causal generative models in fairness, privacy, out-of-distribution generalization, precision medicine, and biological sciences.
arXiv Detail & Related papers (2023-10-17T05:45:32Z)
- On the causality-preservation capabilities of generative modelling [0.0]
We study the causal preservation capabilities of GANs and whether the produced synthetic data can reliably be used to answer causal questions.
This is done by performing causal analyses on the synthetic data, produced by a GAN, under increasingly lenient assumptions.
arXiv Detail & Related papers (2023-01-03T14:09:15Z)
- Understanding Overparameterization in Generative Adversarial Networks [56.57403335510056]
Generative Adversarial Networks (GANs) are trained by solving non-concave min-max optimization problems.
Theory has shown the importance of gradient descent-ascent (GDA) in reaching globally optimal solutions.
We show that in an overparameterized GAN with a $1$-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-concave min-max problem.
arXiv Detail & Related papers (2021-04-12T16:23:37Z)