Learning General Causal Structures with Hidden Dynamic Process for Climate Analysis
- URL: http://arxiv.org/abs/2501.12500v2
- Date: Thu, 09 Oct 2025 07:34:57 GMT
- Title: Learning General Causal Structures with Hidden Dynamic Process for Climate Analysis
- Authors: Minghao Fu, Biwei Huang, Zijian Li, Yujia Zheng, Ignavier Ng, Guangyi Chen, Yingyao Hu, Kun Zhang
- Abstract summary: We introduce a unified framework that jointly uncovers (i) causal relations among observed variables and (ii) latent driving forces together with their interactions. We propose CaDRe (Causal Discovery and Representation learning), a time-series generative model with structural constraints that integrates CRL and causal discovery. On real-world climate datasets, CaDRe not only delivers competitive forecasting accuracy but also recovers visualized causal graphs aligned with domain expertise.
- Score: 39.69577035318778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding climate dynamics requires going beyond correlations in observational data to uncover their underlying causal process. Latent drivers, such as atmospheric processes, play a critical role in temporal dynamics, while direct causal influences also exist among geographically proximate observed variables. Traditional Causal Representation Learning (CRL) typically focuses on latent factors but overlooks such observable-to-observable causal relations, limiting its applicability to climate analysis. In this paper, we introduce a unified framework that jointly uncovers (i) causal relations among observed variables and (ii) latent driving forces together with their interactions. We establish conditions under which both the hidden dynamic processes and the causal structure among observed variables are simultaneously identifiable from time-series data. Remarkably, our guarantees hold even in the nonparametric setting, leveraging contextual information to recover latent variables and causal relations. Building on these insights, we propose CaDRe (Causal Discovery and Representation learning), a time-series generative model with structural constraints that integrates CRL and causal discovery. Experiments on synthetic datasets validate our theoretical results. On real-world climate datasets, CaDRe not only delivers competitive forecasting accuracy but also recovers visualized causal graphs aligned with domain expertise, thereby offering interpretable insights into climate systems.
Related papers
- Temporal Latent Variable Structural Causal Model for Causal Discovery under External Interferences [53.308122815325326]
We introduce latent variables to represent unobserved factors that affect the observed data. Specifically, to capture the causal strength and adjacency information, we propose a new temporal latent variable structural causal model. Considering that expert knowledge can provide information about unknown interferences in certain scenarios, we develop a method that facilitates the incorporation of prior knowledge into parameter learning.
arXiv Detail & Related papers (2025-11-13T07:10:10Z)
- Bridging Prediction and Attribution: Identifying Forward and Backward Causal Influence Ranges Using Assimilative Causal Inference [7.915816961228985]
Causal inference identifies cause-and-effect relationships between variables. A recently developed method, assimilative causal inference (ACI), integrates observations with dynamical models. ACI advances the detection of instantaneous causal relationships and the intermittent reversal of causal roles over time.
arXiv Detail & Related papers (2025-10-24T02:04:56Z)
- A Generative Framework for Probabilistic, Spatiotemporally Coherent Downscaling of Climate Simulation [18.881422165965017]
We present a novel generative framework that uses a score-based diffusion model trained on high-resolution reanalysis data to capture the statistical properties of local weather dynamics. We demonstrate that the model generates spatially and temporally coherent weather dynamics that align with global climate output.
arXiv Detail & Related papers (2024-12-19T19:47:35Z)
- A Practical Approach to Causal Inference over Time [17.660953125689105]
We define causal interventions and their effects over time on discrete-time stochastic processes (DSPs).
We show under which conditions the equilibrium states of a DSP, both before and after a causal intervention, can be captured by a structural causal model (SCM).
The resulting causal VAR framework allows us to perform causal inference over time from observational time series data.
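The causal VAR idea above can be illustrated with a generic sketch (an illustrative toy, not the paper's implementation): fit a VAR(1) model by least squares and read a lagged causal graph off the thresholded coefficient matrix. The variable count, noise level, and threshold are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth lagged coefficients of a 3-variable VAR(1):
# x_t = A_true @ x_{t-1} + noise; A_true[i, j] != 0 means
# variable j at time t-1 causally influences variable i at time t.
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.4, 0.5, 0.0],
                   [0.0, 0.4, 0.5]])

T = 5000
X = np.zeros((T, 3))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(3)

# Least-squares VAR(1) fit: solve past @ B = present, so A_hat = B.T.
past, present = X[:-1], X[1:]
B, *_ = np.linalg.lstsq(past, present, rcond=None)
A_hat = B.T

# Lagged causal graph: keep coefficients clearly above the noise floor.
G = (np.abs(A_hat) > 0.2).astype(int)
print(G)  # matches the support of A_true
```

With enough samples, the estimated coefficients concentrate around the truth, so a simple threshold recovers the lagged adjacency structure; real data would call for proper lag selection and significance testing.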
arXiv Detail & Related papers (2024-10-14T13:45:20Z)
- Causal Representation Learning in Temporal Data via Single-Parent Decoding [66.34294989334728]
Scientific research often seeks to understand the causal structure underlying high-level variables in a system.
Scientists typically collect low-level measurements, such as geographically distributed temperature readings.
We propose a differentiable method, Causal Discovery with Single-parent Decoding, that simultaneously learns the underlying latents and a causal graph over them.
arXiv Detail & Related papers (2024-10-09T15:57:50Z)
- Causal Temporal Representation Learning with Nonstationary Sparse Transition [22.6420431022419]
Causal Temporal Representation Learning (Ctrl) methods aim to identify the temporal causal dynamics of complex nonstationary temporal sequences.
This work adopts a sparse transition assumption, aligned with intuitive human understanding, and presents identifiability results from a theoretical perspective.
We introduce a novel framework, Causal Temporal Representation Learning with Nonstationary Sparse Transition (CtrlNS), designed to leverage the constraints on transition sparsity.
arXiv Detail & Related papers (2024-09-05T00:38:27Z)
- Local Causal Structure Learning in the Presence of Latent Variables [16.88791886307876]
We present a principled method for determining whether a variable is a direct cause or effect of a target.
Experimental results on both synthetic and real-world data validate the effectiveness and efficiency of our approach.
arXiv Detail & Related papers (2024-05-25T13:31:05Z)
- On the Identification of Temporally Causal Representation with Instantaneous Dependence [50.14432597910128]
Temporally causal representation learning aims to identify the latent causal process from time series observations.
Most methods require the assumption that the latent causal processes do not have instantaneous relations.
We propose IDOL, an IDentification framework for instantaneOus Latent dynamics.
arXiv Detail & Related papers (2024-05-24T08:08:05Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data. One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Nonlinearity, Feedback and Uniform Consistency in Causal Structural Learning [0.8158530638728501]
Causal Discovery aims to find automated search methods for learning causal structures from observational data.
This thesis focuses on two questions in causal discovery: (i) providing an alternative definition of k-Triangle Faithfulness that is weaker than Strong Faithfulness when applied to the Gaussian family of distributions, and (ii) establishing results under the assumption that this modified version of Strong Faithfulness holds.
arXiv Detail & Related papers (2023-08-15T01:23:42Z)
- Pitfalls of Climate Network Construction: A Statistical Perspective [13.623860700196625]
We simulate time-dependent isotropic random fields on the sphere and apply common network construction techniques.
We find several ways in which the uncertainty stemming from the estimation procedure has a major impact on network characteristics.
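This pitfall admits a small numerical illustration (a hedged toy, not the paper's spherical random-field simulation): even when all grid-point series are mutually independent, autocorrelation combined with a short record makes a fixed correlation threshold produce spurious network edges. The node count, record length, AR coefficient, and threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, T = 50, 200  # short record, as is typical of climate data

# Independent AR(1) series at each node: the true network has no edges.
phi = 0.7
X = np.zeros((T, n_nodes))
for t in range(1, T):
    X[t] = phi * X[t - 1] + rng.standard_normal(n_nodes)

# Common construction: threshold the empirical correlation matrix.
C = np.corrcoef(X.T)
np.fill_diagonal(C, 0.0)
edges = np.abs(C) > 0.25  # a typical fixed-threshold choice

n_spurious = edges.sum() // 2
print(n_spurious)  # > 0: finite-sample noise alone creates edges
```

Because autocorrelation inflates the sampling variance of the empirical correlations, a nontrivial fraction of node pairs exceeds the threshold purely by chance, distorting degree and other network characteristics.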
arXiv Detail & Related papers (2022-11-05T11:59:55Z)
- Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
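What "identified up to trivial permutation and scaling" permits can be checked with a short numerical sketch (illustrative only, not the proposed method): recovered latents that differ from the truth solely by reordering and componentwise rescaling align perfectly under absolute correlation.

```python
import numpy as np

rng = np.random.default_rng(2)

# True latents and a "recovered" version that differs only by a
# permutation and per-component scaling -- exactly the ambiguity
# that this kind of identifiability result allows.
Z = rng.standard_normal((1000, 3))
perm = [2, 0, 1]
scale = np.array([1.5, -0.7, 2.0])
Z_hat = Z[:, perm] * scale

# Align by absolute correlation: each recovered component matches
# exactly one true component with |corr| = 1.
corr = np.abs(np.corrcoef(Z_hat.T, Z.T)[:3, 3:])
matched = corr.argmax(axis=1)
print(matched)  # recovers the permutation [2, 0, 1]
```

Any stronger deviation (e.g. a rotation mixing components) would break the one-to-one matching, which is why "up to permutation and scaling" is the practically useful notion of recovery here.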
arXiv Detail & Related papers (2022-08-30T11:12:59Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods that formalize the goal of recovering independent latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Causal Inference in Geoscience and Remote Sensing from Observational Data [9.800027003240674]
We try to estimate the correct direction of causation using a finite set of empirical data.
We illustrate performance in a collection of 28 geoscience causal inference problems.
The criterion achieves state-of-the-art detection rates in all cases and is generally robust to noise sources and distortions.
arXiv Detail & Related papers (2020-12-07T22:56:55Z)
- On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.