Uncertainty in Federated Granger Causality: From Origins to Systemic Consequences
- URL: http://arxiv.org/abs/2602.13004v1
- Date: Fri, 13 Feb 2026 15:12:18 GMT
- Title: Uncertainty in Federated Granger Causality: From Origins to Systemic Consequences
- Authors: Ayush Mohanty, Nazal Mohamed, Nagi Gebraeel
- Abstract summary: Granger Causality (GC) provides a rigorous framework for learning causal structures from time-series data. Federated GC algorithms only yield deterministic point estimates of causality and neglect uncertainty. This paper establishes the first methodology for rigorously quantifying that uncertainty.
- Score: 3.122408196953971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Granger Causality (GC) provides a rigorous framework for learning causal structures from time-series data. Recent federated variants of GC have targeted distributed infrastructure applications (e.g., smart grids) with distributed clients that generate high-dimensional data bound by data-sovereignty constraints. However, Federated GC algorithms only yield deterministic point estimates of causality and neglect uncertainty. This paper establishes the first methodology for rigorously quantifying uncertainty and its propagation within federated GC frameworks. We systematically classify sources of uncertainty, explicitly differentiating aleatoric (data noise) from epistemic (model variability) effects. We derive closed-form recursions that model the evolution of uncertainty through client-server interactions and identify four novel cross-covariance components that couple data uncertainties with model parameter uncertainties across the federated architecture. We also define rigorous convergence conditions for these uncertainty recursions and obtain explicit steady-state variances for both server and client model parameters. Our convergence analysis demonstrates that steady-state variances depend exclusively on client data statistics, thus eliminating dependence on initial epistemic priors and enhancing robustness. Empirical evaluations on synthetic benchmarks and real-world industrial datasets demonstrate that explicitly characterizing uncertainty significantly improves the reliability and interpretability of federated causal inference.
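The abstract's core distinction, a point estimate of Granger causality versus the uncertainty around it, can be illustrated with a minimal sketch. This is not the paper's federated method: it is a single-client toy in which a lag-1 cross coefficient is estimated by OLS and a bootstrap spread serves as a crude proxy for epistemic (model) uncertainty. All variable names and the synthetic generating process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bivariate series in which x Granger-causes y:
# y_t depends on its own past and on x_{t-1}.
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def gc_coefficient(x, y):
    """OLS estimate of the lag-1 cross coefficient in y_t ~ y_{t-1} + x_{t-1}."""
    X = np.column_stack([y[:-1], x[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta[1]  # weight on x_{t-1}; nonzero suggests x Granger-causes y

point = gc_coefficient(x, y)  # deterministic point estimate

# Bootstrap resampling of regression rows as a rough stand-in for the
# epistemic (model-parameter) uncertainty the paper quantifies in closed form.
boot = []
n = T - 1
rows = np.column_stack([y[:-1], x[:-1]])
targets = y[1:]
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    beta, *_ = np.linalg.lstsq(rows[idx], targets[idx], rcond=None)
    boot.append(beta[1])

print(f"point estimate: {point:.3f}, bootstrap std: {np.std(boot):.3f}")
```

The point estimate alone (as in prior federated GC algorithms) hides the spread; reporting the bootstrap standard deviation alongside it is the simplest form of the uncertainty characterization the paper argues for.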
Related papers
- ProbFM: Probabilistic Time Series Foundation Model with Uncertainty Decomposition [0.12489632787815884]
Time Series Foundation Models (TSFMs) have emerged as a promising approach for zero-shot financial forecasting. Current approaches either rely on restrictive distributional assumptions, conflate different sources of uncertainty, or lack principled calibration mechanisms. We present a novel transformer-based probabilistic framework, ProbFM, that leverages Deep Evidential Regression (DER) to provide principled uncertainty quantification.
arXiv Detail & Related papers (2026-01-15T17:02:06Z) - Uncertainty-driven Embedding Convolution [16.523816971857787]
We propose Uncertainty-driven Embedding Convolution (UEC). UEC transforms deterministic embeddings into probabilistic ones in a post-hoc manner. It then computes adaptive ensemble weights based on embedding uncertainty.
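One common way to weight ensemble members by uncertainty, shown here as a hypothetical sketch rather than UEC's actual mechanism, is inverse-variance weighting: members with lower estimated variance contribute more to the fused embedding. The embeddings and variances below are made-up illustrative values.

```python
import numpy as np

# Two embedding models' outputs for the same item (illustrative values).
emb_a = np.array([0.2, 0.9, -0.4])
emb_b = np.array([0.1, 1.1, -0.2])
var_a, var_b = 0.04, 0.16  # per-model uncertainty estimates

# Inverse-variance weights, normalized to sum to 1.
w_a, w_b = 1 / var_a, 1 / var_b
total = w_a + w_b
w_a, w_b = w_a / total, w_b / total

fused = w_a * emb_a + w_b * emb_b
print(w_a, fused)  # the less uncertain model (A) dominates the fusion
```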
arXiv Detail & Related papers (2025-07-28T11:15:25Z) - Causality-Inspired Robustness for Nonlinear Models via Representation Learning [4.64479351797195]
Distributional robustness is a central goal of prediction algorithms due to the prevalent distribution shifts in real-world data. We propose a nonlinear method under a causal framework by incorporating recent developments in identifiable representation learning. To our best knowledge, this is the first causality-inspired robustness method with such a finite-radius robustness guarantee in nonlinear settings.
arXiv Detail & Related papers (2025-05-19T08:52:15Z) - dcFCI: Robust Causal Discovery Under Latent Confounding, Unfaithfulness, and Mixed Data [1.9797215742507548]
We introduce the first nonparametric score to assess a Partial Ancestral Graph's compatibility with observed data. We then propose data-compatible Fast Causal Inference (dcFCI) to jointly address latent confounding, empirical unfaithfulness, and mixed data types.
arXiv Detail & Related papers (2025-05-10T07:05:19Z) - SConU: Selective Conformal Uncertainty in Large Language Models [59.25881667640868]
We propose a novel approach termed Selective Conformal Uncertainty (SConU). We develop two conformal p-values that are instrumental in determining whether a given sample deviates from the uncertainty distribution of the calibration set at a specific manageable risk level. Our approach not only facilitates rigorous management of miscoverage rates across both single-domain and interdisciplinary contexts, but also enhances the efficiency of predictions.
arXiv Detail & Related papers (2025-04-19T03:01:45Z) - Uncertainty separation via ensemble quantile regression [23.667247644930708]
This paper introduces a novel and scalable framework for uncertainty estimation and separation. Our framework is scalable to large datasets and demonstrates superior performance on synthetic benchmarks.
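The idea of separating uncertainty with an ensemble of quantile estimates can be sketched in a toy setting, under assumptions that are mine, not the paper's: the spread between mean quantiles approximates aleatoric (data) uncertainty, while disagreement across bootstrap replicates approximates epistemic (estimation) uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: known noise scale, so the aleatoric spread is easy to sanity-check.
data = rng.normal(loc=2.0, scale=0.5, size=400)

# "Ensemble" of quantile estimates via bootstrap resampling.
q10, q90 = [], []
for _ in range(100):
    sample = rng.choice(data, size=data.size, replace=True)
    q10.append(np.quantile(sample, 0.10))
    q90.append(np.quantile(sample, 0.90))

aleatoric = np.mean(q90) - np.mean(q10)  # width of the data's own 10-90% band
epistemic = np.std(q10) + np.std(q90)    # disagreement across the ensemble
print(f"aleatoric ~ {aleatoric:.2f}, epistemic ~ {epistemic:.2f}")
```

With 400 samples the aleatoric band stays near its theoretical width (about 1.28 for N(2, 0.5)), while the epistemic term is small and would shrink further with more data, matching the usual behavior of the two uncertainty types.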
arXiv Detail & Related papers (2024-12-18T11:15:32Z) - Federated Causal Discovery from Heterogeneous Data [70.31070224690399]
We propose a novel FCD method attempting to accommodate arbitrary causal models and heterogeneous data.
These approaches involve constructing summary statistics as a proxy of the raw data to protect data privacy.
We conduct extensive experiments on synthetic and real datasets to show the efficacy of our method.
arXiv Detail & Related papers (2024-02-20T18:53:53Z) - Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z) - Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework [8.572441599469597]
We study high-confidence off-policy evaluation in the context of infinite-horizon Markov decision processes.
The objective is to establish a confidence interval (CI) for the target policy value using only offline data pre-collected from unknown behavior policies.
We show that our algorithm is sample-efficient, error-robust, and provably convergent even in non-linear function approximation settings.
arXiv Detail & Related papers (2023-09-23T06:35:44Z) - Variational Causal Networks: Approximate Bayesian Inference over Causal Structures [132.74509389517203]
We introduce a parametric variational family modelled by an autoregressive distribution over the space of discrete DAGs.
In experiments, we demonstrate that the proposed variational posterior is able to provide a good approximation of the true posterior.
arXiv Detail & Related papers (2021-06-14T17:52:49Z) - BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.