Federated Causal Representation Learning in State-Space Systems for Decentralized Counterfactual Reasoning
- URL: http://arxiv.org/abs/2602.19414v1
- Date: Mon, 23 Feb 2026 01:12:21 GMT
- Title: Federated Causal Representation Learning in State-Space Systems for Decentralized Counterfactual Reasoning
- Authors: Nazal Mohamed, Ayush Mohanty, Nagi Gebraeel
- Abstract summary: Networks of interdependent industrial assets (clients) are tightly coupled through physical processes and control inputs. How would the output of one client change if another client were operated differently? This is difficult to answer because client-specific data are high-dimensional and private, making centralization of raw data infeasible. We propose a federated framework for causal representation learning in state-space systems.
- Score: 3.122408196953971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Networks of interdependent industrial assets (clients) are tightly coupled through physical processes and control inputs, raising a key question: how would the output of one client change if another client were operated differently? This is difficult to answer because client-specific data are high-dimensional and private, making centralization of raw data infeasible. Each client also maintains proprietary local models that cannot be modified. We propose a federated framework for causal representation learning in state-space systems that captures interdependencies among clients under these constraints. Each client maps high-dimensional observations into low-dimensional latent states that disentangle intrinsic dynamics from control-driven influences. A central server estimates the global state-transition and control structure. This enables decentralized counterfactual reasoning, where clients predict how outputs would change under alternative control inputs at others while exchanging only compact latent states. We prove convergence to a centralized oracle and provide privacy guarantees. Our experiments demonstrate scalability and accurate cross-client counterfactual inference on synthetic and real-world industrial control system datasets.
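The paper's implementation is not reproduced here. As a minimal toy sketch of the counterfactual question in the abstract, the snippet below assumes a linear latent state-space model x_{t+1} = A x_t + B u_t over stacked per-client latent blocks (the dimensions, matrices, and client indices are all invented) and measures how operating one client with a different control input shifts another client's latent state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 clients, each with a 2-dimensional latent state.
# The server holds the global transition matrix A and control matrix B;
# clients only exchange compact latent states, never raw observations.
n_clients, d = 3, 2
A = 0.9 * np.eye(n_clients * d) + 0.05 * rng.standard_normal((n_clients * d, n_clients * d))
B = 0.1 * rng.standard_normal((n_clients * d, n_clients))

def rollout(x0, controls):
    """Propagate the stacked latent state under a sequence of control inputs."""
    x = x0.copy()
    for u in controls:
        x = A @ x + B @ u
    return x

x0 = rng.standard_normal(n_clients * d)
T = 10
u_factual = [np.zeros(n_clients) for _ in range(T)]

# Counterfactual: client 1 applies a different control input at every step.
u_counter = [u.copy() for u in u_factual]
for u in u_counter:
    u[1] = 0.5

x_f = rollout(x0, u_factual)
x_cf = rollout(x0, u_counter)

# Effect on client 2's latent block of operating client 1 differently.
effect = x_cf[2 * d:3 * d] - x_f[2 * d:3 * d]
print("counterfactual shift at client 2:", effect)
```

Because both rollouts start from the same initial state, the difference isolates the influence of client 1's alternative control on client 2 through the coupled dynamics.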
Related papers
- Learning Unknown Interdependencies for Decentralized Root Cause Analysis in Nonlinear Dynamical Systems [3.122408196953971]
Root cause analysis (RCA) in networked industrial systems is difficult due to unknown and dynamically evolving interdependencies among geographically distributed clients. This paper presents a federated cross-client interdependency learning methodology for feature-partitioned, nonlinear time-series data. We establish theoretical convergence guarantees and validate our approach on extensive simulations and a real-world industrial cybersecurity dataset.
arXiv Detail & Related papers (2026-02-25T14:05:38Z) - Towards Federated Clustering: A Client-wise Private Graph Aggregation Framework [57.04850867402913]
Federated clustering addresses the challenge of extracting patterns from decentralized, unlabeled data. We propose Structural Privacy-Preserving Federated Graph Clustering (SPP-FGC), a novel algorithm that innovatively leverages local structural graphs as the primary medium for privacy-preserving knowledge sharing. Our framework achieves state-of-the-art performance, improving clustering accuracy by up to 10% (NMI) over federated baselines while maintaining provable privacy guarantees.
arXiv Detail & Related papers (2025-11-14T03:05:22Z) - ZORRO: Zero-Knowledge Robustness and Privacy for Split Learning (Full Version) [58.595691399741646]
Split Learning (SL) is a distributed learning approach that enables resource-constrained clients to collaboratively train deep neural networks (DNNs). This setup enables SL to leverage server capacities without sharing data, making it highly effective in resource-constrained environments dealing with sensitive data. We present ZORRO, a private, verifiable, and robust SL defense scheme.
arXiv Detail & Related papers (2025-09-11T18:44:09Z) - Don't Reach for the Stars: Rethinking Topology for Resilient Federated Learning [1.3270838622986498]
Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy by keeping data local. Traditional FL approaches rely on a centralized, star-shaped topology, where a central server aggregates model updates from clients. We propose a decentralized, peer-to-peer (P2P) FL framework to enable each client to identify and aggregate a personalized set of trustworthy and beneficial updates.
arXiv Detail & Related papers (2025-08-07T10:10:37Z) - Federated Granger Causality Learning for Interdependent Clients with State Space Representation [0.6499759302108926]
We develop a federated approach to learning Granger causality. We propose augmenting the client models with the Granger causality information learned by the server. We also study the convergence of the framework to a centralized oracle model.
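As a hedged illustration of the Granger-causality idea underlying this related paper (not the authors' federated algorithm), the snippet below runs the classic residual-comparison test on two synthetic series: if adding one client's lagged values sharply reduces the residual error when predicting another client's series, the first is said to Granger-cause the second. The series, lag order, and coefficients are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-client series: y depends on lagged x, so x Granger-causes y.
T = 500
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def lag_matrix(*series):
    """Stack one-step lags of the given series as regressors."""
    return np.column_stack([s[:-1] for s in series])

target = y[1:]

# Restricted model: predict y from its own past only.
r, *_ = np.linalg.lstsq(lag_matrix(y), target, rcond=None)
rss_r = np.sum((target - lag_matrix(y) @ r) ** 2)

# Unrestricted model: additionally include x's past.
u, *_ = np.linalg.lstsq(lag_matrix(y, x), target, rcond=None)
rss_u = np.sum((target - lag_matrix(y, x) @ u) ** 2)

# A large drop in residual sum of squares suggests x Granger-causes y.
print(f"RSS restricted={rss_r:.1f}, unrestricted={rss_u:.1f}")
```

In a federated variant, the point of the paper's state-space representation is that such dependence can be estimated from compact latent states rather than raw series.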
arXiv Detail & Related papers (2025-01-23T18:04:21Z) - TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning [13.144501509175985]
We propose a TRust-Aware clIent scheduLing mechanism called TRAIL, which assesses client states and contributions. We focus on a semi-decentralized FL framework where edge servers and clients train a shared global model using unreliable intra-cluster model aggregation and inter-cluster model consensus. Experiments conducted on real-world datasets demonstrate that TRAIL outperforms state-of-the-art baselines, achieving an improvement of 8.7% in test accuracy and a reduction of 15.3% in training loss.
arXiv Detail & Related papers (2024-12-16T05:02:50Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a parameter-efficient federated anomaly detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
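The paper's online distillation setup is not reproduced here; the following minimal sketch only illustrates a contrastive (InfoNCE-style) objective over shared representations, where matching rows from two clients are treated as positive pairs. The batch size, dimensionality, and temperature are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(z_a, z_b, tau=0.1):
    """InfoNCE-style loss: matching rows of z_a and z_b are positive pairs."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau                 # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))        # diagonal entries are positives

# Two clients' representations of the same batch: aligned vs. shuffled.
z = rng.standard_normal((32, 16))
aligned = info_nce(z, z + 0.01 * rng.standard_normal((32, 16)))
shuffled = info_nce(z, rng.permutation(z))
print(f"aligned loss={aligned:.3f}, shuffled loss={shuffled:.3f}")
```

Minimizing such a loss pulls the two clients' representations of the same sample together without exchanging raw data, which is the intuition behind collaborating via representation sharing.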
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-iid distributed data results in a deflected local optimum.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z) - Decentralized Federated Averaging [17.63112147669365]
Federated averaging (FedAvg) is a communication-efficient algorithm for distributed training with an enormous number of clients.
We study the decentralized FedAvg with momentum (DFedAvgM), which is implemented on clients that are connected by an undirected graph.
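As a rough sketch of decentralized averaging with momentum (not the paper's exact DFedAvgM algorithm), the code below runs local heavy-ball updates on invented scalar quadratic objectives and then averages each client's model with its neighbors on an undirected ring graph via a doubly stochastic mixing matrix.

```python
import numpy as np

# Hypothetical ring of 4 clients, each minimizing f_i(w) = 0.5 * (w - c_i)^2.
# W is a doubly stochastic mixing matrix for the undirected ring graph.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
targets = np.array([1.0, 2.0, 3.0, 4.0])
w = np.zeros(4)          # one scalar model per client
v = np.zeros(4)          # local momentum buffers
lr, beta = 0.1, 0.9

for _ in range(300):
    grad = w - targets   # local gradients
    v = beta * v + grad  # heavy-ball momentum, kept local
    w = w - lr * v       # local model update
    w = W @ w            # gossip averaging with graph neighbors

print("client models:", w.round(3), "average target:", targets.mean())
```

With a constant step size, the clients do not reach exact consensus, but their average converges to the average optimum and the disagreement between neighbors stays bounded; this mirrors the neighborhood-convergence behavior typical of decentralized SGD analyses.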
arXiv Detail & Related papers (2021-04-23T02:01:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.