Causal Mechanism Estimation in Multi-Sensor Systems Across Multiple Domains
- URL: http://arxiv.org/abs/2507.17792v2
- Date: Fri, 25 Jul 2025 07:07:32 GMT
- Title: Causal Mechanism Estimation in Multi-Sensor Systems Across Multiple Domains
- Authors: Jingyi Yu, Tim Pychynski, Marco F. Huber
- Abstract summary: We present a novel three-step approach to inferring causal mechanisms from heterogeneous data collected across multiple domains. By leveraging the principle of Causal Transfer Learning (CTL), CICME is able to reliably detect domain-invariant causal mechanisms when provided with sufficient samples. We show that CICME combines the benefits of applying causal discovery to the pooled data with those of applying it repeatedly to data from individual domains, and that it even outperforms both baseline methods in certain scenarios.
- Score: 34.02899134427814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To gain deeper insights into a complex sensor system through the lens of causality, we present common and individual causal mechanism estimation (CICME), a novel three-step approach to inferring causal mechanisms from heterogeneous data collected across multiple domains. By leveraging the principle of Causal Transfer Learning (CTL), CICME is able to reliably detect domain-invariant causal mechanisms when provided with sufficient samples. The identified common causal mechanisms are then used to guide the estimation of the remaining causal mechanisms in each domain individually. The performance of CICME is evaluated on linear Gaussian models in scenarios inspired by a manufacturing process. Building upon existing continuous optimization-based causal discovery methods, we show that CICME combines the benefits of applying causal discovery to the pooled data with those of applying it repeatedly to data from individual domains, and that it even outperforms both baseline methods in certain scenarios.
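To make the pipeline concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of the common-versus-individual mechanism idea for linear Gaussian models: each mechanism is first fitted on the pooled data, the per-domain fits are then compared against the pooled fit, and mechanisms that are not domain-invariant fall back to individual per-domain estimates. The function names, the known-parent-set assumption, and the simple tolerance check are illustrative; CICME itself builds on continuous optimization-based causal discovery rather than plain least squares.

```python
# Hypothetical sketch of common vs. individual mechanism estimation (not the paper's code).
# Assumes the parent set of each variable is known and mechanisms are linear Gaussian.
import numpy as np

def fit_mechanism(X_parents, y):
    """Least-squares estimate of a linear mechanism y = X_parents @ w + noise."""
    w, *_ = np.linalg.lstsq(X_parents, y, rcond=None)
    return w

def cicme_like(domains, parents, tol=0.1):
    """domains: list of (n_k, d) arrays, one per domain.
    parents: dict mapping each node index to a list of parent indices.
    Returns, per node, either a common mechanism or individual per-domain mechanisms."""
    pooled = np.vstack(domains)
    mechanisms = {}
    for node, pa in parents.items():
        if not pa:  # root node: no mechanism coefficients to estimate
            mechanisms[node] = {"common": None}
            continue
        # Step 1: estimate the mechanism on the pooled data from all domains.
        w_pool = fit_mechanism(pooled[:, pa], pooled[:, node])
        # Step 2: test whether the mechanism is (approximately) domain-invariant.
        w_doms = [fit_mechanism(D[:, pa], D[:, node]) for D in domains]
        invariant = all(np.max(np.abs(w - w_pool)) < tol for w in w_doms)
        # Step 3: keep the common estimate, or fall back to individual estimates.
        mechanisms[node] = {"common": w_pool} if invariant else {"individual": w_doms}
    return mechanisms

if __name__ == "__main__":
    # Toy example: X0 -> X1 is shared across domains, X0 -> X2 changes between domains.
    rng = np.random.default_rng(0)
    domains = []
    for slope in (1.0, 3.0):
        x0 = rng.normal(size=500)
        x1 = 2.0 * x0 + 0.1 * rng.normal(size=500)    # common mechanism
        x2 = slope * x0 + 0.1 * rng.normal(size=500)  # domain-specific mechanism
        domains.append(np.column_stack([x0, x1, x2]))
    print(cicme_like(domains, parents={0: [], 1: [0], 2: [0]}))
```

In this toy run, the X0 -> X1 coefficient is reported as a single common mechanism, while X0 -> X2 comes back as two individual per-domain estimates, mirroring the intuition that pooling helps only for mechanisms that are actually shared across domains.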
Related papers
- Detecting and Pruning Prominent but Detrimental Neurons in Large Language Models [68.57424628540907]
Large language models (LLMs) often develop learned mechanisms specialized to specific datasets. We introduce a fine-tuning approach designed to enhance generalization by identifying and pruning neurons associated with dataset-specific mechanisms. Our method employs Integrated Gradients to quantify each neuron's influence on high-confidence predictions, pinpointing those that disproportionately contribute to dataset-specific performance.
arXiv Detail & Related papers (2025-07-12T08:10:10Z) - Quantifying Classifier Utility under Local Differential Privacy [5.90975025491779]
Local differential privacy (LDP) provides a quantifiable privacy guarantee for personal data by introducing perturbation at the data source. This paper presents a framework for theoretically quantifying classifier utility under LDP mechanisms.
arXiv Detail & Related papers (2025-07-03T15:42:10Z) - Identifiable Multi-View Causal Discovery Without Non-Gaussianity [63.217175519436125]
We propose a novel approach to linear causal discovery in the framework of multi-view Structural Equation Models (SEMs). We prove the identifiability of all the parameters of the model without any further assumptions on the structure of the SEM other than it being acyclic. The proposed methodology is validated through simulations and an application to real data, where it enables the estimation of causal graphs between brain regions.
arXiv Detail & Related papers (2025-02-27T14:06:14Z) - Generative Intervention Models for Causal Perturbation Modeling [80.72074987374141]
In many applications, it is a priori unknown which mechanisms of a system are modified by an external perturbation. We propose a generative intervention model (GIM) that learns to map perturbation features to distributions over atomic interventions.
arXiv Detail & Related papers (2024-11-21T10:37:57Z) - Revisiting Spurious Correlation in Domain Generalization [12.745076668687748]
We build a structural causal model (SCM) to describe the causality within the data generation process.
We further conduct a thorough analysis of the mechanisms underlying spurious correlation.
In this regard, we propose to control confounding bias in OOD generalization by introducing a propensity score weighted estimator.
arXiv Detail & Related papers (2024-06-17T13:22:00Z) - Learning Invariant Causal Mechanism from Vision-Language Models [14.0158707862717]
We show that the causal mechanism involving both invariant and variant factors in training environments differs from that in test environments. We propose the Invariant Causal Mechanism of CLIP (CLIP-ICM) framework. Our method offers a simple but powerful enhancement, boosting the reliability of CLIP in real-world applications.
arXiv Detail & Related papers (2024-05-24T07:22:35Z) - iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models [48.33685559041322]
This paper focuses on identifying the causal mechanism shifts in two or more related datasets over the same set of variables.
Code implementing the proposed method is open-source and publicly available at https://github.com/kevinsbello/iSCAN.
arXiv Detail & Related papers (2023-06-30T01:48:11Z) - Causality-Based Multivariate Time Series Anomaly Detection [63.799474860969156]
We formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism to generate the multivariate data.
We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly relative to the local causal mechanism.
We evaluate our approach with both simulated and public datasets as well as a case study on real-world AIOps applications.
arXiv Detail & Related papers (2022-06-30T06:00:13Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - Near Instance-Optimality in Differential Privacy [38.8726789833284]
We develop notions of instance optimality in differential privacy inspired by classical statistical theory.
We also develop inverse sensitivity mechanisms, which are instance optimal (or nearly instance optimal) for a large class of estimands.
arXiv Detail & Related papers (2020-05-16T04:53:48Z)