Test-Time Learning of Causal Structure from Interventional Data
- URL: http://arxiv.org/abs/2602.19131v1
- Date: Sun, 22 Feb 2026 11:23:05 GMT
- Title: Test-Time Learning of Causal Structure from Interventional Data
- Authors: Wei Chen, Rui Ding, Bojun Huang, Yang Zhang, Qiang Fu, Yuxuan Liang, Han Shi, Dongmei Zhang
- Abstract summary: We propose TICL (Test-time Interventional Causal Learning), a novel method that synergizes Test-Time Training with Joint Causal Inference. Specifically, we design a self-augmentation strategy to generate instance-specific training data at test time, effectively avoiding distribution shifts. By integrating joint causal inference, we develop a PC-inspired two-phase supervised learning scheme, which effectively leverages self-augmented training data while ensuring theoretical identifiability.
- Score: 50.06913286558919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised causal learning has shown promise in causal discovery, yet it often struggles with generalization across diverse interventional settings, particularly when intervention targets are unknown. To address this, we propose TICL (Test-time Interventional Causal Learning), a novel method that synergizes Test-Time Training with Joint Causal Inference. Specifically, we design a self-augmentation strategy to generate instance-specific training data at test time, effectively avoiding distribution shifts. Furthermore, by integrating joint causal inference, we develop a PC-inspired two-phase supervised learning scheme, which effectively leverages self-augmented training data while ensuring theoretical identifiability. Extensive experiments on bnlearn benchmarks demonstrate TICL's superiority in multiple aspects of causal discovery and intervention target detection.
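The abstract does not give implementation details of the two-phase scheme, but the classical PC algorithm it is inspired by is well documented. As background, here is a minimal sketch of the PC skeleton phase: start from a complete undirected graph and prune edges whose endpoints test as conditionally independent, using a Fisher z-test of partial correlation (a Gaussian assumption; the function names and the test choice here are illustrative, not TICL's actual procedure).

```python
import numpy as np
from itertools import combinations
from math import erf, log, sqrt

def fisher_z_ci_test(data, i, j, cond, alpha=0.05):
    """Test X_i independent of X_j given X_cond via partial correlation
    (Fisher z-transform, Gaussian assumption). Returns True if independent."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)
    # Partial correlation from the precision matrix of the selected variables.
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])
    n = data.shape[0]
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - len(cond) - 3)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return p > alpha

def pc_skeleton(data, alpha=0.05):
    """Phase 1 of a PC-style procedure: prune a complete graph with CI tests
    of increasing conditioning-set size."""
    d = data.shape[1]
    adj = {v: set(range(d)) - {v} for v in range(d)}
    level = 0
    while any(len(adj[v]) - 1 >= level for v in range(d)):
        for i in range(d):
            for j in list(adj[i]):
                others = adj[i] - {j}
                if len(others) < level:
                    continue
                for cond in combinations(others, level):
                    if fisher_z_ci_test(data, i, j, cond, alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        level += 1
    return adj
```

On a linear-Gaussian chain X -> Y -> Z, the skeleton phase keeps the edges X-Y and Y-Z but removes X-Z once it conditions on Y. TICL's contribution, per the abstract, is wrapping such a scheme in supervised learning with self-augmented, instance-specific training data at test time.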
Related papers
- On the Paradoxical Interference between Instruction-Following and Task Solving [50.75960598434753]
Instruction following aims to align Large Language Models (LLMs) with human intent by specifying explicit constraints on how tasks should be performed. We reveal a counterintuitive phenomenon: instruction following can paradoxically interfere with LLMs' task-solving capability. We propose a metric, SUSTAINSCORE, to quantify the interference of instruction following with task solving.
arXiv Detail & Related papers (2026-01-29T17:48:56Z) - Understanding Catastrophic Interference: On the Identifiability of Latent Representations [67.05452287233122]
Catastrophic interference, also known as catastrophic forgetting, is a fundamental challenge in machine learning. We propose a novel theoretical framework that formulates catastrophic interference as an identification problem. Our approach provides both theoretical guarantees and practical performance improvements across synthetic and benchmark datasets.
arXiv Detail & Related papers (2025-09-27T00:53:32Z) - Mitigating Spurious Correlations with Causal Logit Perturbation [22.281052412112263]
This study introduces a novel Causal Logit Perturbation (CLP) framework to train classifiers with generated causal logit perturbations for individual samples. The framework is optimized by an online meta-learning-based algorithm and leverages human causal knowledge by augmenting metadata in both counterfactual and factual manners.
arXiv Detail & Related papers (2025-05-21T08:21:02Z) - Large-Scale Targeted Cause Discovery via Learning from Simulated Data [66.51307552703685]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations. We train a neural network using supervised learning on simulated data to infer causality. Empirical results demonstrate superior performance in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z) - Mixstyle-Entropy: Domain Generalization with Causal Intervention and Perturbation [38.97031630265987]
Domain generalization (DG) addresses distribution shift by learning representations independent of domain-related information, thus facilitating extrapolation to unseen environments.
Existing approaches typically focus on formulating tailored training objectives to extract shared features from the source data.
We propose a novel framework based on causality, named InPer, designed to enhance model generalization by incorporating causal intervention during training and causal perturbation during testing.
arXiv Detail & Related papers (2024-08-07T07:54:19Z) - In-context Contrastive Learning for Event Causality Identification [26.132189768472067]
Event Causality Identification aims at determining the existence of a causal relation between two events.
Recent prompt learning-based approaches have shown promising improvements on the ECI task.
This paper proposes an In-Context Contrastive Learning model that utilizes contrastive learning to enhance the effectiveness of both positive and negative demonstrations.
arXiv Detail & Related papers (2024-05-17T03:32:15Z) - Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips the UAD with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z) - Regularization Through Simultaneous Learning: A Case Study on Plant Classification [0.0]
This paper introduces Simultaneous Learning, a regularization approach drawing on principles of Transfer Learning and Multi-task Learning.
We leverage auxiliary datasets with the target dataset, the UFOP-HVD, to facilitate simultaneous classification guided by a customized loss function.
Remarkably, our approach demonstrates superior performance over models without regularization.
arXiv Detail & Related papers (2023-05-22T19:44:57Z) - Trust Your $\nabla$: Gradient-based Intervention Targeting for Causal Discovery [49.084423861263524]
In this work, we propose a novel Gradient-based Intervention Targeting method, abbreviated GIT.
GIT 'trusts' the gradient estimator of a gradient-based causal discovery framework to provide signals for the intervention acquisition function.
We provide extensive experiments in simulated and real-world datasets and demonstrate that GIT performs on par with competitive baselines.
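The GIT blurb describes using gradient signals as an intervention acquisition function, without giving the exact scoring rule. A toy sketch of the general idea (the scoring rule below is a plausible illustration, not the paper's actual method): rank candidate intervention nodes by the gradient mass concentrated on their incident edges in the learned adjacency parameters.

```python
import numpy as np

def gradient_intervention_score(grad_adjacency):
    """Toy acquisition score in the spirit of gradient-based targeting:
    sum absolute gradient over each node's incoming and outgoing edges."""
    g = np.abs(np.asarray(grad_adjacency, dtype=float))
    return g.sum(axis=0) + g.sum(axis=1)

def pick_intervention_target(grad_adjacency):
    """Choose the node whose edge parameters carry the most gradient signal."""
    return int(np.argmax(gradient_intervention_score(grad_adjacency)))
```

The intuition is that edges with large, persistent gradients are the ones the current model is most uncertain about, so intervening on their endpoints should be most informative.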
arXiv Detail & Related papers (2022-11-24T17:04:45Z) - Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning [114.9857000195174]
A major challenge to widespread industrial adoption of deep reinforcement learning is the potential vulnerability to privacy breaches.
We propose an adversarial attack framework tailored for testing the vulnerability of deep reinforcement learning algorithms to membership inference attacks.
arXiv Detail & Related papers (2021-09-08T23:44:57Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.