Structural transfer learning of non-Gaussian DAG
- URL: http://arxiv.org/abs/2310.10239v1
- Date: Mon, 16 Oct 2023 10:01:27 GMT
- Title: Structural transfer learning of non-Gaussian DAG
- Authors: Mingyang Ren, Xin He, Junhui Wang
- Abstract summary: Directed acyclic graphs (DAGs) have been widely employed to represent directional relationships among a set of collected nodes.
It remains an open question how to pool heterogeneous data for better DAG structure reconstruction in the target study.
We introduce a novel set of structural similarity measures for DAGs and then present a transfer DAG learning framework.
- Score: 24.11895013147964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Directed acyclic graphs (DAGs) have been widely employed to represent
directional relationships among a set of collected nodes. Yet, the data
available in any single study are often limited for accurate DAG reconstruction,
whereas heterogeneous data may be collected from multiple relevant studies. It
remains an open question how to pool heterogeneous data for better DAG
structure reconstruction in the target study. In this paper, we first
introduce a novel set of structural similarity measures for DAGs and then
present a transfer DAG learning framework by effectively leveraging information
from auxiliary DAGs of different levels of similarities. Our theoretical
analysis shows substantial improvement in terms of DAG reconstruction in the
target study, even when no auxiliary DAG is overall similar to the target DAG,
which is in sharp contrast to most existing transfer learning methods. The
advantage of the proposed transfer DAG learning is also supported by extensive
numerical experiments on both synthetic data and multi-site brain functional
connectivity network data.
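As an illustration only (the paper's actual estimator is not reproduced here), the core idea of leveraging auxiliary DAGs by weighting them according to their structural similarity to a noisy target estimate can be sketched as follows; `similarity` and `pooled_support` are hypothetical helper names, not from the paper:

```python
import numpy as np

def similarity(target_adj, aux_adj):
    """Hypothetical structural similarity: the fraction of off-diagonal
    entries on which two binary adjacency matrices agree."""
    off = ~np.eye(target_adj.shape[0], dtype=bool)
    return np.mean(target_adj[off] == aux_adj[off])

def pooled_support(target_adj, aux_adjs, threshold=0.5):
    """Vote on each candidate edge, weighting every auxiliary DAG by its
    similarity to the (noisy) target estimate, so dissimilar auxiliary
    studies contribute less."""
    weights = np.array([similarity(target_adj, a) for a in aux_adjs])
    weights = weights / weights.sum()
    votes = sum(w * a for w, a in zip(weights, aux_adjs))
    return (votes >= threshold).astype(int)
```

In this toy scheme an auxiliary study that disagrees with the target on many edges gets a small weight, which loosely mirrors the paper's point that auxiliary DAGs need not be similar overall to be useful.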
Related papers
- Convolutional Learning on Directed Acyclic Graphs [10.282099295800322]
We develop a novel convolutional architecture tailored for learning from data defined over directed acyclic graphs (DAGs).
We develop a novel convolutional graph neural network that integrates learnable DAG filters to account for the partial ordering induced by the graph topology.
arXiv Detail & Related papers (2024-05-05T21:30:18Z)
- Toward Generative Data Augmentation for Traffic Classification [54.92823760790628]
Data Augmentation (DA), i.e. augmenting training data with synthetic samples, is widely adopted in Computer Vision (CV) to improve model performance.
DA has not yet been popularized in networking use cases, including Traffic Classification (TC).
arXiv Detail & Related papers (2023-10-21T08:08:37Z)
- Discovering Dynamic Causal Space for DAG Structure Learning [64.763763417533]
We propose a dynamic causal space for DAG structure learning, coined CASPER.
It integrates the graph structure into the score function as a new measure in the causal space to faithfully reflect the causal distance between estimated and ground truth DAG.
arXiv Detail & Related papers (2023-06-05T12:20:40Z)
- Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
- DAPDAG: Domain Adaptation via Perturbed DAG Reconstruction [78.76115370275733]
We learn an auto-encoder that undertakes inference on population statistics given features and reconstructs a directed acyclic graph (DAG) as an auxiliary task.
The underlying DAG structure is assumed invariant among observed variables, whose conditional distributions are allowed to vary across domains, driven by a latent environmental variable $E$.
We train the encoder and decoder jointly in an end-to-end manner and conduct experiments on synthetic and real datasets with mixed variables.
arXiv Detail & Related papers (2022-08-02T11:43:03Z)
- DAG-WGAN: Causal Structure Learning With Wasserstein Generative Adversarial Networks [2.492300648514129]
This paper proposes DAG-WGAN, which combines a Wasserstein-based adversarial loss and an auto-encoder architecture with an acyclicity constraint.
It simultaneously learns causal structures and improves its data generation capability by leveraging the strength of the Wasserstein distance metric.
Our experiments have evaluated DAG-WGAN against the state-of-the-art and demonstrated its good performance.
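An acyclicity constraint of this kind is commonly expressed via the NOTEARS-style trace-exponential penalty $h(W) = \mathrm{tr}(e^{W \circ W}) - d$, which is zero exactly when the weighted graph $W$ is a DAG; a minimal sketch of that standard penalty (assuming, but not confirming, that DAG-WGAN uses this particular form):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS-style acyclicity measure h(W) = tr(exp(W * W)) - d.

    W * W is the elementwise square, so every cycle in W contributes a
    positive term to the trace of the matrix exponential; h(W) == 0
    if and only if W encodes a DAG.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d
```

In score-based learners the penalty is added to the training loss, so gradient descent is steered toward acyclic solutions.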
arXiv Detail & Related papers (2022-04-01T12:27:27Z)
- Differentiable DAG Sampling [33.45069308137142]
We propose a new differentiable probabilistic model over DAGs (DP-DAG).
DP-DAG allows fast and differentiable DAG sampling suited to continuous optimization.
We propose VI-DP-DAG, a new method for DAG learning from observational data.
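One classical (non-differentiable) way to sample a DAG, which DP-DAG relaxes to a differentiable form, is to draw a permutation as a topological order and keep only edges pointing forward in that order; a plain Monte Carlo sketch of that baseline, not the DP-DAG algorithm itself:

```python
import numpy as np

def sample_dag(d, edge_prob=0.5, rng=None):
    """Sample a random DAG adjacency matrix over d nodes.

    Any DAG can be written as a permutation of a strictly upper-triangular
    matrix, so drawing a random permutation (a topological order) and a
    random upper-triangular edge mask always yields a DAG.
    """
    rng = np.random.default_rng(rng)
    perm = rng.permutation(d)
    upper = np.triu(rng.random((d, d)) < edge_prob, k=1)  # DAG in canonical order
    P = np.eye(d, dtype=int)[perm]                        # permutation matrix
    return (P.T @ upper @ P).astype(int)                  # relabel the nodes
```

Both discrete choices (the permutation and the edge mask) are what DP-DAG replaces with differentiable relaxations so the whole sampler can sit inside a gradient-based learner.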
arXiv Detail & Related papers (2022-01-28T09:30:32Z)
- BCDAG: An R package for Bayesian structure and Causal learning of Gaussian DAGs [77.34726150561087]
We introduce the BCDAG R package for causal discovery and causal effect estimation from observational data.
Our implementation scales efficiently with the number of observations and, whenever the DAGs are sufficiently sparse, the number of variables in the dataset.
We then illustrate the main functions and algorithms on both real and simulated datasets.
arXiv Detail & Related papers (2021-12-06T03:35:21Z)
- BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic graph (DAG).
Recent advances enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
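A linear-Gaussian SEM over a DAG can be simulated in closed form: writing $X = XW + E$ for the weighted adjacency $W$ and Gaussian noise $E$ gives $X = E(I - W)^{-1}$. A generic sketch of such a simulator (not BCD Nets' own code):

```python
import numpy as np

def simulate_linear_gaussian_sem(W, n, noise_scale=1.0, rng=None):
    """Draw n samples from the linear-Gaussian SEM X = X W + E.

    W is the weighted adjacency matrix of a DAG, with W[i, j] the direct
    effect of node i on node j; solving the SEM gives X = E (I - W)^{-1},
    which is well defined because I - W is invertible for any DAG.
    """
    rng = np.random.default_rng(rng)
    d = W.shape[0]
    E = rng.normal(scale=noise_scale, size=(n, d))
    return E @ np.linalg.inv(np.eye(d) - W)
```

For example, with a single edge of weight 2 from node 0 to node 1 and unit noise, the second column is $2X_0 + E_1$ and so has variance 5, which is easy to check empirically.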
arXiv Detail & Related papers (2021-12-06T03:35:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.