Efficient Learning of Quadratic Variance Function Directed Acyclic
Graphs via Topological Layers
- URL: http://arxiv.org/abs/2111.01560v1
- Date: Mon, 1 Nov 2021 07:44:16 GMT
- Title: Efficient Learning of Quadratic Variance Function Directed Acyclic
Graphs via Topological Layers
- Authors: Wei Zhou and Xin He and Wei Zhong and Junhui Wang
- Abstract summary: We study a special class of non-Gaussian DAG models, where the conditional variance of each node given its parents is a quadratic function of its conditional mean.
To facilitate learning, we introduce a novel concept of topological layers, and develop an efficient DAG learning algorithm.
- Score: 18.91995406229772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Directed acyclic graph (DAG) models are widely used to represent causal
relationships among random variables in many application domains. This paper
studies a special class of non-Gaussian DAG models, where the conditional
variance of each node given its parents is a quadratic function of its
conditional mean. This class of non-Gaussian DAG models is fairly flexible
and admits many popular distributions as special cases, including Poisson,
Binomial, Geometric, Exponential, and Gamma. To facilitate learning, we
introduce a novel concept of topological layers, and develop an efficient DAG
learning algorithm. It first reconstructs the topological layers in a
hierarchical fashion and then recovers the directed edges between nodes in
different layers, which requires much less computational cost than most
existing algorithms in the literature. Its advantage is also demonstrated in a
number of simulated examples, as well as in applications to two real-life
datasets: an NBA player statistics dataset and a cosmetic sales dataset
collected by Alibaba.
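The central device in the abstract is the topological layer decomposition: root nodes sit in layer 0, and every directed edge points from a lower layer to a strictly higher one. The paper's algorithm infers these layers from data via moment properties of the QVF distributions, which the abstract does not spell out; the sketch below only illustrates the layer structure itself, computing layers from a known parent set (all function and variable names are our own, not the authors').

```python
from collections import defaultdict

def topological_layers(parents):
    """Assign each node of a DAG to a topological layer.

    Layer 0 holds the root nodes (no parents); a node's layer is
    1 + the maximum layer among its parents, so every directed edge
    goes from a lower layer to a strictly higher one.

    parents: dict mapping node -> set of parent nodes.
    Returns: dict mapping layer index -> sorted list of nodes.
    """
    layer_of = {}

    def layer(v):
        # Memoized longest-path-from-root depth of node v.
        if v not in layer_of:
            ps = parents.get(v, set())
            layer_of[v] = 0 if not ps else 1 + max(layer(p) for p in ps)
        return layer_of[v]

    # Collect every node mentioned, as child or as parent.
    nodes = set(parents) | {p for ps in parents.values() for p in ps}
    layers = defaultdict(list)
    for v in nodes:
        layers[layer(v)].append(v)
    return {t: sorted(vs) for t, vs in sorted(layers.items())}

# Toy DAG: X1 -> X2 -> X4 and X1 -> X3 -> X4.
dag = {"X2": {"X1"}, "X3": {"X1"}, "X4": {"X2", "X3"}}
print(topological_layers(dag))
# {0: ['X1'], 1: ['X2', 'X3'], 2: ['X4']}
```

The two-stage strategy described in the abstract would first recover this layer assignment from observational data and only then test for directed edges between nodes in different layers, which shrinks the search space relative to scoring all possible orderings.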
Related papers
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z) - Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel graph deep model with a non-gradient decision layer for graph mining.
The proposed model has achieved state-of-the-art performance compared to the current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z) - From graphs to DAGs: a low-complexity model and a scalable algorithm [0.0]
This paper presents a low-complexity model, called LoRAM for Low-Rank Additive Model, which combines low-rank matrix factorization with a sparsification mechanism for the continuous optimization of DAGs.
The proposed approach reduces the complexity from cubic to quadratic while handling the same DAG characteristic function as NoTears.
arXiv Detail & Related papers (2022-04-10T10:22:56Z) - Inference of Multiscale Gaussian Graphical Model [0.0]
We propose a new method that simultaneously infers a hierarchical clustering structure and the graphs describing the independence structure at each level of the hierarchy.
Results on real and synthetic data are presented.
arXiv Detail & Related papers (2022-02-11T17:11:20Z) - BCDAG: An R package for Bayesian structure and Causal learning of
Gaussian DAGs [77.34726150561087]
We introduce the R package BCDAG for causal discovery and causal effect estimation from observational data.
Our implementation scales efficiently with the number of observations and, whenever the DAGs are sufficiently sparse, the number of variables in the dataset.
We then illustrate the main functions and algorithms on both real and simulated datasets.
arXiv Detail & Related papers (2022-01-28T09:30:32Z) - Learning linear non-Gaussian directed acyclic graph with diverging
number of nodes [12.49848873864773]
The acyclic model, often depicted as a directed acyclic graph (DAG), has been widely employed to represent directional causal relations among collected nodes.
We propose an efficient method to learn linear non-Gaussian DAG in high dimensional cases, where the noises can be of any continuous non-Gaussian distribution.
arXiv Detail & Related papers (2021-11-01T07:34:53Z) - A Deep Latent Space Model for Graph Representation Learning [10.914558012458425]
We propose a Deep Latent Space Model (DLSM) for directed graphs to incorporate the traditional latent variable based generative model into deep learning frameworks.
Our proposed model consists of a graph convolutional network (GCN) encoder and a decoder, which are layer-wise connected by a hierarchical variational auto-encoder architecture.
Experiments on real-world datasets show that the proposed model achieves the state-of-the-art performances on both link prediction and community detection tasks.
arXiv Detail & Related papers (2021-06-22T12:41:19Z) - Spatial-spectral Hyperspectral Image Classification via Multiple Random
Anchor Graphs Ensemble Learning [88.60285937702304]
This paper proposes a novel spatial-spectral HSI classification method via multiple random anchor graphs ensemble learning (RAGE).
Firstly, the local binary pattern is adopted to extract more descriptive features on each selected band, which preserves local structures and subtle changes of a region.
Secondly, adaptive neighbor assignment is introduced in the construction of the anchor graph, to reduce the computational complexity.
arXiv Detail & Related papers (2021-03-25T09:31:41Z) - Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolutional networks (GCNs).
We design a specific GAE-based model for graph clustering that is consistent with the theory, namely Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z) - Revisiting Graph based Collaborative Filtering: A Linear Residual Graph
Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) for Recommender Systems (RS).
We show that removing non-linearities enhances recommendation performance, consistent with the theory of simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.