Neural Graph Revealers
- URL: http://arxiv.org/abs/2302.13582v2
- Date: Tue, 28 Feb 2023 06:02:56 GMT
- Title: Neural Graph Revealers
- Authors: Harsh Shrivastava, Urszula Chajewska
- Abstract summary: We propose Neural Graph Revealers (NGRs) to efficiently merge sparse graph recovery methods with Probabilistic Graphical Models.
NGRs view neural networks as a `glass box', or more specifically as a multitask learning framework.
We show experimental results for sparse graph recovery and probabilistic inference on a multimodal infant mortality dataset from the Centers for Disease Control and Prevention.
- Score: 2.692919446383274
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sparse graph recovery methods work well where the data follows their
assumptions but often they are not designed for doing downstream probabilistic
queries. This limits their adoption to only identifying connections among the
input variables. Probabilistic Graphical Models (PGMs), on the other hand,
assume an underlying base graph between variables and learn a distribution
over them. PGM design choices are carefully made so that the inference and
sampling algorithms are efficient. This brings in certain restrictions and
often simplifying assumptions. In this work, we propose Neural Graph Revealers
(NGRs), which attempt to efficiently merge sparse graph recovery methods with
PGMs into a single flow. The problem setting consists of input data X with D
features and M samples; the task is to recover a sparse graph showing
connections between the features while simultaneously learning a probability
distribution over the D features. NGRs view neural networks as a `glass box', or
more specifically as a multitask learning framework. We introduce
`Graph-constrained path norm' that NGRs leverage to learn a graphical model
that captures complex non-linear functional dependencies between the features
in the form of an undirected sparse graph. Furthermore, NGRs can handle
multimodal inputs such as images, text, categorical data, and embeddings,
which are not straightforward to incorporate into existing methods. We show
experimental results of doing sparse graph recovery and probabilistic inference
on data from Gaussian graphical models and a multimodal infant mortality
dataset from the Centers for Disease Control and Prevention.
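The `Graph-constrained path norm' mentioned in the abstract can be sketched as follows. This is an illustrative reading, not the paper's actual implementation: it assumes a 2-layer MLP mapping the D input features back to D outputs, and the function names (`path_strength_matrix`, `graph_constrained_path_norm`) are made up for this example.

```python
import numpy as np

def path_strength_matrix(W1, W2):
    """Entry (i, j) aggregates |W2[i, h]| * |W1[h, j]| over all hidden
    units h, i.e. the total absolute path weight from input feature j
    to output feature i through the network."""
    P = np.abs(W2) @ np.abs(W1)      # (D, D), nonnegative path weights
    np.fill_diagonal(P, 0.0)         # ignore self-loops
    return np.maximum(P, P.T)        # symmetrize -> undirected graph candidate

def graph_constrained_path_norm(W1, W2, mask):
    """Penalize path strength on edges disallowed by `mask` (1 = allowed).
    Driving this term toward zero constrains the functional dependencies
    the network can express to the edges of the sparse graph."""
    P = path_strength_matrix(W1, W2)
    return float(np.sum(P * (1.0 - mask)))

rng = np.random.default_rng(0)
D, H = 5, 16
W1 = rng.normal(size=(H, D))          # input -> hidden weights
W2 = rng.normal(size=(D, H))          # hidden -> output weights
P = path_strength_matrix(W1, W2)      # candidate adjacency for the sparse graph
full_mask = np.ones((D, D))           # all edges allowed -> zero penalty
```

Thresholding `P` would yield the recovered undirected graph, while the penalty term, added to the training loss, keeps the network's dependencies consistent with that graph.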
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z) - Data Augmentation for Supervised Graph Outlier Detection with Latent Diffusion Models [42.19529054800729]
We introduce GODM, a novel data augmentation for mitigating class imbalance in supervised graph outlier detection with latent Diffusion Models.
Our proposed method consists of three key components: (1) a Variational Encoder maps the heterogeneous information inherent in the graph data into a unified latent space, (2) a Graph Generator synthesizes graph data statistically similar to real outliers from the latent space, and (3) a Latent Diffusion Model learns the latent space distribution of real organic data by iterative denoising.
arXiv Detail & Related papers (2023-12-29T16:50:40Z) - Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
arXiv Detail & Related papers (2022-04-01T10:01:37Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - OOD-GNN: Out-of-Distribution Generalized Graph Neural Network [73.67049248445277]
Graph neural networks (GNNs) have achieved impressive performance when testing and training graph data come from identical distribution.
Existing GNNs lack out-of-distribution generalization abilities so that their performance substantially degrades when there exist distribution shifts between testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs.
arXiv Detail & Related papers (2021-12-07T16:29:10Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for fitting such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z) - Differentiable Graph Module (DGM) for Graph Convolutional Networks [44.26665239213658]
Differentiable Graph Module (DGM) is a learnable function that predicts edge probabilities in the graph which are optimal for the downstream task.
We provide an extensive evaluation of applications from the domains of healthcare (disease prediction), brain imaging (age prediction), computer graphics (3D point cloud segmentation), and computer vision (zero-shot learning).
arXiv Detail & Related papers (2020-02-11T12:59:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.