Neural Graphical Models
- URL: http://arxiv.org/abs/2210.00453v4
- Date: Wed, 16 Aug 2023 20:32:42 GMT
- Title: Neural Graphical Models
- Authors: Harsh Shrivastava, Urszula Chajewska
- Abstract summary: We introduce Neural Graphical Models (NGMs) to represent complex feature dependencies with reasonable computational costs.
We capture the dependency structure between the features along with their complex function representations by using a neural network as a multi-task learning framework.
NGMs can fit generic graph structures including directed, undirected and mixed-edge graphs as well as support mixed input data types.
- Score: 2.6842860806280058
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Probabilistic Graphical Models are often used to understand dynamics of a
system. They can model relationships between features (nodes) and the
underlying distribution. Theoretically, these models can represent very complex
dependency functions, but in practice simplifying assumptions are often made
due to the computational limitations associated with graph operations. In this
work, we introduce Neural Graphical Models (NGMs), which attempt to represent complex
feature dependencies with reasonable computational costs. Given a graph of
feature relationships and corresponding samples, we capture the dependency
structure between the features along with their complex function
representations by using a neural network as a multi-task learning framework.
We provide efficient learning, inference and sampling algorithms. NGMs can fit
generic graph structures including directed, undirected and mixed-edge graphs
as well as support mixed input data types. We present empirical studies that
show NGMs' capability to represent Gaussian graphical models, perform inference
analysis on lung cancer data, and extract insights from real-world infant
mortality data provided by the Centers for Disease Control and Prevention.
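The abstract's central construction, a single network that both fits the samples and respects a given feature graph, can be illustrated with a small sketch. The code below is not the authors' implementation; it assumes a plain multi-layer perceptron that regresses every feature from all features (the multi-task view) and uses the product of the network's absolute weight matrices as a proxy for which inputs can influence which outputs, penalizing dependency paths that are absent from the given graph. The class name `NGMSketch`, the architecture, and all hyperparameters are illustrative.

```python
# Minimal sketch of an NGM-style multi-task objective (not the authors' code).
# Assumptions: x is an (n, d) sample matrix, adj is a (d, d) binary adjacency
# matrix of the given feature graph (self-loops allowed), and the product of
# absolute weight matrices is used as a proxy for feature-to-feature dependence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NGMSketch(nn.Module):
    def __init__(self, d: int, hidden: int = 32):
        super().__init__()
        self.fc1 = nn.Linear(d, hidden)
        self.fc2 = nn.Linear(hidden, d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multi-task regression: predict all d features from all d features.
        return self.fc2(torch.relu(self.fc1(x)))

    def dependency_proxy(self) -> torch.Tensor:
        # Entry (i, j) is nonzero only if some path of nonzero weights
        # connects input feature j to output feature i.
        return self.fc2.weight.abs() @ self.fc1.weight.abs()

def ngm_loss(model: NGMSketch, x: torch.Tensor, adj: torch.Tensor,
             lam: float = 1.0) -> torch.Tensor:
    recon = F.mse_loss(model(x), x)            # fit the samples
    forbidden = 1.0 - adj                      # feature pairs with no edge
    structure = (model.dependency_proxy() * forbidden).sum()
    return recon + lam * structure             # soft graph constraint

# Toy usage: a 4-node chain graph with self-loops.
d = 4
adj = torch.eye(d)
for i in range(d - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
model, x = NGMSketch(d), torch.randn(128, d)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    ngm_loss(model, x, adj).backward()
    opt.step()
```

Inference and sampling in the paper go beyond this sketch; the point here is only how one network can encode both the functional fit and the graph's dependency structure as a soft constraint.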
Related papers
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second it generates the edge features conditioned on the node features.
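As a rough illustration of the two-phase idea (a toy sketch, not GrannGAN, and with the adversarial training omitted), one network can produce node features from noise while mixing over the given skeleton, and a second can produce edge features conditioned on the features of each edge's endpoints. All class names and dimensions below are made up.

```python
# Toy two-phase generation flow over a fixed graph skeleton
# (illustration only; the paper's adversarial training is omitted).
import torch
import torch.nn as nn

class NodeFeatureGen(nn.Module):
    """Phase 1: node features from noise, mixed over the graph skeleton."""
    def __init__(self, noise_dim: int = 8, feat_dim: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(noise_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))

    def forward(self, z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return adj @ self.mlp(z)       # one round of neighborhood mixing

class EdgeFeatureGen(nn.Module):
    """Phase 2: edge features conditioned on the two endpoint node features."""
    def __init__(self, feat_dim: int = 16, edge_dim: int = 4):
        super().__init__()
        self.mlp = nn.Linear(2 * feat_dim, edge_dim)

    def forward(self, node_feats: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index
        return self.mlp(torch.cat([node_feats[src], node_feats[dst]], dim=-1))

# Toy skeleton: a triangle.
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
adj = torch.zeros(3, 3)
adj[edge_index[0], edge_index[1]] = 1.0
adj = adj + adj.T + torch.eye(3)

node_feats = NodeFeatureGen()(torch.randn(3, 8), adj)   # phase 1
edge_feats = EdgeFeatureGen()(node_feats, edge_index)   # phase 2
```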
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural networks (HGNNs) are a very popular technique for the modeling and analysis of heterogeneous graphs.
We develop a novel and robust heterogeneous graph contrastive learning approach, HGCL, which for the first time introduces two views guided respectively by node attributes and graph topologies.
In this new approach, we adopt distinct and well-suited attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information in attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
arXiv Detail & Related papers (2022-04-01T10:01:37Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
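One simple way to get a purely structural model out of this idea (a toy sketch under strong simplifications, not the paper's architecture) is to represent each graph by its kernel values against a set of reference graphs and train an ordinary classifier on those features; the degree-histogram kernel below stands in for a proper graph kernel such as Weisfeiler-Lehman.

```python
# Toy sketch: graph-level features from kernel evaluations against reference
# graphs, with no node embeddings computed. The degree-histogram kernel is a
# crude stand-in for a proper graph kernel.
import numpy as np

rng = np.random.default_rng(0)

def degree_histogram(adj: np.ndarray, max_degree: int = 10) -> np.ndarray:
    degrees = adj.sum(axis=1).astype(int)
    hist = np.bincount(np.clip(degrees, 0, max_degree), minlength=max_degree + 1)
    return hist / max(hist.sum(), 1)

def graph_kernel(adj_a: np.ndarray, adj_b: np.ndarray) -> float:
    # Inner product of degree histograms: a valid (if weak) graph kernel.
    return float(degree_histogram(adj_a) @ degree_histogram(adj_b))

def kernel_features(graphs, references) -> np.ndarray:
    # Each graph is described only by kernel values against the references;
    # a linear model or small MLP can then be trained on these features.
    return np.array([[graph_kernel(g, r) for r in references] for g in graphs])

def random_graph(n: int) -> np.ndarray:
    upper = np.triu(rng.integers(0, 2, (n, n)), 1)
    return upper + upper.T

# Toy usage: two graphs featurized against two reference graphs.
print(kernel_features([random_graph(6), random_graph(8)],
                      [random_graph(5), random_graph(7)]).shape)   # (2, 2)
```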
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Brain dynamics via Cumulative Auto-Regressive Self-Attention [0.0]
We present a model that is considerably shallower than deep graph neural networks (GNNs).
Our model learns the autoregressive structure of individual time series and estimates directed connectivity graphs.
We demonstrate our results on a functional neuroimaging dataset classifying schizophrenia patients and controls.
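The link between an autoregressive fit and a directed connectivity graph can be seen with a much simpler stand-in than the paper's cumulative self-attention model: in a first-order vector autoregression, coefficient A[i, j] measures how strongly channel j at the previous timepoint drives channel i at the current one, so the fitted coefficient matrix can be read as a directed graph estimate. The sketch below is only that baseline, run on made-up data.

```python
# Toy stand-in (not the paper's model): a first-order vector autoregression
# fitted by least squares; |A[i, j]| is read as directed influence j -> i.
import numpy as np

def fit_var1(x: np.ndarray) -> np.ndarray:
    """x: (timepoints, channels). Returns the (channels, channels) VAR(1) matrix A."""
    past, future = x[:-1], x[1:]
    # Least-squares solution of future ~ past @ A.T
    a_transposed, *_ = np.linalg.lstsq(past, future, rcond=None)
    return a_transposed.T

# Toy usage: channel 0 drives channel 1 with a one-step lag.
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 3))
x[1:, 1] += 0.8 * x[:-1, 0]
connectivity = np.abs(fit_var1(x))
print(connectivity.round(2))   # the (1, 0) entry should stand out
```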
arXiv Detail & Related papers (2021-11-01T21:50:35Z)
- Dynamic Adaptive Spatio-temporal Graph Convolution for fMRI Modelling [0.0]
We propose a dynamic adaptive spatio-temporal graph convolution (DASTGCN) model to overcome the shortcomings of pre-defined, static, correlation-based graph structures.
The proposed approach allows end-to-end inference of dynamic connections between brain regions via a layer-wise graph structure learning module.
We evaluate our pipeline on the UKBiobank for age and gender classification tasks from resting-state functional scans.
arXiv Detail & Related papers (2021-09-26T07:19:47Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which show that our model can effectively improve performance for semi-supervised node classification on graphs.
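To make the explicit pairwise factors concrete, the sketch below (an illustration under assumptions, not the EPFGNN code) writes the energy of a joint labeling as unary scores from a node-level network plus a learnable label-compatibility term summed over edges; in the actual model a GNN backbone would supply the unary (input-output) part.

```python
# Illustrative pairwise-factorized energy (not the EPFGNN implementation).
# E(y) = -sum_i unary[i, y_i] - sum_{(i,j) in edges} pairwise[y_i, y_j]
import torch
import torch.nn as nn

class PairwiseEnergy(nn.Module):
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        # Input-output relations: stand-in for the GNN backbone.
        self.unary_net = nn.Linear(in_dim, num_classes)
        # Output-output relations: explicit pairwise label compatibility.
        self.pairwise = nn.Parameter(torch.zeros(num_classes, num_classes))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor,
                y: torch.Tensor) -> torch.Tensor:
        unary = self.unary_net(x)                          # (num_nodes, num_classes)
        unary_score = unary[torch.arange(len(y)), y].sum()
        src, dst = edge_index
        pair_score = self.pairwise[y[src], y[dst]].sum()
        return -(unary_score + pair_score)                 # lower energy = more compatible labeling

# Toy usage: 4 nodes on a path, 3 classes.
x = torch.randn(4, 5)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
y = torch.tensor([0, 0, 1, 1])
print(PairwiseEnergy(in_dim=5, num_classes=3)(x, edge_index, y))
```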
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z)
- Causal Inference with Deep Causal Graphs [0.0]
Parametric causal modelling techniques rarely provide functionality for counterfactual estimation.
Deep Causal Graphs is an abstract specification of the required functionality for a neural network to model causal distributions.
We demonstrate its expressive power in modelling complex interactions and showcase applications to machine learning explainability and fairness.
arXiv Detail & Related papers (2020-06-15T13:03:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.