Residual Correlation in Graph Neural Network Regression
- URL: http://arxiv.org/abs/2002.08274v2
- Date: Tue, 16 Jun 2020 22:18:57 GMT
- Title: Residual Correlation in Graph Neural Network Regression
- Authors: Junteng Jia and Austin R. Benson
- Abstract summary: We show that the conditional independence assumption severely limits predictive power.
We address this problem with an interpretable and efficient framework.
Our framework achieves substantially higher accuracy than competing baselines.
- Score: 39.54530450932135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A graph neural network transforms features in each vertex's neighborhood into
a vector representation of the vertex. Afterward, each vertex's representation
is used independently for predicting its label. This standard pipeline
implicitly assumes that vertex labels are conditionally independent given their
neighborhood features. However, this is a strong assumption, and we show that
it is far from true on many real-world graph datasets. Focusing on regression
tasks, we find that this conditional independence assumption severely limits
predictive power. This should not be that surprising, given that traditional
graph-based semi-supervised learning methods such as label propagation work in
the opposite fashion by explicitly modeling the correlation in predicted
outcomes.
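To make that contrast concrete, here is a minimal label propagation sketch in Python (a textbook algorithm in the style of Zhou et al.'s local-and-global-consistency method, not this paper's contribution; the function name and defaults are illustrative). Each vertex repeatedly averages its neighbors' current estimates, so correlation among connected outcomes is modeled explicitly rather than assumed away.

```python
import numpy as np

def label_propagation(A, y, labeled, alpha=0.9, num_iters=50):
    """Classic label propagation for graph-based regression.

    A       : (n, n) symmetric adjacency matrix (dense, for clarity)
    y       : (n,) label vector; entries outside `labeled` are ignored
    labeled : (n,) boolean mask of observed vertices
    alpha   : mixing weight between neighbor information and seed labels
    """
    # Symmetrically normalized adjacency: S = D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    seed = np.where(labeled, y, 0.0)  # unlabeled vertices start at 0
    f = seed.copy()
    for _ in range(num_iters):
        # Each vertex blends its neighbors' current estimates with its
        # seed value, explicitly propagating correlation along edges.
        f = alpha * (S @ f) + (1.0 - alpha) * seed
    return f
```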
Here, we address this problem with an interpretable and efficient framework
that can improve any graph neural network architecture simply by exploiting
correlation structure in the regression residuals. In particular, we model the
joint distribution of residuals on vertices with a parameterized multivariate
Gaussian, and estimate the parameters by maximizing the marginal likelihood of
the observed labels. Our framework achieves substantially higher accuracy than
competing baselines, and the learned parameters can be interpreted as the
strength of correlation among connected vertices. Furthermore, we develop
linear time algorithms for low-variance, unbiased model parameter estimates,
allowing us to scale to large networks. We also provide a basic version of our
method that makes stronger assumptions on correlation structure but is painless
to implement, often leading to great practical performance with minimal
overhead.
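As a concrete illustration of the framework described above, the sketch below post-processes any trained GNN's regression outputs. It assumes a specific residual precision matrix Gamma = beta * (I - alpha * S), with S the normalized adjacency and 0 < alpha < 1; that parameterization, the fixed alpha and beta, and all names here are illustrative assumptions, and the paper's actual estimators (including the linear-time, low-variance versions) are more involved. Given the Gaussian residual model, the predictions on unlabeled vertices are corrected by conditioning on the observed residuals.

```python
import numpy as np

def correlated_residual_predictions(A, y, gnn_pred, labeled,
                                    alpha=0.5, beta=1.0):
    """Correct GNN regression outputs via residual correlation (a sketch).

    Assumed residual model (illustrative, not the paper's exact estimator):
        r ~ N(0, Gamma^{-1}),  Gamma = beta * (I - alpha * S),
    where S is the symmetrically normalized adjacency; 0 < alpha < 1
    keeps Gamma positive definite.

    A        : (n, n) adjacency matrix
    y        : (n,) labels (used only on labeled vertices)
    gnn_pred : (n,) base predictions from any trained GNN
    labeled  : (n,) boolean mask of observed vertices
    """
    n = len(y)
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    Gamma = beta * (np.eye(n) - alpha * S)   # residual precision matrix

    r_obs = y[labeled] - gnn_pred[labeled]   # observed residuals
    U, L = ~labeled, labeled
    # Conditional mean of a zero-mean Gaussian with precision Gamma:
    #   E[r_U | r_L] = -Gamma_UU^{-1} Gamma_UL r_L
    Gamma_UU = Gamma[np.ix_(U, U)]
    Gamma_UL = Gamma[np.ix_(U, L)]
    r_unobs = -np.linalg.solve(Gamma_UU, Gamma_UL @ r_obs)

    out = gnn_pred.copy()
    out[U] += r_unobs
    return out
```

Here `alpha` plays the role of the learned correlation strength among connected vertices; in the full method it would be estimated by maximizing the marginal likelihood of the observed labels rather than fixed, and the dense solve would be replaced by sparse iterative solvers (e.g., conjugate gradients) to scale to large networks.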
Related papers
- Graph Structure Learning with Interpretable Bayesian Neural Networks [10.957528713294874]
We introduce novel iterations with independently interpretable parameters.
These parameters influence characteristics of the estimated graph, such as edge sparsity.
After unrolling these iterations, prior knowledge over such graph characteristics shapes the prior distributions.
Fast execution and parameter efficiency allow for high-fidelity posterior approximation.
arXiv Detail & Related papers (2024-06-20T23:27:41Z) - Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to selection bias between training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - An Interpretable Graph Generative Model with Heterophily [38.59200985962146]
We propose the first edge-independent graph generative model that is expressive enough to capture heterophily.
Our experiments demonstrate the effectiveness of our model for a variety of important application tasks.
arXiv Detail & Related papers (2021-11-04T17:34:39Z) - Convergent Boosted Smoothing for Modeling Graph Data with Tabular Node Features [46.052312251801]
We propose a framework for iterating boosting with graph propagation steps.
Our approach is anchored in a principled meta loss function.
Across a variety of non-iid graph datasets, our method achieves comparable or superior performance.
arXiv Detail & Related papers (2021-10-26T04:53:12Z) - T-LoHo: A Bayesian Regularization Model for Structured Sparsity and Smoothness on Graphs [0.0]
In graph-structured data, parameters often exhibit structured sparsity and smoothness, in the sense that zero and non-zero values tend to cluster together.
We propose a new prior for high dimensional parameters with graphical relations.
We use it to detect structured sparsity and smoothness simultaneously.
arXiv Detail & Related papers (2021-07-06T10:10:03Z) - Graph Belief Propagation Networks [34.137798598227874]
We introduce a model that combines the advantages of graph neural networks and collective classification.
In our model, potentials on each node only depend on that node's features, and edge potentials are learned via a coupling matrix.
Our approach can be viewed as either an interpretable message-passing graph neural network or a collective classification method with higher capacity and modernized training.
arXiv Detail & Related papers (2021-06-06T05:24:06Z) - Hyperbolic Graph Embedding with Enhanced Semi-Implicit Variational Inference [48.63194907060615]
We build off of semi-implicit graph variational auto-encoders to capture higher-order statistics in a low-dimensional graph latent representation.
We incorporate hyperbolic geometry in the latent space through a Poincare embedding to efficiently represent graphs exhibiting hierarchical structure.
arXiv Detail & Related papers (2020-10-31T05:48:34Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use a contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a well-clustered representative graph, jointly addressing the graph learning and clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)