The Graphical Nadaraya-Watson Estimator on Latent Position Models
- URL: http://arxiv.org/abs/2303.17229v2
- Date: Tue, 4 Apr 2023 08:33:05 GMT
- Title: The Graphical Nadaraya-Watson Estimator on Latent Position Models
- Authors: M. Gjorgjevski
- Abstract summary: We are interested in the quality of the averaging estimator which for an unlabeled node predicts the average of the observations of its labeled neighbors.
While the estimator itself is very simple, we believe that our results will contribute towards the theoretical understanding of learning on graphs through more sophisticated methods such as Graph Neural Networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a graph with a subset of labeled nodes, we are interested in the
quality of the averaging estimator which for an unlabeled node predicts the
average of the observations of its labeled neighbors. We rigorously study
concentration properties, variance bounds and risk bounds in this context.
While the estimator itself is very simple, we believe that our results will
contribute towards the theoretical understanding of learning on graphs through
more sophisticated methods such as Graph Neural Networks.
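The averaging estimator described in the abstract can be sketched in a few lines: for each unlabeled node, predict the mean of the observations of its labeled neighbors. The graph representation (adjacency dict) and the handling of nodes with no labeled neighbor are illustrative assumptions, not details from the paper.

```python
import numpy as np

def averaging_estimator(adj, labels):
    """Graphical Nadaraya-Watson / averaging estimator: predict, for each
    unlabeled node, the mean observation over its labeled neighbors.

    adj    : dict mapping node -> iterable of neighbor nodes
    labels : dict mapping labeled node -> observed value
    Returns a dict of predictions for unlabeled nodes; nodes with no
    labeled neighbor receive no prediction (the estimator is undefined there).
    """
    preds = {}
    for v, nbrs in adj.items():
        if v in labels:
            continue  # node is already observed
        vals = [labels[u] for u in nbrs if u in labels]
        if vals:
            preds[v] = float(np.mean(vals))
    return preds

# Toy graph: node 0 is unlabeled, its neighbors 1 and 2 are labeled.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
labels = {1: 1.0, 2: 3.0}
print(averaging_estimator(adj, labels))  # {0: 2.0}
```

The paper's contribution is not the estimator itself but concentration, variance, and risk bounds for it under a latent position model.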
Related papers
- Node Regression on Latent Position Random Graphs via Local Averaging [10.962094053749093]
We study the performance of various estimators for node regression.
An alternative consists in first estimating the true distances between the latent positions, then injecting these estimated distances into a classical Nadaraya-Watson estimator.
We show that this method can achieve standard nonparametric rates in certain instances even when the graph neighborhood is too large or too small.
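The plug-in idea above can be sketched as a classical Nadaraya-Watson estimator applied to a matrix of estimated pairwise distances. The Gaussian kernel is an illustrative choice, not one prescribed by the paper.

```python
import numpy as np

def nadaraya_watson(dist, labels, bandwidth):
    """Classical Nadaraya-Watson regression driven by (estimated) pairwise
    distances between latent positions.

    dist      : (n, n) array of estimated distances
    labels    : dict mapping labeled node index -> observed value
    bandwidth : kernel bandwidth h
    Returns an (n,) array of predictions for all nodes.
    """
    idx = np.array(sorted(labels))
    y = np.array([labels[i] for i in idx])
    # Gaussian kernel weights K(d / h) of every node against the labeled nodes.
    w = np.exp(-(dist[:, idx] / bandwidth) ** 2 / 2.0)
    return w @ y / w.sum(axis=1)
```

Compared with plain neighbor averaging, the kernel weights let distant labeled nodes contribute less, which is what allows standard nonparametric rates when the raw graph neighborhood is badly sized.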
arXiv Detail & Related papers (2024-10-29T12:17:06Z)
- Joint graph learning from Gaussian observations in the presence of hidden nodes [26.133725549667734]
We propose a joint graph learning method that takes into account the presence of hidden (latent) variables.
We exploit the structure resulting from the previous considerations to propose a convex optimization problem.
We compare the proposed algorithm with different baselines and evaluate its performance over synthetic and real-world graphs.
arXiv Detail & Related papers (2022-12-04T13:03:41Z)
- A Graph Is More Than Its Nodes: Towards Structured Uncertainty-Aware Learning on Graphs [49.76175970328538]
We propose novel edgewise metrics, namely the edgewise expected calibration error (ECE) and the agree/disagree ECEs, which provide criteria for uncertainty estimation on graphs beyond the nodewise setting.
Our experiments demonstrate that the proposed edgewise metrics can complement the nodewise results and yield additional insights.
arXiv Detail & Related papers (2022-10-27T16:12:58Z)
- Graph Condensation via Receptive Field Distribution Matching [61.71711656856704]
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
arXiv Detail & Related papers (2022-04-01T10:01:37Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Learning Sparse Graphs with a Core-periphery Structure [14.112444998191698]
We propose a generative model for data associated with core-periphery structured networks.
We infer a sparse graph and nodal core scores that induce dense (sparse) connections in core (periphery) parts of the network.
arXiv Detail & Related papers (2021-10-08T10:41:30Z)
- Posterior Consistency of Semi-Supervised Regression on Graphs [14.65047105712853]
Graph-based semi-supervised regression (SSR) is the problem of estimating the value of a function on a weighted graph from its values (labels) on a small subset of the vertices.
This paper is concerned with the consistency of SSR in the context of classification, in the setting where the labels have small noise and the underlying graph weighting is consistent with well-clustered nodes.
We present a Bayesian formulation of SSR in which the weighted graph defines a Gaussian prior, using a graph Laplacian, and the labeled data defines a likelihood.
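The Bayesian formulation above (a Gaussian prior defined through the graph Laplacian, combined with a Gaussian likelihood on the labeled nodes) admits a closed-form posterior mean. The sketch below is a generic instance of that construction; the regularization parameter `tau` and noise variance `sigma2` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bayes_ssr_posterior_mean(W, labeled, y, tau=1e-2, sigma2=0.1):
    """Posterior mean of graph-based semi-supervised regression with a
    Gaussian prior whose precision is the graph Laplacian (plus tau * I
    to keep it invertible) and a Gaussian likelihood on labeled nodes.

    W       : (n, n) symmetric weight matrix of the graph
    labeled : list of labeled node indices
    y       : observations at the labeled nodes
    Returns the (n,) posterior mean over all nodes.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                # combinatorial graph Laplacian
    prior_prec = L + tau * np.eye(n)              # prior precision matrix
    S = np.zeros((len(labeled), n))
    S[np.arange(len(labeled)), labeled] = 1.0     # selects the labeled entries
    # Posterior precision = prior precision + likelihood precision.
    post_prec = prior_prec + (S.T @ S) / sigma2
    return np.linalg.solve(post_prec, S.T @ np.asarray(y) / sigma2)
```

Well-clustered graphs make the Laplacian prior concentrate on functions that are nearly constant within clusters, which is the mechanism behind the consistency result summarized above.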
arXiv Detail & Related papers (2020-07-25T00:00:19Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.