Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters
- URL: http://arxiv.org/abs/2111.03317v1
- Date: Fri, 5 Nov 2021 08:32:46 GMT
- Title: Learning on Random Balls is Sufficient for Estimating (Some) Graph Parameters
- Authors: Takanori Maehara and Hoang NT
- Abstract summary: We develop a theoretical framework for graph classification problems in the partial observation setting.
We propose a new graph classification model that works on a randomly sampled subgraph.
- Score: 28.50409304490877
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Theoretical analyses for graph learning methods often assume a complete
observation of the input graph. Such an assumption is often impractical for
handling graphs of arbitrary size due to scalability issues in practice. In this
work, we develop a theoretical framework for graph classification problems in
the partial observation setting (i.e., subgraph samplings). Equipped with
insights from graph limit theory, we propose a new graph classification model
that works on a randomly sampled subgraph and a novel topology to characterize
the representability of the model. Our framework provides a theoretical
validation of mini-batch learning on graphs and leads to new
learning-theoretic results on generalization bounds as well as
size-generalizability, without assumptions on the input.
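The model described above consumes randomly sampled subgraphs. Below is a minimal sketch of what that sampling step could look like, assuming a "ball" is the radius-r neighborhood of a uniformly sampled root vertex; the adjacency-list format, radius, and batching scheme are illustrative assumptions, not the authors' implementation:
```python
# A minimal sketch, not the authors' code: a "ball" is the radius-r
# neighborhood of a uniformly sampled root vertex. The adjacency-list
# format, radius r, and batching scheme are assumptions.
import random
from collections import deque

def sample_ball(adj, r):
    """Induced subgraph on all vertices within distance r of a random root."""
    root = random.choice(list(adj))
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # stop expanding at the ball's boundary
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {u: [v for v in adj[u] if v in dist] for u in dist}

def ball_minibatch(adj, r, batch_size):
    """One mini-batch: independently sampled balls from a single input graph."""
    return [sample_ball(adj, r) for _ in range(batch_size)]

# Example: on a 6-cycle, every radius-1 ball is a path on three vertices.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
batch = ball_minibatch(cycle, r=1, batch_size=4)
```
Since each gradient step touches only a bounded neighborhood rather than the full graph, this is the mini-batch regime that the paper's framework validates.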
Related papers
- Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis [7.309233340654514]
This paper introduces a theoretical framework that rigorously analyzes graph prompting from a data operation perspective.
We provide a formal guarantee theorem, demonstrating graph prompts' capacity to approximate graph transformation operators.
We derive upper bounds on the error with which graph prompts realize these data operations for a single graph, and extend the discussion to batches of graphs (a sketch of one prompt formulation follows this entry).
arXiv Detail & Related papers (2024-10-02T15:07:13Z)
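As a concrete illustration for the entry above, one simple prompting formulation adds a learnable vector to every node feature before a frozen GNN; this additive form is an assumption for illustration, not necessarily the operator class the paper analyzes:
```python
# Hypothetical illustration: a graph prompt as a learnable additive
# perturbation of node features; the additive form is an assumption,
# not necessarily the paper's analyzed operator class.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))  # node features of one input graph
prompt = np.zeros(8)             # learnable prompt vector, trained by SGD
X_prompted = X + prompt          # prompted graph, fed to a frozen GNN

# An error bound of the kind described above would compare the frozen
# model's output on X_prompted against a target graph transformation of X.
```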
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner with multiple graph-specific GNNs (see the sketch after this entry).
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
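A minimal sketch of the coordination pattern described in the GraphGLOW entry above: a single shared structure learner proposes an adjacency for each graph, and each graph's own GNN consumes it. The inner-product scoring and hard threshold are illustrative assumptions, not the paper's architecture:
```python
# Illustrative sketch only: one structure learner shared across graphs;
# the inner-product scoring and hard threshold are assumptions.
import numpy as np

def shared_structure_learner(X, W, threshold=0.5):
    """Propose an adjacency matrix from node features X; W is shared."""
    Z = X @ W                             # shared projection
    S = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))  # pairwise edge probabilities
    return (S > threshold).astype(float)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))  # parameters of the single shared learner
graphs = [rng.standard_normal((n, 8)) for n in (5, 7)]
adjs = [shared_structure_learner(X, W) for X in graphs]
# Each adjacency in `adjs` would feed that graph's own GNN; once trained,
# the shared learner is applied to unseen graphs without fine-tuning.
```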
- Learning node embeddings via summary graphs: a brief theoretical analysis [55.25628709267215]
Graph representation learning plays an important role in many graph mining applications, but learning embeddings of large-scale graphs remains a problem.
Recent works try to improve scalability via graph summarization -- i.e., they learn embeddings on a smaller summary graph, and then restore the node embeddings of the original graph.
We give an in-depth theoretical analysis of three specific embedding learning methods based on an introduced kernel matrix (a sketch of the restoration step follows this entry).
arXiv Detail & Related papers (2022-07-04T04:09:50Z)
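For the summary-graph entry above, the restoration step can be as simple as each original node inheriting the embedding of its supernode; this rule is an assumption for illustration, not necessarily one of the three methods the paper analyzes:
```python
# Minimal sketch, assuming the simplest restoration rule: every original
# node inherits its supernode's learned embedding.
import numpy as np

def restore_embeddings(supernode_of, summary_emb):
    """supernode_of maps node id -> supernode id; summary_emb is (S, d)."""
    emb = np.empty((len(supernode_of), summary_emb.shape[1]))
    for node, s in supernode_of.items():
        emb[node] = summary_emb[s]
    return emb

# Example: five original nodes summarized into two supernodes with d = 3.
supernode_of = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}
summary_emb = np.random.default_rng(0).standard_normal((2, 3))
node_emb = restore_embeddings(supernode_of, summary_emb)  # shape (5, 3)
```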
- Beyond spectral gap: The role of the topology in decentralized learning [58.48291921602417]
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model.
This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution.
Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies.
arXiv Detail & Related papers (2022-06-07T08:19:06Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning is still unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model to further reduce the time complexity (see the sketch after this entry).
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
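For the low-rank model in the entry above, the payoff of a rank-k factorization is that propagation never materializes the dense n-by-n structure; the factor shapes below are illustrative assumptions:
```python
# Minimal sketch: with a learned propagation matrix factored as S ~ U V^T,
# computing S @ X costs O(n k d) instead of O(n^2 d). The factorization
# itself is an assumption for illustration.
import numpy as np

def low_rank_propagate(U, V, X):
    """Compute (U V^T) X without ever forming the n-by-n matrix."""
    return U @ (V.T @ X)

rng = np.random.default_rng(0)
n, k, d = 1000, 16, 32
U, V = rng.standard_normal((n, k)), rng.standard_normal((n, k))
X = rng.standard_normal((n, d))
out = low_rank_propagate(U, V, X)  # shape (n, d)
```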
- Self-Supervised Representation Learning via Latent Graph Prediction [41.64774038444827]
Self-supervised learning (SSL) of graph neural networks is emerging as a promising way of leveraging unlabeled data.
We propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction.
Our experimental results demonstrate the superiority of LaGraph in performance and its robustness to decreasing training sample size on both graph-level and node-level tasks.
arXiv Detail & Related papers (2022-02-16T21:10:33Z)
- Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism [6.289180873978089]
Interpretable graph learning is needed, as many scientific applications depend on learning models to collect insights from graph-structured data.
Previous works mostly focused on using post-hoc approaches to interpret a pre-trained model.
We propose Graph Stochastic Attention (GSAT), an attention mechanism derived from the information bottleneck principle (a sketch of the stochastic gating follows this entry).
arXiv Detail & Related papers (2022-01-31T03:59:48Z)
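A minimal sketch of the stochastic gating in the GSAT entry above: each edge gets a relaxed Bernoulli gate sampled from learned logits, so only an information-limited subgraph passes messages. The Gumbel-sigmoid relaxation and temperature are assumptions, not the authors' exact configuration:
```python
# Illustrative sketch: per-edge stochastic attention gates via a
# Gumbel-sigmoid (binary concrete) relaxation; the temperature and
# logit values are assumptions for illustration.
import numpy as np

def sample_edge_gates(logits, temperature=0.5, rng=None):
    """Sample a relaxed Bernoulli gate in (0, 1) for each edge logit."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=logits.shape)
    logistic_noise = np.log(u) - np.log1p(-u)
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / temperature))

# Example: strongly positive logits keep an edge, negative ones drop it;
# an information-bottleneck penalty would push the gates toward sparsity.
gates = sample_edge_gates(np.array([2.0, -1.0, 0.5, -3.0]))
```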
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Non-Parametric Graph Learning for Bayesian Graph Neural Networks [35.88239188555398]
We propose a novel non-parametric graph model for constructing the posterior distribution of graph adjacency matrices.
We demonstrate the advantages of this model in three different problem settings: node classification, link prediction and recommendation.
arXiv Detail & Related papers (2020-06-23T21:10:55Z)