Hop Sampling: A Simple Regularized Graph Learning for Non-Stationary
Environments
- URL: http://arxiv.org/abs/2006.14897v2
- Date: Thu, 20 Aug 2020 05:47:55 GMT
- Title: Hop Sampling: A Simple Regularized Graph Learning for Non-Stationary
Environments
- Authors: Young-Jin Park, Kyuyong Shin, Kyung-Min Kim
- Abstract summary: Graph representation learning is gaining popularity in a wide range of applications, such as social network analysis.
Applying graph neural networks (GNNs) in a real-world application is still challenging due to non-stationary environments.
We present Hop Sampling, a straightforward regularization method that can effectively prevent GNNs from overfitting.
- Score: 12.251253742049437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning is gaining popularity in a wide range of
applications, such as social network analysis, computational biology, and
recommender systems. However, in contrast to the positive results reported in
many academic studies, applying graph neural networks (GNNs) in a real-world
application is still challenging due to non-stationary environments. The
underlying distribution of streaming data changes unexpectedly, resulting in
different graph structures (a.k.a., concept drift). Therefore, it is essential
to devise a robust graph learning technique so that the model does not overfit
to the training graphs. In this work, we present Hop Sampling, a
straightforward regularization method that can effectively prevent GNNs from
overfitting. Hop sampling randomly selects the number of propagation steps
rather than fixing it, and by doing so, it encourages the model to learn
meaningful node representations for all intermediate propagation layers and to
experience a variety of plausible graphs that are not in the training set.
In particular, we describe the use case of our method in recommender systems, a
representative example of the real-world non-stationary setting. We evaluated hop
sampling on a large-scale real-world LINE dataset and conducted an online A/B/n
test in the LINE Coupon recommender system of LINE Wallet Tab. Experimental
results demonstrate that the proposed scheme improves the prediction accuracy
of GNNs. We observed that hop sampling provides 7.97% and 16.93% improvements in
NDCG and MAP, respectively, compared to non-regularized GNN models in our online
service. Furthermore, models using hop sampling alleviate the oversmoothing issue
in GNNs, enabling deeper models as well as more diversified representations.
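A minimal sketch of the mechanism described in the abstract, assuming plain PyTorch and a dense row-normalized adjacency matrix; the class name, the per-hop linear layers, the hyperparameters, and the choice to use the maximum hop count at inference time are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn

class HopSamplingGNN(nn.Module):
    """Toy GNN that draws the number of propagation hops at random per training step."""

    def __init__(self, in_dim: int, hid_dim: int, out_dim: int, max_hops: int = 8):
        super().__init__()
        self.max_hops = max_hops
        self.encode = nn.Linear(in_dim, hid_dim)
        # One propagation layer per possible hop (hypothetical design choice).
        self.props = nn.ModuleList(nn.Linear(hid_dim, hid_dim) for _ in range(max_hops))
        self.decode = nn.Linear(hid_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) normalized adjacency.
        h = torch.relu(self.encode(x))
        if self.training:
            # Core idea from the abstract: sample the number of propagation steps
            # instead of fixing it, so every intermediate layer must yield a
            # meaningful node representation.
            k = int(torch.randint(1, self.max_hops + 1, (1,)))
        else:
            k = self.max_hops  # assumption: use the full depth at inference time
        for layer in self.props[:k]:
            h = torch.relu(layer(adj @ h))  # one hop of neighborhood aggregation
        return self.decode(h)

# Toy usage on a random graph.
if __name__ == "__main__":
    n, d = 16, 32
    adj = torch.rand(n, n)
    adj = (adj + adj.t()) / 2                  # make it symmetric
    adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize
    model = HopSamplingGNN(in_dim=d, hid_dim=64, out_dim=4)
    model.train()
    scores = model(torch.randn(n, d), adj)
    print(scores.shape)  # torch.Size([16, 4])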
Related papers
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- A Local Graph Limits Perspective on Sampling-Based GNNs [7.601210044390688]
We propose a theoretical framework for training Graph Neural Networks (GNNs) on large input graphs via training on small, fixed-size sampled subgraphs.
We prove that parameters learned from training sampling-based GNNs on small samples of a large input graph are within an $\epsilon$-neighborhood of the outcome of training the same architecture on the whole graph.
arXiv Detail & Related papers (2023-10-17T02:58:49Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- OOD-GNN: Out-of-Distribution Generalized Graph Neural Network [73.67049248445277]
Graph neural networks (GNNs) have achieved impressive performance when testing and training graph data come from the same distribution.
Existing GNNs lack out-of-distribution generalization abilities, so their performance degrades substantially when there are distribution shifts between testing and training graph data.
We propose an out-of-distribution generalized graph neural network (OOD-GNN) for achieving satisfactory performance on unseen testing graphs whose distributions differ from those of the training graphs.
arXiv Detail & Related papers (2021-12-07T16:29:10Z)
- Scalable Consistency Training for Graph Neural Networks via Self-Ensemble Self-Distillation [13.815063206114713]
We introduce a novel consistency training method to improve the accuracy of graph neural networks (GNNs).
For a target node, we generate different neighborhood expansions and distill the knowledge of the average of the resulting predictions into the GNN.
Our method approximates the expected prediction of the possible neighborhood samples and practically only requires a few samples.
arXiv Detail & Related papers (2021-10-12T19:24:42Z)
- Stable Prediction on Graphs with Agnostic Distribution Shift [105.12836224149633]
Graph neural networks (GNNs) have been shown to be effective on various graph tasks with randomly separated training and testing data.
In real applications, however, the distribution of the training graph might differ from that of the test graph.
We propose a novel stable prediction framework for GNNs, which permits both locally and globally stable learning and prediction on graphs.
arXiv Detail & Related papers (2021-10-08T02:45:47Z)
- Scalable Graph Neural Network Training: The Case for Sampling [4.9201378771958675]
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs.
Training them efficiently is challenging due to the irregular nature of graph data.
Two different approaches have emerged in the literature: whole-graph and sample-based training.
arXiv Detail & Related papers (2021-05-05T20:44:10Z)
- Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z)