GNN Applied to Ego-nets for Friend Suggestions
- URL: http://arxiv.org/abs/2412.11888v1
- Date: Mon, 16 Dec 2024 15:37:17 GMT
- Title: GNN Applied to Ego-nets for Friend Suggestions
- Authors: Evgeny Zamyatin
- Abstract summary: We introduce the Generalized Ego-network Friendship Score framework, which makes it possible to use complex supervised models without sacrificing scalability.
The underlying model takes an ego-net as input and produces a pairwise relevance matrix for its nodes.
In addition, we develop the WalkGNN model which is capable of working effectively in the social network domain.
- Score: 0.0
- License:
- Abstract: A major problem of making friend suggestions in social networks is the large size of social graphs, which can have hundreds of millions of people and tens of billions of connections. Classic methods based on heuristics or factorizations are often used to address the difficulties of scaling more complex models. However, the unsupervised nature of these methods can lead to suboptimal results. In this work, we introduce the Generalized Ego-network Friendship Score framework, which makes it possible to use complex supervised models without sacrificing scalability. The main principle of the framework is to reduce the problem of link prediction on a full graph to a series of low-scale tasks on ego-nets with subsequent aggregation of their results. Here, the underlying model takes an ego-net as input and produces a pairwise relevance matrix for its nodes. In addition, we develop the WalkGNN model which is capable of working effectively in the social network domain, where these graph-level link prediction tasks are heterogeneous, dynamic and featureless. To measure the accuracy of this model, we introduce the Ego-VK dataset that serves as an exact representation of the real-world problem that we are addressing. Offline experiments on the dataset show that our model outperforms all baseline methods, and a live A/B test demonstrates the growth of business metrics as a result of utilizing our approach.
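To make the framework's reduction concrete, here is a minimal sketch of the idea described in the abstract: run a pairwise relevance model on each ego-net and aggregate the scores across ego-nets. The function names, the networkx-based ego-net extraction, the toy common-neighbour relevance model, and the sum-aggregation are illustrative assumptions, not the paper's exact procedure; in the paper, the learned WalkGNN model would play the role of `relevance_matrix`.

```python
# Hedged sketch of an ego-net-based friendship-score pipeline (assumptions noted above).
from collections import defaultdict
from itertools import combinations

import networkx as nx
import numpy as np


def relevance_matrix(ego_net: nx.Graph, nodes: list) -> np.ndarray:
    """Toy stand-in for the ego-net model: pairwise common-neighbour counts.

    The paper's WalkGNN would replace this with learned pairwise relevance.
    """
    idx = {v: i for i, v in enumerate(nodes)}
    rel = np.zeros((len(nodes), len(nodes)))
    for u, v in combinations(nodes, 2):
        score = len(list(nx.common_neighbors(ego_net, u, v)))
        rel[idx[u], idx[v]] = rel[idx[v], idx[u]] = score
    return rel


def friend_scores(graph: nx.Graph) -> dict:
    """Aggregate per-ego-net relevance into global scores for candidate pairs."""
    totals = defaultdict(float)
    for ego in graph.nodes:
        ego_net = nx.ego_graph(graph, ego, radius=1)  # one low-scale task per ego-net
        nodes = list(ego_net.nodes)
        rel = relevance_matrix(ego_net, nodes)
        for i, u in enumerate(nodes):
            for j in range(i + 1, len(nodes)):
                v = nodes[j]
                if not graph.has_edge(u, v):
                    pair = (u, v) if u < v else (v, u)
                    totals[pair] += rel[i, j]  # sum-aggregation across ego-nets
    return totals


if __name__ == "__main__":
    g = nx.karate_club_graph()
    top = sorted(friend_scores(g).items(), key=lambda kv: -kv[1])[:5]
    print(top)  # top-5 candidate (non-)edges by aggregated ego-net relevance
```

Because the expensive pairwise model only ever sees small ego-nets, the per-task cost stays bounded regardless of the size of the full graph, which is what the abstract identifies as the source of scalability.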
Related papers
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We introduce a novel approach for learning cross-task generalities in graphs.
We propose task-trees as basic learning instances to align task spaces on graphs.
Our findings indicate that when a graph neural network is pretrained on diverse task-trees, it acquires transferable knowledge.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
- One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs [61.9759512646523]
Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns.
Existing GNNs require careful domain-specific architecture designs and training from scratch on each dataset.
We propose a novel cross-domain pretraining framework, "one model for one graph".
arXiv Detail & Related papers (2024-11-30T01:49:45Z)
- Graph as a feature: improving node classification with non-neural graph-aware logistic regression [2.952177779219163]
Graph-aware Logistic Regression (GLR) is a non-neural model designed for node classification tasks.
Unlike traditional graph algorithms that use only a fraction of the information accessible to GNNs, our proposed model simultaneously leverages both node features and the relationships between entities.
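As a rough illustration of the "graph-aware" idea in that summary: augment each node's own features with an aggregate of its neighbours' features and fit an ordinary (non-neural) logistic regression. The toy features, the mean-aggregation, and the karate-club example below are assumptions for the sketch; the paper's exact feature construction may differ.

```python
# Hedged sketch of a graph-aware logistic regression (assumptions noted above).
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression


def graph_aware_features(graph: nx.Graph, X: np.ndarray) -> np.ndarray:
    """Concatenate each node's features with the mean of its neighbours' features."""
    neigh = np.zeros_like(X)
    for v in graph.nodes:
        nbrs = list(graph.neighbors(v))
        if nbrs:
            neigh[v] = X[nbrs].mean(axis=0)
    return np.hstack([X, neigh])


if __name__ == "__main__":
    g = nx.karate_club_graph()
    X = np.array([[g.degree(v), nx.clustering(g, v)] for v in g.nodes])   # toy node features
    y = np.array([int(g.nodes[v]["club"] == "Officer") for v in g.nodes])  # toy labels
    clf = LogisticRegression(max_iter=1000).fit(graph_aware_features(g, X), y)
    print("train accuracy:", clf.score(graph_aware_features(g, X), y))
```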
arXiv Detail & Related papers (2024-11-19T08:32:14Z)
- A parameterised model for link prediction using node centrality and similarity measure based on graph embedding [5.507008181141738]
Link prediction is a key aspect of graph machine learning.
It involves predicting new links that may form between network nodes.
Existing models have significant shortcomings.
We present the Node Centrality and Similarity Based Model (NCSM), a novel method for link prediction tasks.
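A hedged sketch of that recipe: score a candidate link as a weighted mix of the endpoints' centralities and the similarity of their embeddings. The spectral embedding, the degree-centrality choice, and the alpha/beta weights below are illustrative assumptions; NCSM's actual parameterisation and graph embedding may differ.

```python
# Hedged sketch of centrality-plus-embedding-similarity link scoring (assumptions noted above).
import networkx as nx
import numpy as np


def spectral_embedding(graph: nx.Graph, dim: int = 8) -> np.ndarray:
    """Simple embedding from the top eigenvectors of the adjacency matrix."""
    A = nx.to_numpy_array(graph)
    vals, vecs = np.linalg.eigh(A)
    return vecs[:, np.argsort(-vals)[:dim]]


def link_score(emb, centrality, u, v, alpha=0.5, beta=0.5):
    """Parameterised mix of endpoint centrality and embedding cosine similarity."""
    sim = float(emb[u] @ emb[v] / (np.linalg.norm(emb[u]) * np.linalg.norm(emb[v]) + 1e-12))
    return alpha * (centrality[u] + centrality[v]) + beta * sim


if __name__ == "__main__":
    g = nx.karate_club_graph()
    emb = spectral_embedding(g)
    cent = nx.degree_centrality(g)
    candidates = [(u, v) for u in g for v in g if u < v and not g.has_edge(u, v)]
    best = max(candidates, key=lambda p: link_score(emb, cent, *p))
    print("highest-scoring candidate link:", best)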
arXiv Detail & Related papers (2023-09-11T13:13:54Z)
- Scaling Laws Do Not Scale [54.72120385955072]
Recent work has argued that as the size of a dataset increases, the performance of a model trained on that dataset will increase.
We argue that this scaling law relationship depends on metrics used to measure performance that may not correspond with how different groups of people perceive the quality of models' output.
Different communities may also have values in tension with each other, leading to difficult, potentially irreconcilable choices about metrics used for model evaluations.
arXiv Detail & Related papers (2023-07-05T15:32:21Z)
- Ordinal Graph Gamma Belief Network for Social Recommender Systems [54.9487910312535]
We develop a hierarchical Bayesian model termed ordinal graph factor analysis (OGFA), which jointly models user-item and user-user interactions.
OGFA not only achieves good recommendation performance, but also extracts interpretable latent factors corresponding to representative user preferences.
We extend OGFA to ordinal graph gamma belief network, which is a multi-stochastic-layer deep probabilistic model.
arXiv Detail & Related papers (2022-09-12T09:19:22Z)
- Graph Generative Model for Benchmarking Graph Neural Networks [73.11514658000547]
We introduce a novel graph generative model that learns and reproduces the distribution of real-world graphs in a privacy-controlled way.
Our model can successfully generate privacy-controlled, synthetic substitutes of large-scale real-world graphs that can be effectively used to benchmark GNN models.
arXiv Detail & Related papers (2022-07-10T06:42:02Z)
- A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information to address the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z)
- Enabling the Network to Surf the Internet [13.26679087834881]
We develop a framework that enables the model to surf the Internet.
We observe that the generalization ability of the learned representation is crucial for self-supervised learning.
We demonstrate the superiority of the proposed framework with experiments on miniImageNet, tieredImageNet and Omniglot.
arXiv Detail & Related papers (2021-02-24T11:00:29Z)
- Neural Stochastic Block Model & Scalable Community-Based Graph Learning [8.00785050036369]
This paper proposes a scalable community-based neural framework for graph learning.
The framework learns the graph topology through the task of community detection and link prediction.
We look into two particular applications: graph alignment and anomalous correlation detection.
arXiv Detail & Related papers (2020-05-16T03:28:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.