Synergistic Signals: Exploiting Co-Engagement and Semantic Links via
Graph Neural Networks
- URL: http://arxiv.org/abs/2312.04071v1
- Date: Thu, 7 Dec 2023 06:29:26 GMT
- Title: Synergistic Signals: Exploiting Co-Engagement and Semantic Links via
Graph Neural Networks
- Authors: Zijie Huang, Baolin Li, Hafez Asgharzadeh, Anne Cocos, Lingyi Liu,
Evan Cox, Colby Wise, Sudarshan Lamkhede
- Abstract summary: We study this problem in the context of recommender systems at Netflix.
We propose a novel graph-based approach called SemanticGNN.
Our key technical contributions are twofold: (1) we develop a novel relation-aware attention graph neural network (GNN) to handle the imbalanced distribution of relation types in our graph; (2) to handle web-scale graph data that has millions of nodes and billions of edges, we develop a novel distributed graph training paradigm.
- Score: 4.261438296177923
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given a set of candidate entities (e.g. movie titles), the ability to
identify similar entities is a core capability of many recommender systems.
Most often this is achieved by collaborative filtering approaches, i.e. if
users co-engage with a pair of entities frequently enough, the embeddings
should be similar. However, relying on co-engagement data alone can result in
lower-quality embeddings for new and unpopular entities. We study this problem
in the context of recommender systems at Netflix. We observe that there is
abundant semantic information such as genre, content maturity level, themes,
etc. that complements co-engagement signals and provides interpretability in
similarity models. To learn entity similarities from both data sources
holistically, we propose a novel graph-based approach called SemanticGNN.
SemanticGNN models entities, semantic concepts, collaborative edges, and
semantic edges within a large-scale knowledge graph and conducts representation
learning over it. Our key technical contributions are twofold: (1) we develop a
novel relation-aware attention graph neural network (GNN) to handle the
imbalanced distribution of relation types in our graph; (2) to handle web-scale
graph data that has millions of nodes and billions of edges, we develop a novel
distributed graph training paradigm. The proposed model is successfully
deployed within Netflix and empirical experiments indicate it yields up to 35%
improvement in performance on similarity judgment tasks.
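The relation-aware attention idea can be sketched in miniature: score each neighbor by its similarity to the target node, scale the score by a learnable per-relation weight so that rare relation types (e.g. semantic edges) are not drowned out by abundant ones (e.g. co-engagement edges), and aggregate with the resulting attention distribution. This is an illustrative pure-Python sketch, not the paper's actual architecture; the `rel_weight` dictionary and function names are assumptions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def relation_aware_attention(h_self, neighbors, rel_weight):
    """One aggregation step for a single node.

    Each neighbor is an (embedding, relation_type) pair. The attention
    score is the dot product with the target node's embedding, rescaled
    by a per-relation weight (learned in practice) to compensate for the
    imbalanced distribution of relation types.
    """
    scores = [
        rel_weight[rel] * sum(a * b for a, b in zip(h_self, h_n))
        for h_n, rel in neighbors
    ]
    alphas = softmax(scores)
    dim = len(h_self)
    out = [0.0] * dim
    for alpha, (h_n, _) in zip(alphas, neighbors):
        for i in range(dim):
            out[i] += alpha * h_n[i]
    return out
```

Because the attention weights sum to one, the output stays a convex combination of neighbor embeddings regardless of how skewed the relation-type counts are.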
Related papers
- Federated Graph Semantic and Structural Learning [54.97668931176513]
This paper reveals that local client distortion is brought by both node-level semantics and graph-level structure.
We postulate that a well-structural graph neural network possesses similarity for neighbors due to the inherent adjacency relationships.
We transform the adjacency relationships into the similarity distribution and leverage the global model to distill the relation knowledge into the local model.
arXiv Detail & Related papers (2024-06-27T07:08:28Z)
- On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z)
- Redundancy-Free Self-Supervised Relational Learning for Graph Clustering [13.176413653235311]
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R$^2$FGC).
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Our experiments are performed on widely used benchmark datasets to validate the superiority of our R$^2$FGC over state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-09T06:18:50Z)
- ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP.
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
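The CLIP-inspired batch-wise objective described above can be illustrated with a symmetric InfoNCE loss: paired (text, node) embeddings within a batch are positives, and every other in-batch pairing acts as a negative. A minimal pure-Python sketch, assuming a cosine-similarity score and a fixed temperature (both are illustrative choices, not ConGraT's exact implementation):

```python
import math

def info_nce(text_embs, node_embs, temperature=0.1):
    """Symmetric batch-wise contrastive loss.

    text_embs[i] and node_embs[i] are treated as a positive pair; all
    other in-batch pairings serve as negatives. Returns the average of
    the text-to-node and node-to-text cross-entropy losses.
    """
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    T = [normalize(t) for t in text_embs]
    N = [normalize(g) for g in node_embs]
    # Pairwise cosine similarities, scaled by temperature.
    sims = [[sum(a * b for a, b in zip(t, g)) / temperature for g in N]
            for t in T]

    def cross_entropy(row, target):
        m = max(row)
        log_sum = m + math.log(sum(math.exp(x - m) for x in row))
        return log_sum - row[target]

    k = len(T)
    loss_text = sum(cross_entropy(sims[i], i) for i in range(k)) / k
    cols = [[sims[i][j] for i in range(k)] for j in range(k)]
    loss_node = sum(cross_entropy(cols[j], j) for j in range(k)) / k
    return 0.5 * (loss_text + loss_node)
```

When the paired embeddings already align, the diagonal similarities dominate and the loss approaches zero; shuffling the pairing drives it up.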
arXiv Detail & Related papers (2023-05-23T17:53:30Z)
- TGNN: A Joint Semi-supervised Framework for Graph-level Classification [34.300070497510276]
We propose a novel semi-supervised framework called Twin Graph Neural Network (TGNN).
To explore graph structural information from complementary views, our TGNN has a message passing module and a graph kernel module.
We evaluate our TGNN on various public datasets and show that it achieves strong performance.
arXiv Detail & Related papers (2023-04-23T15:42:11Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Graph Neural Network with Curriculum Learning for Imbalanced Node Classification [21.085314408929058]
Graph Neural Network (GNN) is an emerging technique for graph-based learning tasks such as node classification.
In this work, we reveal the vulnerability of GNN to the imbalance of node labels.
We propose a novel graph neural network framework with curriculum learning (GNN-CL) consisting of two modules.
arXiv Detail & Related papers (2022-02-05T10:46:11Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relation at a fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)
- A Unifying Generative Model for Graph Learning Algorithms: Label Propagation, Graph Convolutions, and Combinations [39.8498896531672]
Semi-supervised learning on graphs is a widely applicable problem in network science and machine learning.
We develop a Markov random field model for the data generation process of node attributes.
We show that label propagation, a linearized graph convolutional network, and their combination can all be derived as conditional expectations.
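Label propagation, one of the algorithms this generative model unifies, is simple enough to sketch: unlabeled nodes repeatedly take the average of their neighbors' label scores while labeled nodes stay clamped to their observed values. A toy pure-Python version (illustrative only, not the paper's formulation):

```python
def label_propagation(adj, labels, known, steps=50):
    """Propagate label scores over a graph by iterated averaging.

    adj:    adjacency list, adj[i] = list of neighbors of node i
    labels: initial label scores in [0, 1] (arbitrary for unknown nodes)
    known:  set of node indices whose labels are observed (clamped)
    """
    y = list(labels)
    for _ in range(steps):
        new = []
        for i, nbrs in enumerate(adj):
            if i in known:
                # Observed labels never change.
                new.append(labels[i])
            else:
                # Unlabeled nodes take the mean of their neighbors.
                new.append(sum(y[j] for j in nbrs) / len(nbrs))
        y = new
    return y
```

On a three-node path with the endpoints labeled 1 and 0, the middle node converges to 0.5, which matches the conditional-expectation view: the unlabeled node's score is its expected label given its neighbors.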
arXiv Detail & Related papers (2021-01-19T17:07:08Z)
- Exploiting Heterogeneous Graph Neural Networks with Latent Worker/Task Correlation Information for Label Aggregation in Crowdsourcing [72.34616482076572]
Crowdsourcing has attracted much attention for its convenience to collect labels from non-expert workers instead of experts.
We propose a novel framework based on graph neural networks for aggregating crowd labels.
arXiv Detail & Related papers (2020-10-25T10:12:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.