Lightweight Compositional Embeddings for Incremental Streaming
Recommendation
- URL: http://arxiv.org/abs/2202.02427v1
- Date: Fri, 4 Feb 2022 23:05:14 GMT
- Title: Lightweight Compositional Embeddings for Incremental Streaming
Recommendation
- Authors: Mengyue Hang, Tobias Schnabel, Longqi Yang, Jennifer Neville
- Abstract summary: We propose a graph-based recommendation model that supports incremental updates under low computational cost.
Lightweight Compositional Embedding (LCE) learns explicit embeddings for only a subset of nodes and represents the other nodes implicitly.
LCE achieves nearly skyline performance with significantly fewer parameters than alternative graph-based models.
- Score: 21.857949766385186
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most work in graph-based recommender systems considers a static setting
where all information about test nodes (i.e., users and items) is available
upfront at training time. However, this static setting makes little sense for
many real-world applications where data comes in continuously as a stream of
new edges and nodes, and one has to update model predictions incrementally to
reflect the latest state. To fully capitalize on the newly available data in
the stream, recent graph-based recommendation models would need to be
repeatedly retrained, which is infeasible in practice.
In this paper, we study the graph-based streaming recommendation setting and
propose a compositional recommendation model -- Lightweight Compositional
Embedding (LCE) -- that supports incremental updates under low computational
cost. Instead of learning explicit embeddings for the full set of nodes, LCE
learns explicit embeddings for only a subset of nodes and represents the other
nodes implicitly, through a composition function based on their
interactions in the graph. This provides an effective, yet efficient, means to
leverage streaming graph data when one node type (e.g., items) is more amenable
to static representation. We conduct an extensive empirical study to compare
LCE to a set of competitive baselines on three large-scale user-item
recommendation datasets with interactions under a streaming setting. The
results demonstrate the superior performance of LCE, showing that it achieves
nearly skyline performance with significantly fewer parameters than alternative
graph-based models.
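The composition idea from the abstract can be illustrated with a minimal sketch. All names, dimensions, and the mean-pooling choice below are illustrative assumptions, not the paper's actual architecture: items (the node type amenable to static representation) keep explicit learned embeddings, while each user is represented implicitly by pooling the items they interacted with, so a new interaction in the stream refreshes the user's representation without any retraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 1000 items, embedding dimension 32.
n_items, dim = 1000, 32
item_emb = rng.normal(size=(n_items, dim)).astype(np.float32)  # explicit, learned offline


def compose_user(interacted_items, item_emb):
    """Implicit user embedding: mean-pool the embeddings of the items the
    user interacted with (one simple choice of composition function)."""
    if len(interacted_items) == 0:
        return np.zeros(item_emb.shape[1], dtype=np.float32)
    return item_emb[interacted_items].mean(axis=0)


# Streaming update: a new edge just extends the user's interaction history;
# recomputing the composition costs one pooling op, not a retraining pass.
history = [3, 17, 256]
u = compose_user(history, item_emb)
history.append(42)                    # new interaction arrives in the stream
u = compose_user(history, item_emb)   # representation reflects the latest state

# Score all items against the composed user embedding.
scores = item_emb @ u
top5 = np.argsort(-scores)[:5]
```

The key property this sketch captures is that only the explicit (item) embeddings are parameters; implicit (user) representations are derived on the fly, which is what keeps the parameter count low and incremental updates cheap.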
Related papers
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present a code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z) - From random-walks to graph-sprints: a low-latency node embedding
framework on continuous-time dynamic graphs [4.372841335228306]
We propose a framework for continuous-time-dynamic-graphs (CTDGs) that has low latency and is competitive with state-of-the-art, higher latency models.
In our framework, time-aware node embeddings summarizing multi-hop information are computed using only single-hop operations on the incoming edges.
We demonstrate that our graph-sprints features, combined with a machine learning classifier, achieve competitive performance.
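A minimal sketch of the single-hop streaming idea described in this blurb follows. The blending rule, the `ALPHA` constant, and the fixed feature dimension are illustrative assumptions (the actual graph-sprints features are time-aware and richer): because a node's state already aggregates its own neighborhood, blending it into the destination on each incoming edge lets multi-hop information propagate through single-hop operations only.

```python
from collections import defaultdict

DIM = 4      # illustrative feature dimension
ALPHA = 0.1  # assumed blending weight

# Per-node running state; updated only on edge arrival, never by graph traversal.
state = defaultdict(lambda: [0.0] * DIM)


def on_edge(src, dst, edge_feat):
    """Single-hop update for an incoming edge (src -> dst): blend the
    destination's state with the source's state plus edge features."""
    s, d = state[src], state[dst]
    state[dst] = [(1 - ALPHA) * d[i] + ALPHA * (s[i] + edge_feat[i])
                  for i in range(DIM)]


# A tiny stream: 0 -> 1, then 1 -> 2. After both updates, node 2's state
# carries a trace of node 0, i.e., two-hop information via single-hop ops.
for src, dst, feat in [(0, 1, [1, 0, 0, 0]), (1, 2, [0, 1, 0, 0])]:
    on_edge(src, dst, feat)
```

The low latency comes from the update touching only the two endpoints of each arriving edge, independent of graph size.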
arXiv Detail & Related papers (2023-07-17T12:25:52Z) - Instant Representation Learning for Recommendation over Large Dynamic
Graphs [29.41179019520622]
We propose SUPA, a novel graph neural network for dynamic multiplex heterogeneous graphs.
For each new edge, SUPA samples an influenced subgraph, updates the representations of the two interactive nodes, and propagates the interaction information to the sampled subgraph.
To train SUPA incrementally online, we propose InsLearn, an efficient workflow for single-pass training of large dynamic graphs.
arXiv Detail & Related papers (2023-05-22T15:36:10Z) - LightGCL: Simple Yet Effective Graph Contrastive Learning for
Recommendation [9.181689366185038]
Graph neural network (GNN) is a powerful learning approach for graph-based recommender systems.
In this paper, we propose a simple yet effective graph contrastive learning paradigm LightGCL.
arXiv Detail & Related papers (2023-02-16T10:16:21Z) - A Robust Stacking Framework for Training Deep Graph Models with
Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the best models for IID (non-graph) data in standard supervised learning settings are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Privileged Graph Distillation for Cold Start Recommendation [57.918041397089254]
The cold start problem in recommender systems requires recommending to new users (items) based on attributes without any historical interaction records.
We propose a privileged graph distillation model (PGD).
Our proposed model is generally applicable to different cold start scenarios with new user, new item, or new user-new item.
arXiv Detail & Related papers (2021-05-31T14:05:27Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by
Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z) - Revisiting Graph based Collaborative Filtering: A Linear Residual Graph
Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) Recommender Systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.