Towards Graph Foundation Models for Personalization
- URL: http://arxiv.org/abs/2403.07478v1
- Date: Tue, 12 Mar 2024 10:12:59 GMT
- Title: Towards Graph Foundation Models for Personalization
- Authors: Andreas Damianou, Francesco Fabbri, Paul Gigioli, Marco De Nadai,
Alice Wang, Enrico Palumbo, Mounia Lalmas
- Abstract summary: We present a graph-based foundation modeling approach tailored to personalization.
Our approach has been rigorously tested and proven effective in delivering recommendations across a diverse array of products.
- Score: 9.405827216171629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of personalization, integrating diverse information sources such
as consumption signals and content-based representations is becoming
increasingly critical to build state-of-the-art solutions. In this regard, two
of the biggest trends in research around this subject are Graph Neural Networks
(GNNs) and Foundation Models (FMs). While GNNs emerged as a popular solution in
industry for powering personalization at scale, FMs have only recently
attracted attention for their promising performance in personalization tasks
like ranking
and retrieval. In this paper, we present a graph-based foundation modeling
approach tailored to personalization. Central to this approach is a
Heterogeneous GNN (HGNN) designed to capture multi-hop content and consumption
relationships across a range of recommendable item types. To ensure the
generality required from a Foundation Model, we employ a Large Language Model
(LLM) text-based featurization of nodes that accommodates all item types, and
construct the graph using co-interaction signals, which inherently transcend
content specificity. To facilitate practical generalization, we further couple
the HGNN with an adaptation mechanism based on a two-tower (2T) architecture,
which also operates agnostically to content type. This multi-stage approach
ensures high scalability: the HGNN produces general-purpose embeddings, while
the 2T component models the vast volume of user-item interaction data in a
continuous space. Our comprehensive approach has been rigorously tested and
proven effective in delivering recommendations across a diverse array of
products within a real-world, industrial audio streaming platform.
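The abstract describes a multi-stage architecture: LLM text embeddings featurize nodes of every item type, a Heterogeneous GNN (HGNN) aggregates them over a co-interaction graph, and a two-tower (2T) head adapts the resulting embeddings for user-item retrieval. Below is a minimal, illustrative PyTorch sketch of that pipeline; the class names, the simple mean-aggregation rule, the example node and relation types ("track", "podcast", "co_listened"), and all dimensions are assumptions made for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleHGNNLayer(nn.Module):
    """One relation-aware mean-aggregation step over a heterogeneous
    co-interaction graph whose nodes carry LLM-derived text embeddings."""

    def __init__(self, dim, relations):
        super().__init__()
        self.rel_proj = nn.ModuleDict({r: nn.Linear(dim, dim) for r in relations})
        self.self_proj = nn.Linear(dim, dim)

    def forward(self, x, edges):
        # x: node_type -> [num_nodes, dim] LLM text features for that type
        # edges: relation -> (src_type, dst_type, [2, num_edges] LongTensor)
        out = {t: self.self_proj(h) for t, h in x.items()}
        for rel, (src_t, dst_t, idx) in edges.items():
            msg = self.rel_proj[rel](x[src_t])[idx[0]]             # messages from source nodes
            agg = torch.zeros_like(x[dst_t]).index_add_(0, idx[1], msg)
            deg = torch.zeros(x[dst_t].size(0), 1).index_add_(
                0, idx[1], torch.ones(idx.size(1), 1))
            out[dst_t] = out[dst_t] + agg / deg.clamp(min=1.0)     # mean over neighbours
        return {t: F.relu(h) for t, h in out.items()}

class TwoTowerHead(nn.Module):
    """Adaptation stage: user and item towers scoring frozen HGNN embeddings."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        self.user_tower = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, hidden))
        self.item_tower = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, hidden))

    def forward(self, user_emb, item_emb):
        u = F.normalize(self.user_tower(user_emb), dim=-1)
        v = F.normalize(self.item_tower(item_emb), dim=-1)
        return u @ v.T                                             # in-batch similarity logits

# Tiny end-to-end example with random stand-in features.
dim = 64
x = {"track": torch.randn(10, dim), "podcast": torch.randn(5, dim)}
edges = {"co_listened": ("track", "podcast",
                         torch.stack([torch.randint(0, 10, (20,)),
                                      torch.randint(0, 5, (20,))]))}
node_emb = SimpleHGNNLayer(dim, ["co_listened"])(x, edges)
scores = TwoTowerHead(dim)(torch.randn(8, dim), node_emb["track"])  # [8 users, 10 tracks]

The split mirrors the abstract's scalability argument: the HGNN stage produces general-purpose embeddings shared across item types, while the lightweight two-tower head absorbs the high-volume user-item interaction data.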
Related papers
- Language Models are Graph Learners [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- FedSheafHN: Personalized Federated Learning on Graph-structured Data [22.825083541211168]
We propose a model called FedSheafHN, which embeds each client's local subgraph into a server-constructed collaboration graph.
Our model improves the integration and interpretation of complex client characteristics.
It also converges quickly and generalizes effectively to new clients.
arXiv Detail & Related papers (2024-05-25T04:51:41Z)
- An Interpretable Ensemble of Graph and Language Models for Improving Search Relevance in E-Commerce [22.449320058423886]
We propose Plug and Play Graph LAnguage Model (PP-GLAM), an explainable ensemble of plug and play models.
Our approach uses a modular framework with uniform data processing pipelines.
We show that PP-GLAM outperforms several state-of-the-art baselines and a proprietary model on real-world multilingual, multi-regional e-commerce datasets.
arXiv Detail & Related papers (2024-03-01T19:08:25Z)
- APGL4SR: A Generic Framework with Adaptive and Personalized Global Collaborative Information in Sequential Recommendation [86.29366168836141]
We propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR).
APGL4SR incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
As a generic framework, APGL4SR can outperform other baselines by significant margins.
arXiv Detail & Related papers (2023-11-06T01:33:24Z)
- Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication [100.51884192970499]
GNNs are a powerful family of neural networks for learning over graphs.
Scaling GNNs either by deepening or widening suffers from prevalent issues of unhealthy gradients, over-smoothing, and information squashing.
We propose not to deepen or widen current GNNs, but instead present a data-centric perspective of model soups tailored for GNNs.
arXiv Detail & Related papers (2023-06-18T03:33:46Z)
- Personalized Federated Domain Adaptation for Item-to-Item Recommendation [11.65452674504235]
Item-to-Item (I2I) recommendation is an important function in most recommendation systems.
We propose and investigate a personalized federated modeling framework based on graph neural networks (GNNs).
Our key contribution is a personalized graph adaptation model that bridges the gap between recent literature on federated GNNs and (non-graph) personalized federated learning.
arXiv Detail & Related papers (2023-06-05T19:06:18Z)
- Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval [152.3504607706575]
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M datasets, and define two real practical instance-level retrieval tasks.
We train a more effective cross-modal model that can adaptively incorporate key concept information from the multi-modal data.
arXiv Detail & Related papers (2022-06-17T15:40:45Z)
- Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks [127.32203532517953]
We develop a vanilla 1-bit framework that binarizes both the GNN parameters and the graph features.
Despite the lightweight architecture, we observed that this vanilla framework suffered from insufficient discriminative power in distinguishing graph topologies.
This discovery motivates us to devise meta aggregators to improve the expressive power of vanilla binarized GNNs.
arXiv Detail & Related papers (2021-09-27T08:50:37Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Principal Neighbourhood Aggregation for Graph Nets [4.339839287869653]
Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data.
Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces.
We extend this theoretical framework to include continuous features, which occur regularly in real-world input domains. A minimal sketch of the multi-aggregator idea appears after this entry.
arXiv Detail & Related papers (2020-04-12T23:30:00Z)
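The Principal Neighbourhood Aggregation entry above combines several neighbourhood aggregators (mean, max, min, standard deviation) with degree-based scalers. The sketch below illustrates that multi-aggregator idea in plain PyTorch under simplifying assumptions: neighbours are pre-sampled into a dense tensor, true degrees are passed separately, and the normaliser delta is treated as a plain hyperparameter. It is an illustration of the general technique, not the paper's exact formulation.

import torch
import torch.nn as nn

class PNALikeAggregation(nn.Module):
    def __init__(self, dim, delta=1.0):
        super().__init__()
        self.delta = delta                       # e.g. mean log-degree of the training graph
        self.post = nn.Linear(4 * 3 * dim, dim)  # 4 aggregators x 3 scalers, projected back

    def forward(self, neigh, degree):
        # neigh: [N, K, dim] features of K sampled neighbours per node
        # degree: [N] true neighbourhood sizes, used only by the scalers
        mean = neigh.mean(dim=1)
        mx = neigh.max(dim=1).values
        mn = neigh.min(dim=1).values
        std = neigh.std(dim=1)
        aggs = torch.cat([mean, mx, mn, std], dim=-1)               # [N, 4*dim]

        d = degree.clamp(min=1).float().unsqueeze(-1)               # [N, 1]
        amp = torch.log(d + 1) / self.delta                         # amplification scaler
        att = self.delta / torch.log(d + 1)                         # attenuation scaler
        scaled = torch.cat([aggs, aggs * amp, aggs * att], dim=-1)  # [N, 12*dim]
        return self.post(scaled)                                    # back to [N, dim]

# Example: 100 nodes, 8 sampled neighbours each, 32-dimensional features.
out = PNALikeAggregation(dim=32)(torch.randn(100, 8, 32), torch.randint(1, 50, (100,)))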
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.