Coarse-to-Fine Lightweight Meta-Embedding for ID-Based Recommendation
- URL: http://arxiv.org/abs/2501.11870v1
- Date: Tue, 21 Jan 2025 03:56:23 GMT
- Title: Coarse-to-Fine Lightweight Meta-Embedding for ID-Based Recommendation
- Authors: Yang Wang, Haipeng Liu, Zeqian Yi, Biao Qian, Meng Wang
- Abstract summary: We develop a novel graph neural network (GNN)-based recommender in which each user and item serves as a node.
In contrast to coarse-grained semantics, fine-grained semantics are well captured through sparse meta-embeddings.
We propose a weight-bridging update strategy that matches each coarse-grained meta-embedding with several fine-grained meta-embeddings based on the users'/items' semantics.
- Score: 13.732081010190962
- License:
- Abstract: State-of-the-art recommendation systems have shifted attention to efficient recommendation, e.g., on-device recommendation under memory constraints. To this end, existing methods either focus on lightweight embeddings for both users and items, or build on-device systems that reuse compact embeddings to enhance reusability and reduce space complexity. However, they consider only the coarse granularity of embeddings and overlook fine-grained semantic nuances, which degrades the ability of meta-embeddings to capture the intricate relationships between users and items and consequently yields suboptimal recommendations. In this paper, we study how meta-embeddings can efficiently learn semantics at varied granularities, and how fine-grained meta-embeddings can strengthen the representation of coarse-grained ones. To answer these questions, we develop a novel graph neural network (GNN)-based recommender in which each user and item serves as a node, linked directly to coarse-grained virtual nodes and indirectly to fine-grained virtual nodes, ensuring that semantics are learned at different granularities. This design reveals that: 1) in contrast to coarse-grained semantics, fine-grained semantics are well captured through sparse meta-embeddings, which 2) adaptively balance embedding uniqueness against the memory constraint. Additionally, the initialization method builds upon SparsePCA, together with a soft-thresholding activation function that renders the meta-embeddings sparse. We further propose a weight-bridging update strategy that matches each coarse-grained meta-embedding with several fine-grained meta-embeddings according to the users'/items' semantics. Extensive experiments substantiate our method's superiority over existing baselines. Our code is available at https://github.com/htyjers/C2F-MetaEmbed.
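The abstract names three concrete ingredients: sparse fine-grained meta-embeddings, a SparsePCA-based initialization with a soft-thresholding activation, and a weight-bridging update that matches coarse-grained meta-embeddings to fine-grained ones. The sketch below shows one way these pieces could fit together in PyTorch; it is an illustration under assumptions, not the authors' implementation (see the linked repository for that), and the class name, the cosine-similarity matching, and the tau, top_k, and alpha values are all hypothetical.

```python
# Minimal, self-contained sketch (assumptions, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.decomposition import SparsePCA


def soft_threshold(x: torch.Tensor, tau: float) -> torch.Tensor:
    # Soft thresholding shrinks entries toward zero and zeroes out |x| <= tau,
    # which is what keeps the fine-grained meta-embeddings sparse.
    return torch.sign(x) * torch.clamp(torch.abs(x) - tau, min=0.0)


class SparseMetaEmbedding(nn.Module):
    """A table of fine-grained meta-embeddings made sparse at lookup time."""

    def __init__(self, num_meta: int, dim: int, tau: float = 0.01):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(num_meta, dim))
        self.tau = tau

    @torch.no_grad()
    def init_from_sparsepca(self, pretrained: torch.Tensor) -> None:
        # Illustrative initialization: use SparsePCA components of a
        # (num_entities x dim) pretrained user/item embedding matrix.
        spca = SparsePCA(n_components=self.weight.shape[0], random_state=0)
        spca.fit(pretrained.cpu().numpy())
        comps = torch.as_tensor(spca.components_, dtype=self.weight.dtype)
        self.weight.copy_(comps)

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        return soft_threshold(self.weight[idx], self.tau)


def weight_bridge(coarse: torch.Tensor, fine: torch.Tensor,
                  top_k: int = 4, alpha: float = 0.1) -> torch.Tensor:
    # One plausible reading of the weight-bridging update: refine each
    # coarse-grained meta-embedding (C x d) with a softmax-weighted average of
    # its top-k most similar fine-grained meta-embeddings (F x d).
    sim = F.normalize(coarse, dim=-1) @ F.normalize(fine, dim=-1).T  # (C, F)
    scores, idx = sim.topk(top_k, dim=-1)                            # (C, k)
    weights = F.softmax(scores, dim=-1).unsqueeze(-1)                # (C, k, 1)
    matched = (weights * fine[idx]).sum(dim=1)                       # (C, d)
    return (1.0 - alpha) * coarse + alpha * matched


if __name__ == "__main__":
    pretrained = torch.randn(1000, 64)          # 1000 pretrained user/item embeddings
    fine_table = SparseMetaEmbedding(num_meta=32, dim=64)
    fine_table.init_from_sparsepca(pretrained)

    fine = fine_table(torch.arange(32))         # sparse fine-grained meta-embeddings
    coarse = torch.randn(8, 64)                 # coarse-grained meta-embeddings
    print(weight_bridge(coarse, fine).shape)    # torch.Size([8, 64])
```

Soft thresholding produces exact zeros rather than merely small values, which is what lets the fine-grained meta-embeddings stay sparse under a memory budget; the SparsePCA initialization simply starts the table from components that are already sparse.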
Related papers
- Graph-Sequential Alignment and Uniformity: Toward Enhanced Recommendation Systems [51.716704243764994]
Our framework uses Graph Neural Network (GNN)-based and sequential recommenders as separate submodules while sharing a unified embedding space optimized jointly.
Experiments on three real-world datasets demonstrate that the proposed method significantly outperforms using either approach alone.
arXiv Detail & Related papers (2024-12-05T15:59:05Z)
- SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking [89.43370214059955]
Open-vocabulary Multiple Object Tracking (MOT) aims to generalize trackers to novel categories not in the training set.
We present a unified framework that jointly considers semantics, location, and appearance priors in the early steps of association.
Our method eliminates complex post-processings for fusing different cues and boosts the association performance significantly for large-scale open-vocabulary tracking.
arXiv Detail & Related papers (2024-09-17T14:36:58Z)
- Unleash LLMs Potential for Recommendation by Coordinating Twin-Tower Dynamic Semantic Token Generator [60.07198935747619]
We propose the Twin-Tower Dynamic Semantic Recommender (TTDS), the first generative RS to adopt the dynamic semantic index paradigm.
More specifically, we are the first to devise a dynamic knowledge fusion framework that integrates a twin-tower semantic token generator into the LLM-based recommender.
The proposed TTDS recommender achieves an average improvement of 19.41% in Hit-Rate and 20.84% in NDCG, compared with the leading baseline methods.
arXiv Detail & Related papers (2024-09-14T01:45:04Z)
- Improving Generalization in Meta-Learning via Meta-Gradient Augmentation [42.48021701246389]
We propose a data-independent Meta-Gradient Augmentation (MGAug) method to alleviate overfitting in meta-learning.
The proposed MGAug is theoretically guaranteed by the generalization bound from the PAC-Bayes framework.
arXiv Detail & Related papers (2023-06-14T12:04:28Z)
- Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings [15.900069711477542]
We show that weighted concatenation can be seen as a spectrum matching operation between each source embedding and the meta-embedding.
We propose two unsupervised methods to learn the optimal concatenation weights for creating meta-embeddings (a small illustrative sketch follows this entry).
arXiv Detail & Related papers (2022-04-26T15:41:06Z)
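As a quick illustration of the weighted-concatenation idea summarized in the entry above, here is a hedged sketch of the general mechanism (not that paper's specific method; the weights and dimensions below are made up):

```python
# Hedged sketch of weighted-concatenation meta-embedding (illustrative only).
import numpy as np

def weighted_concat(sources: list[np.ndarray], weights: list[float]) -> np.ndarray:
    # sources: (vocab, d_i) source embedding matrices sharing the same row order;
    # weights: one non-negative concatenation weight per source.
    return np.concatenate([w * s for w, s in zip(weights, sources)], axis=1)

# Two source embeddings for a 5-word vocabulary, concatenated into a (5, 150) meta-embedding.
e1, e2 = np.random.randn(5, 50), np.random.randn(5, 100)
meta = weighted_concat([e1, e2], [0.7, 1.3])
```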
- Diverse Preference Augmentation with Multiple Domains for Cold-start Recommendations [92.47380209981348]
We propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning.
We generate diverse ratings in a new domain of interest to mitigate overfitting in the case of sparse interactions.
These ratings are introduced into the meta-training procedure to learn a preference meta-learner, which produces good generalization ability.
arXiv Detail & Related papers (2022-04-01T10:10:50Z)
- Index $t$-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings [1.7188280334580195]
This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved.
The proposed algorithm has the same complexity as the original $t$-SNE to embed new items, and a lower one when considering the embedding of a dataset sliced into sub-pieces.
arXiv Detail & Related papers (2021-09-22T06:45:37Z)
- Hyper Meta-Path Contrastive Learning for Multi-Behavior Recommendation [61.114580368455236]
User purchasing prediction with multi-behavior information remains a challenging problem for current recommendation systems.
We propose the concept of hyper meta-path to construct hyper meta-paths or hyper meta-graphs to explicitly illustrate the dependencies among different behaviors of a user.
Thanks to the recent success of graph contrastive learning, we leverage it to learn embeddings of user behavior patterns adaptively instead of assigning a fixed scheme to understand the dependencies among different behaviors.
arXiv Detail & Related papers (2021-09-07T04:28:09Z)
- mSHINE: A Multiple-meta-paths Simultaneous Learning Framework for Heterogeneous Information Network Embedding [15.400191040779376]
Heterogeneous information networks (HINs) are used to model objects with abundant information using explicit network structure.
Traditional network embedding algorithms are sub-optimal in capturing the rich yet potentially incompatible semantics provided by HINs.
mSHINE is designed to simultaneously learn multiple node representations for different meta-paths.
arXiv Detail & Related papers (2021-04-06T11:35:56Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- ExchNet: A Unified Hashing Network for Large-Scale Fine-Grained Image Retrieval [43.41089241581596]
We study the novel fine-grained hashing topic to generate compact binary codes for fine-grained images.
We propose a unified end-to-end trainable network, termed as ExchNet.
Our proposal consistently outperforms state-of-the-art generic hashing methods on five fine-grained datasets.
arXiv Detail & Related papers (2020-08-04T07:01:32Z)