Learning Vector-Quantized Item Representation for Transferable
Sequential Recommenders
- URL: http://arxiv.org/abs/2210.12316v1
- Date: Sat, 22 Oct 2022 00:43:14 GMT
- Title: Learning Vector-Quantized Item Representation for Transferable
Sequential Recommenders
- Authors: Yupeng Hou, Zhankui He, Julian McAuley, Wayne Xin Zhao
- Abstract summary: VQ-Rec is a novel approach to learning Vector-Quantized item representations for transferable sequential Recommenders.
We propose an enhanced contrastive pre-training approach, using semi-synthetic and mixed-domain code representations as hard negatives.
- Score: 33.406897794088515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the generality of natural language text has been leveraged to
develop transferable recommender systems. The basic idea is to employ a
pre-trained language model (PLM) to encode item text into item representations.
Despite the promising transferability, the binding between item text and item
representations might be too tight, leading to potential problems such as
over-emphasizing text similarity and exaggerating domain gaps. To address this
issue, this paper proposes VQ-Rec, a novel approach to learning
Vector-Quantized item representations for transferable sequential Recommenders.
The major novelty of our approach lies in the new item representation scheme:
it first maps item text into a vector of discrete indices (called item code),
and then employs these indices to look up the code embedding table for deriving
item representations. Such a scheme can be denoted as "text -> code ->
representation". Based on this representation scheme, we further propose an
enhanced contrastive pre-training approach, using semi-synthetic and
mixed-domain code representations as hard negatives. Furthermore, we design a
new cross-domain fine-tuning method based on a differentiable permutation-based
network. Extensive experiments conducted on six public benchmarks demonstrate
the effectiveness of the proposed approach, in both cross-domain and
cross-platform settings.
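The core "text -> code -> representation" scheme can be illustrated with a short sketch. Below is a minimal Python example, assuming product-quantization-style codes: a PLM text embedding is split into sub-vectors, each sub-vector is discretized against a per-digit codebook, and the resulting indices are used to look up a separately learned code embedding table. It also builds a semi-synthetic hard negative by corrupting a few code digits, in the spirit of the contrastive pre-training described above. All names, dimensions, and the pooling choice are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of "text -> code -> representation" with
# product-quantization-style discrete codes. Sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

D = 768       # dimension of the PLM text embedding (assumed)
M = 8         # number of code digits per item (sub-vectors)
K = 256       # codebook size per digit
D_CODE = 64   # dimension of each code embedding (assumed)

# Per-digit centroids used to discretize text embeddings. In practice these
# would be learned, e.g., by clustering PLM embeddings of item text.
centroids = rng.normal(size=(M, K, D // M))

# Code embedding table: one embedding per (digit, index) pair. Item
# representations are derived from this table, not from the text directly.
code_embeddings = rng.normal(size=(M, K, D_CODE))

def text_to_code(text_emb: np.ndarray) -> np.ndarray:
    """Map a PLM text embedding to a vector of M discrete indices (item code)."""
    sub_vectors = text_emb.reshape(M, D // M)
    # Nearest centroid per sub-vector.
    dists = np.linalg.norm(centroids - sub_vectors[:, None, :], axis=-1)
    return dists.argmin(axis=1)  # shape (M,), values in [0, K)

def code_to_representation(item_code: np.ndarray) -> np.ndarray:
    """Look up the code embedding for each digit and pool them."""
    looked_up = code_embeddings[np.arange(M), item_code]  # (M, D_CODE)
    return looked_up.mean(axis=0)  # simple mean pooling (an assumption)

# "text -> code -> representation" for one item:
text_emb = rng.normal(size=D)  # stand-in for a PLM-encoded item text
item_code = text_to_code(text_emb)
item_repr = code_to_representation(item_code)
print(item_code, item_repr.shape)

# Semi-synthetic hard negative in the spirit of the contrastive pre-training:
# corrupt a few digits of the code so the negative stays close to the anchor.
negative_code = item_code.copy()
flip = rng.choice(M, size=2, replace=False)
negative_code[flip] = rng.integers(0, K, size=2)
negative_repr = code_to_representation(negative_code)
```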
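The cross-domain fine-tuning method is described above only at a high level. One standard way to realize a differentiable permutation-based network is Sinkhorn normalization, which relaxes a hard permutation into a doubly-stochastic matrix; the sketch below applies such a soft permutation to one digit's code embedding table to adapt it to a new domain. This is a hedged illustration of the general technique, not the paper's exact architecture.

```python
# Hedged sketch: a differentiable (soft) permutation over one digit's code
# embeddings via Sinkhorn normalization. The paper's exact design may differ.
import numpy as np
from scipy.special import logsumexp

def sinkhorn(log_alpha: np.ndarray, n_iters: int = 20) -> np.ndarray:
    """Turn a K x K score matrix into a doubly-stochastic matrix,
    a differentiable relaxation of a permutation matrix."""
    for _ in range(n_iters):
        log_alpha = log_alpha - logsumexp(log_alpha, axis=1, keepdims=True)  # rows
        log_alpha = log_alpha - logsumexp(log_alpha, axis=0, keepdims=True)  # cols
    return np.exp(log_alpha)

K, D_CODE = 256, 64
rng = np.random.default_rng(0)
scores = rng.normal(size=(K, K))      # learnable logits in practice
P = sinkhorn(scores)                  # soft permutation matrix

code_embeddings_digit = rng.normal(size=(K, D_CODE))  # one digit's table
# Adapt pre-trained code embeddings to the target domain by (softly)
# permuting which embedding each code index points to.
adapted = P @ code_embeddings_digit   # (K, D_CODE)
```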
Related papers
- Learning Partially Aligned Item Representation for Cross-Domain Sequential Recommendation [72.73379646418435]
Cross-domain sequential recommendation aims to uncover and transfer users' sequential preferences across domains.
Misaligned item representations can lead to sub-optimal sequential modeling and user representation alignment.
We propose a model-agnostic framework, Cross-domain item representation Alignment for Cross-Domain Sequential Recommendation (CA-CDSR).
arXiv Detail & Related papers (2024-05-21T03:25:32Z)
- Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors [3.9271338080639753]
We propose Binder, a novel approach for order-based representation.
Binder uses binary vectors for embedding, so the embedding vectors are compact with an order of magnitude smaller footprint than other methods.
arXiv Detail & Related papers (2024-04-16T21:52:55Z)
- Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation [59.37587762543934]
This paper studies the problem of weakly open-vocabulary semantic segmentation (WOVSS).
Existing methods suffer from a granularity inconsistency regarding the usage of group tokens.
We propose the prototypical guidance network (PGSeg) that incorporates multi-modal regularization.
arXiv Detail & Related papers (2023-10-29T13:18:00Z)
- LRANet: Towards Accurate and Efficient Scene Text Detection with Low-Rank Approximation Network [63.554061288184165]
We propose a novel parameterized text shape method based on low-rank approximation.
By exploring the shape correlation among different text contours, our method achieves consistency, compactness, simplicity, and robustness in shape representation.
We implement an accurate and efficient arbitrary-shaped text detector named LRANet.
arXiv Detail & Related papers (2023-06-27T02:03:46Z)
- UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation [17.88375225459453]
Prior studies have shown that pretrained language models (PLMs) can boost the performance of text-based recommendation.
We propose a unified local- and global-attention Transformer encoder to better model two-level contexts of user history.
Our framework, UniTRec, unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation.
arXiv Detail & Related papers (2023-05-25T06:11:31Z)
- RetroMAE-2: Duplex Masked Auto-Encoder For Pre-Training Retrieval-Oriented Language Models [12.37229805276939]
We propose a novel pre-training method called Duplex Masked Auto-Encoder, a.k.a. DupMAE.
It is designed to improve the quality of semantic representation, where all contextualized embeddings of the pretrained model can be leveraged.
arXiv Detail & Related papers (2023-05-04T05:37:22Z)
- Towards Universal Sequence Representation Learning for Recommender Systems [98.02154164251846]
We present a novel universal sequence representation learning approach, named UniSRec.
The proposed approach utilizes the associated description text of items to learn transferable representations across different recommendation scenarios.
Our approach can be effectively transferred to new recommendation domains or platforms in a parameter-efficient way.
arXiv Detail & Related papers (2022-06-13T07:21:56Z)
- UnifieR: A Unified Retriever for Large-Scale Retrieval [84.61239936314597]
Large-scale retrieval aims to recall relevant documents from a huge collection given a query.
Recent retrieval methods based on pre-trained language models (PLMs) can be coarsely categorized into either dense-vector or lexicon-based paradigms.
We propose a new learning framework, UnifieR, which unifies dense-vector and lexicon-based retrieval in one model with a dual-representing capability.
arXiv Detail & Related papers (2022-05-23T11:01:59Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate text revision tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves competitive and even better performance than state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.