Text Matching Improves Sequential Recommendation by Reducing Popularity
Biases
- URL: http://arxiv.org/abs/2308.14029v1
- Date: Sun, 27 Aug 2023 07:44:33 GMT
- Title: Text Matching Improves Sequential Recommendation by Reducing Popularity
Biases
- Authors: Zhenghao Liu, Sen Mei, Chenyan Xiong, Xiaohua Li, Shi Yu, Zhiyuan Liu,
Yu Gu, Ge Yu
- Abstract summary: TASTE verbalizes items and user-item interactions using identifiers and attributes of items.
Our experiments show that TASTE outperforms the state-of-the-art methods on widely used sequential recommendation datasets.
- Score: 48.272381505993366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes Text mAtching based SequenTial rEcommendation model
(TASTE), which maps items and users in an embedding space and recommends items
by matching their text representations. TASTE verbalizes items and user-item
interactions using identifiers and attributes of items. To better characterize
user behaviors, TASTE additionally proposes an attention sparsity method, which
enables TASTE to model longer user-item interactions by reducing the
self-attention computations during encoding. Our experiments show that TASTE
outperforms the state-of-the-art methods on widely used sequential
recommendation datasets. TASTE alleviates the cold start problem by
representing long-tail items using full-text modeling and bringing the benefits
of pretrained language models to recommendation systems. Our further analyses
illustrate that TASTE significantly improves the recommendation accuracy by
reducing the popularity bias of previous item-ID-based recommendation models
and returning more appropriate, text-relevant items to satisfy users. All
code is available at https://github.com/OpenMatch/TASTE.
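As a rough illustration of the text-matching idea (not the authors' implementation — TASTE uses a pretrained language model as its encoder, which the hash-based `toy_text_encoder` below merely stands in for; all names are illustrative), item verbalization and embedding-space matching might look like:

```python
import numpy as np

def toy_text_encoder(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in for a pretrained language-model encoder: deterministically
    # hash the text into a unit-norm dense vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def verbalize_item(item_id: str, attrs: dict) -> str:
    # TASTE-style verbalization: item identifier plus attributes as text.
    attr_text = ", ".join(f"{k}: {v}" for k, v in attrs.items())
    return f"id: {item_id}, {attr_text}"

def recommend(history_items, catalog, top_k=2):
    # The user is represented by the concatenated text of interacted items;
    # candidates are ranked by dot product in the shared embedding space.
    user_text = "; ".join(verbalize_item(i, a) for i, a in history_items)
    u = toy_text_encoder(user_text)
    scored = [(iid, float(u @ toy_text_encoder(verbalize_item(iid, a))))
              for iid, a in catalog]
    return [iid for iid, _ in sorted(scored, key=lambda x: -x[1])[:top_k]]
```

Because long-tail items still have titles and attributes, this text-based representation gives them meaningful embeddings even without interaction data, which is the intuition behind the cold-start and popularity-bias claims above.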
Related papers
- Reindex-Then-Adapt: Improving Large Language Models for Conversational Recommendation [50.19602159938368]
Large language models (LLMs) are revolutionizing conversational recommender systems.
We propose a Reindex-Then-Adapt (RTA) framework, which converts multi-token item titles into single tokens within LLMs.
Our framework demonstrates improved accuracy metrics across three different conversational recommendation datasets.
arXiv Detail & Related papers (2024-05-20T15:37:55Z) - LlamaRec: Two-Stage Recommendation using Large Language Models for
Ranking [10.671747198171136]
We propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec)
In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history.
LlamaRec consistently achieves superior performance in both recommendation quality and efficiency across datasets.
arXiv Detail & Related papers (2023-10-25T06:23:48Z) - Attentive Graph-based Text-aware Preference Modeling for Top-N
Recommendation [2.3991565023534083]
We propose a new model named Attentive Graph-based Text-aware Recommendation Model (AGTM)
In this work, we aim to further improve top-N recommendation by effectively modeling both item textual content and high-order connectivity in user-item graph.
arXiv Detail & Related papers (2023-05-22T12:32:06Z) - How to Index Item IDs for Recommendation Foundation Models [49.425959632372425]
Recommendation foundation models utilize large language models (LLMs) for recommendation by converting recommendation tasks into natural language tasks.
To avoid generating excessively long text and hallucinated recommendations, creating LLM-compatible item IDs is essential.
We propose four simple yet effective solutions, including sequential indexing, collaborative indexing, semantic (content-based) indexing, and hybrid indexing.
arXiv Detail & Related papers (2023-05-11T05:02:37Z) - Recommender Systems with Generative Retrieval [58.454606442670034]
We propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates.
To that end, we create semantically meaningful tuples of codewords to serve as a Semantic ID for each item.
We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets.
arXiv Detail & Related papers (2023-05-08T21:48:17Z) - Improving Items and Contexts Understanding with Descriptive Graph for
Conversational Recommendation [4.640835690336652]
State-of-the-art methods on conversational recommender systems (CRS) leverage external knowledge to enhance both items' and contextual words' representations.
We propose a new CRS framework KLEVER, which jointly models items and their associated contextual words in the same semantic space.
Experiments on a benchmark CRS dataset demonstrate that KLEVER achieves superior performance, especially when information from the users' responses is lacking.
arXiv Detail & Related papers (2023-04-11T21:21:46Z) - Using Interventions to Improve Out-of-Distribution Generalization of
Text-Matching Recommendation Systems [14.363532867533012]
Fine-tuning a large, base language model on paired item relevance data can be counter-productive for generalization.
For a product recommendation task, fine-tuning obtains worse accuracy than the base model when recommending items in a new category or for a future time period.
We propose an intervention-based regularizer that constrains the causal effect of any token on the model's relevance score to be similar to the base model.
arXiv Detail & Related papers (2022-10-07T11:16:45Z) - Modeling Dynamic User Preference via Dictionary Learning for Sequential
Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
arXiv Detail & Related papers (2022-04-02T03:23:46Z) - Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) mechanism to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
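When items are embedded as diagonal Gaussians rather than point vectors, the 2-Wasserstein distance used to score item-item relationships has a simple closed form: the squared distance between the means plus the squared distance between the standard deviations. A minimal sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

def w2_distance_sq(mu1, sigma1, mu2, sigma2):
    # Squared 2-Wasserstein distance between two Gaussians with
    # diagonal covariance, where sigma1/sigma2 are std-dev vectors.
    # STOSA-style models use such a distance (negated) as an attention
    # score in place of the usual dot product.
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))
```

Unlike the dot product, this is a true metric, so it satisfies the triangle inequality across items in a sequence.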
arXiv Detail & Related papers (2022-01-16T12:38:45Z) - Sequential recommendation with metric models based on frequent sequences [0.688204255655161]
We propose to use frequent sequences to identify the most relevant part of the user history for the recommendation.
The most salient items are then used in a unified metric model that embeds items based on user preferences and sequential dynamics.
arXiv Detail & Related papers (2020-08-12T22:08:04Z)