Headache to Overstock? Promoting Long-tail Items through Debiased Product Bundling
- URL: http://arxiv.org/abs/2411.19107v1
- Date: Thu, 28 Nov 2024 12:44:56 GMT
- Title: Headache to Overstock? Promoting Long-tail Items through Debiased Product Bundling
- Authors: Shuo Xu, Haokai Ma, Yunshan Ma, Xiaohao Liu, Lei Meng, Xiangxu Meng, Tat-Seng Chua
- Abstract summary: We propose a Distilled Modality-Oriented Knowledge Transfer framework (DieT) to counter the popularity bias introduced by the user feedback features.
Extensive experiments on two real-world datasets demonstrate the superiority of DieT over a list of SOTA methods in the long-tail bundling scenario.
- Score: 47.630529473943824
- License:
- Abstract: Product bundling aims to organize a set of thematically related items into a combined bundle for shipment facilitation and item promotion. To increase the exposure of fresh or overstocked products, sellers typically bundle these items with popular products for inventory clearance. This specific task can be formulated as a long-tail product bundling scenario, which leverages the user-item interactions to define the popularity of each item. The inherent popularity bias in the pre-extracted user feedback features and the insufficient utilization of other popularity-independent knowledge may push conventional bundling methods toward more popular items, so they struggle in this long-tail bundling scenario. Through intuitive and empirical analysis, we identify the core solution to this challenge: maximally mining the popularity-free features and effectively incorporating them into the bundling process. To achieve this, we propose a Distilled Modality-Oriented Knowledge Transfer framework (DieT) to effectively counter the popularity bias introduced by the user feedback features and adhere to the original intent behind real-world bundling behaviors. Specifically, DieT first proposes the Popularity-free Collaborative Distribution Modeling module (PCD) to capture popularity-independent information from the bundle-item view, which is proven most effective in the long-tail bundling scenario, to enable directional information transfer. With the tailored Unbiased Bundle-aware Knowledge Transferring module (UBT), DieT highlights the significance of popularity-free features while mitigating the negative effects of user feedback features in the long-tail scenario via the knowledge distillation paradigm. Extensive experiments on two real-world datasets demonstrate the superiority of DieT over a list of SOTA methods in the long-tail bundling scenario.
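To make the distillation idea in the abstract more concrete, the following is a minimal, assumption-laden sketch of the general pattern it describes: a "teacher" learned only from bundle-item co-occurrence (standing in for PCD's popularity-free view) and a "student" built on frozen, popularity-biased user-feedback features, aligned with a relational knowledge-distillation loss (standing in for UBT). The class names, the PyTorch setup, and the specific KL-based objective are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BundleItemTeacher(nn.Module):
    """Toy stand-in for the popularity-free bundle-item view (PCD in the abstract).
    Item embeddings are learned only from bundle-item signals, not user feedback."""
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, item_ids):
        return F.normalize(self.item_emb(item_ids), dim=-1)

class FeedbackStudent(nn.Module):
    """Toy student built on frozen, pre-extracted (popularity-biased) user-feedback features."""
    def __init__(self, feedback_features, dim=64):
        super().__init__()
        self.register_buffer("feat", feedback_features)  # frozen CF features
        self.proj = nn.Linear(feedback_features.size(1), dim)

    def forward(self, item_ids):
        return F.normalize(self.proj(self.feat[item_ids]), dim=-1)

def distill_loss(student_vecs, teacher_vecs, temperature=0.1):
    """Align the student's item-item similarity structure with the teacher's
    (a generic relational KD loss; the objective actually used in DieT may differ)."""
    s_sim = student_vecs @ student_vecs.t() / temperature
    t_sim = teacher_vecs @ teacher_vecs.t() / temperature
    return F.kl_div(F.log_softmax(s_sim, dim=-1),
                    F.softmax(t_sim, dim=-1),
                    reduction="batchmean")

# Usage with random placeholder data (shapes only; all dataset details are assumptions).
num_items, feat_dim = 1000, 128
teacher = BundleItemTeacher(num_items)
student = FeedbackStudent(torch.randn(num_items, feat_dim))
batch = torch.randint(0, num_items, (256,))
loss = distill_loss(student(batch), teacher(batch).detach())
loss.backward()
```

In this sketch the teacher is detached so knowledge flows one way, from the popularity-free view into the feedback-based representation, which mirrors the directional transfer the abstract attributes to PCD and UBT.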
Related papers
- Towards Popularity-Aware Recommendation: A Multi-Behavior Enhanced Framework with Orthogonality Constraint [4.137753517504481]
Top-$K$ recommendation involves inferring latent user preferences and generating personalized recommendations.
We present a Popularity-aware top-$K$ recommendation algorithm integrating multi-behavior Side Information.
arXiv Detail & Related papers (2024-12-26T11:06:49Z)
- Enhancing Sequential Music Recommendation with Personalized Popularity Awareness [56.972624411205224]
This paper introduces a novel approach that incorporates personalized popularity information into sequential recommendation.
Experimental results demonstrate that a Personalized Most Popular recommender outperforms existing state-of-the-art models.
arXiv Detail & Related papers (2024-09-06T15:05:12Z)
- Popularity-Aware Alignment and Contrast for Mitigating Popularity Bias [34.006766098392525]
Collaborative Filtering (CF) typically suffers from the challenge of popularity bias due to the uneven distribution of items in real-world datasets.
This bias leads to a significant accuracy gap between popular and unpopular items.
We propose Popularity-Aware Alignment and Contrast (PAAC) to address two challenges.
arXiv Detail & Related papers (2024-05-31T09:14:48Z)
- LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation [58.04939553630209]
In real-world systems, most users interact with only a handful of items, while the majority of items are seldom consumed.
These two issues, known as the long-tail user and long-tail item challenges, often pose difficulties for existing Sequential Recommendation systems.
We propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR) to address these challenges.
arXiv Detail & Related papers (2024-05-31T07:24:42Z)
- Test Time Embedding Normalization for Popularity Bias Mitigation [6.145760252113906]
Popularity bias is a widespread problem in the field of recommender systems.
We propose 'Test Time Embedding Normalization' as a simple yet effective strategy for mitigating popularity bias.
arXiv Detail & Related papers (2023-08-22T08:57:44Z)
- Capturing Popularity Trends: A Simplistic Non-Personalized Approach for Enhanced Item Recommendation [10.606845291519932]
Popularity-Aware Recommender (PARE) makes non-personalized recommendations by predicting the items that will attain the highest popularity.
To our knowledge, this is the first work to explicitly model item popularity in recommendation systems.
arXiv Detail & Related papers (2023-08-17T06:20:03Z)
- Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users receive significantly lower-quality recommendations than head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z)
- Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
arXiv Detail & Related papers (2022-01-16T12:38:45Z)
- Learning Transferrable Parameters for Long-tailed Sequential User Behavior Modeling [70.64257515361972]
We argue that focusing on tail users could bring more benefits and address the long-tail issue.
Specifically, we propose a gradient alignment approach and adopt an adversarial training scheme to facilitate knowledge transfer from the head to the tail.
arXiv Detail & Related papers (2020-10-22T03:12:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.