NAM: A Normalization Attention Model for Personalized Product Search In Fliggy
- URL: http://arxiv.org/abs/2506.08382v1
- Date: Tue, 10 Jun 2025 02:46:05 GMT
- Title: NAM: A Normalization Attention Model for Personalized Product Search In Fliggy
- Authors: Shui Liu, Mingyuan Tao, Maofei Que, Pan Li, Dong Li, Shenghua Ni, Zhuoran Zhuang
- Abstract summary: We propose a Normalization Attention Model (NAM) for personalized product search. We show that our proposed NAM model significantly outperforms state-of-the-art baseline models.
- Score: 14.447458070745231
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Personalized product search provides significant benefits to e-commerce platforms by extracting more accurate user preferences from historical behaviors. Previous studies largely focused on the user factors when personalizing the search query, while ignoring the item perspective, which leads to the following two challenges that we summarize in this paper: First, previous approaches relying only on co-occurrence frequency tend to overestimate the conversion rates for popular items and underestimate those for long-tail items, resulting in inaccurate item similarities; Second, user purchasing propensity is highly heterogeneous according to the popularity of the target item: it is less correlated with the user's historical behavior for a popular item and more correlated for a long-tail item. To address these challenges, in this paper we propose NAM, a Normalization Attention Model, which optimizes ''when to personalize'' by utilizing Inverse Item Frequency (IIF) and employing a gating mechanism, as well as optimizes ''how to personalize'' by normalizing the attention mechanism from a global perspective. Through comprehensive experiments, we demonstrate that our proposed NAM model significantly outperforms state-of-the-art baseline models. Furthermore, we conducted an online A/B test at Fliggy, and obtained a significant improvement of 0.8% over the latest production system in conversion rate.
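The abstract describes two mechanisms: an IIF-driven gate that decides *when* to personalize, and a globally normalized attention that decides *how*. The sketch below illustrates these two ideas with illustrative formulas of our own (`inverse_item_frequency`, the sigmoid gate, and the standardize-before-softmax step are assumptions, not the paper's exact definitions):

```python
import numpy as np

def inverse_item_frequency(item_counts, item_id):
    # IIF downweights popular items, analogous to IDF in text retrieval
    # (hypothetical formula; the paper's exact definition may differ).
    total = sum(item_counts.values())
    return np.log(total / (1 + item_counts.get(item_id, 0)))

def gated_personalization(query_vec, user_vec, iif, w_gate):
    # "When to personalize": a sigmoid gate driven by IIF decides how much
    # of the user's historical preference to mix into the query. Long-tail
    # targets (high IIF) open the gate; popular targets keep it closed.
    gate = 1.0 / (1.0 + np.exp(-(w_gate * iif)))
    return (1.0 - gate) * query_vec + gate * user_vec

def normalized_attention(target_vec, behavior_vecs, temperature=1.0):
    # "How to personalize": attention scores are normalized from a global
    # perspective (here, simple standardization before softmax) so that
    # high-magnitude popular targets do not dominate the distribution.
    scores = behavior_vecs @ target_vec            # (T,)
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    weights = np.exp(scores / temperature)
    weights /= weights.sum()
    return weights @ behavior_vecs                 # pooled user interest, (d,)
```

The gate makes the popularity heterogeneity explicit: for a popular item the model falls back to the query, while for a long-tail item it leans on the user's behavior history.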
Related papers
- Why is Normalization Necessary for Linear Recommenders? [10.843794863154391]
We propose a versatile normalization solution, called Data-Adaptive Normalization (DAN), which flexibly controls the popularity and neighborhood biases. Experimental results show that DAN-equipped LAEs consistently improve existing LAE-based models across six benchmark datasets.
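One common way to control popularity bias in linear recommenders is to rescale the user-item interaction matrix by user activity and item popularity before fitting. The snippet below is a minimal sketch in that spirit; the exponents and the exact normalization rule are our assumptions, not DAN's formulation:

```python
import numpy as np

def popularity_normalize(R, alpha=0.5, beta=0.5):
    # Illustrative popularity normalization for a user-item matrix R:
    # scale rows by user activity^-alpha and columns by item
    # popularity^-beta, damping the popularity bias a linear
    # autoencoder would otherwise absorb into its weights.
    user_deg = R.sum(axis=1, keepdims=True)   # (U, 1) user activity
    item_deg = R.sum(axis=0, keepdims=True)   # (1, I) item popularity
    return R / (np.power(user_deg, alpha) + 1e-8) / (np.power(item_deg, beta) + 1e-8)
```

Tuning `alpha` and `beta` trades off how strongly user-side and item-side popularity are suppressed, which is the kind of flexible bias control the summary refers to.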
arXiv Detail & Related papers (2025-04-08T08:37:32Z) - ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z) - Centrality-aware Product Retrieval and Ranking [14.710718676076327]
This paper addresses the challenge of improving user experience on e-commerce platforms by enhancing product ranking relevant to users' search queries.
We curate samples from eBay, manually annotated with buyer-centric relevance scores and centrality scores, which reflect how well the product title matches the users' intent.
We introduce a User-intent Centrality Optimization (UCO) approach for existing models, which optimises for the user intent in semantic product search.
arXiv Detail & Related papers (2024-10-21T11:59:14Z) - Long-Sequence Recommendation Models Need Decoupled Embeddings [49.410906935283585]
We identify and characterize a neglected deficiency in existing long-sequence recommendation models. A single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes. We propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are learned separately to fully decouple attention and representation.
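The decoupling idea can be sketched in a few lines: one table scores attention, the other supplies the vectors being pooled. Table sizes, initialization, and the pooling rule below are illustrative assumptions, not DARE's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d = 1000, 16

# Two separate tables: one learned for attention scoring, one for
# representation (in DARE these are trained jointly but kept distinct).
attn_table = rng.normal(size=(n_items, d))
repr_table = rng.normal(size=(n_items, d))

def dare_pool(target_id, history_ids):
    # Attention weights come from the attention embeddings only...
    scores = attn_table[history_ids] @ attn_table[target_id]
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # ...while the pooled user interest is built from the representation
    # embeddings, so the two objectives no longer interfere.
    return weights @ repr_table[history_ids]
```

With a single shared table, gradients from the attention objective and the representation objective pull the same parameters in different directions; two tables remove that interference at the cost of extra memory.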
arXiv Detail & Related papers (2024-10-03T15:45:15Z) - Unified Embedding Based Personalized Retrieval in Etsy Search [0.206242362470764]
We propose learning a unified embedding model incorporating graph, transformer and term-based embeddings end to end.
Our personalized retrieval model significantly improves the overall search experience, as measured by a 5.58% increase in search purchase rate and a 2.63% increase in site-wide conversion rate.
arXiv Detail & Related papers (2023-06-07T23:24:50Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - ICPE: An Item Cluster-Wise Pareto-Efficient Framework for Recommendation Debiasing [7.100121083949393]
In this work, we explore the central theme of recommendation debiasing from an item cluster-wise multi-objective optimization perspective. Aiming to balance the learning on various item clusters that differ in popularity during the training process, we propose a model-agnostic framework, namely the Item Cluster-Wise Pareto-Efficient (ICPE) framework. In detail, we define the item cluster-wise optimization target as requiring the recommender model to balance all item clusters that differ in popularity.
arXiv Detail & Related papers (2021-09-27T09:17:53Z) - PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z) - PreSizE: Predicting Size in E-Commerce using Transformers [76.33790223551074]
PreSizE is a novel deep learning framework which utilizes Transformers for accurate size prediction.
We demonstrate that PreSizE is capable of achieving superior prediction performance compared to previous state-of-the-art baselines.
As a proof of concept, we demonstrate that size predictions made by PreSizE can be effectively integrated into an existing production recommender system.
arXiv Detail & Related papers (2021-05-04T15:23:59Z) - Heterogeneous Network Embedding for Deep Semantic Relevance Match in E-commerce Search [29.881612817309716]
We design an end-to-end First-and-Second-order Relevance prediction model for e-commerce item relevance.
We introduce external knowledge generated from BERT to refine the network of user behaviors.
Results of offline experiments showed that the new model significantly improved the prediction accuracy in terms of human relevance judgment.
arXiv Detail & Related papers (2021-01-13T03:12:53Z) - Learning Transferrable Parameters for Long-tailed Sequential User Behavior Modeling [70.64257515361972]
We argue that focusing on tail users could bring more benefits and address the long-tail issue.
Specifically, we propose a gradient alignment and adopt an adversarial training scheme to facilitate knowledge transfer from the head to the tail.
arXiv Detail & Related papers (2020-10-22T03:12:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.