Research on E-Commerce Long-Tail Product Recommendation Mechanism Based on Large-Scale Language Models
- URL: http://arxiv.org/abs/2506.06336v1
- Date: Sat, 31 May 2025 19:17:48 GMT
- Title: Research on E-Commerce Long-Tail Product Recommendation Mechanism Based on Large-Scale Language Models
- Authors: Qingyi Lu, Haotian Lyu, Jiayun Zheng, Yang Wang, Li Zhang, Chengrui Zhou
- Abstract summary: We propose a novel long-tail product recommendation mechanism that integrates product text descriptions and user behavior sequences using a large-scale language model (LLM). Our work highlights the potential of LLMs in interpreting product content and user intent, offering a promising direction for future e-commerce recommendation systems.
- Score: 7.792622257477251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As e-commerce platforms expand their product catalogs, accurately recommending long-tail items becomes increasingly important for enhancing both user experience and platform revenue. A key challenge is the long-tail problem, where extreme data sparsity and cold-start issues limit the performance of traditional recommendation methods. To address this, we propose a novel long-tail product recommendation mechanism that integrates product text descriptions and user behavior sequences using a large-scale language model (LLM). First, we introduce a semantic visor, which leverages a pre-trained LLM to convert multimodal textual content such as product titles, descriptions, and user reviews into meaningful embeddings. These embeddings help represent item-level semantics effectively. We then employ an attention-based user intent encoder that captures users' latent interests, especially toward long-tail items, by modeling collaborative behavior patterns. These components feed into a hybrid ranking model that fuses semantic similarity scores, collaborative filtering outputs, and LLM-generated recommendation candidates. Extensive experiments on a real-world e-commerce dataset show that our method outperforms baseline models in recall (+12%), hit rate (+9%), and user coverage (+15%). These improvements lead to better exposure and purchase rates for long-tail products. Our work highlights the potential of LLMs in interpreting product content and user intent, offering a promising direction for future e-commerce recommendation systems.
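The fusion step described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the paper's actual scoring formula is not given here, so the weights, the cosine-similarity term, and the candidate-bonus term are all assumptions made for the example.

```python
import math

# Hypothetical sketch of the described hybrid ranking: fuse
# (1) semantic similarity between LLM-derived embeddings,
# (2) a collaborative-filtering score, and
# (3) membership in an LLM-generated candidate set
# into a single ranking score. All weights are illustrative.

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def hybrid_rank(user_vec, item_vecs, cf_scores, llm_candidates,
                w_sem=0.5, w_cf=0.3, w_llm=0.2):
    """Rank item indices by a weighted blend of the three signals."""
    scores = []
    for i, item_vec in enumerate(item_vecs):
        sem = cosine(user_vec, item_vec)            # semantic similarity
        cf = cf_scores[i]                           # collaborative filtering
        llm = 1.0 if i in llm_candidates else 0.0   # LLM candidate bonus
        scores.append(w_sem * sem + w_cf * cf + w_llm * llm)
    return sorted(range(len(item_vecs)), key=lambda i: -scores[i])

# Toy example: the user vector is closest to item 0; item 1 has the
# strongest collaborative signal and is also an LLM candidate.
user = [1.0, 0.0]
items = [[0.9, 0.1], [0.0, 1.0], [0.5, 0.5]]
ranking = hybrid_rank(user, items, cf_scores=[0.2, 0.9, 0.1],
                      llm_candidates={1})
print(ranking)  # with these weights, item 0's semantic score dominates
```

In a real system the per-signal weights would be learned or tuned on validation data rather than fixed; the point of the sketch is only that a long-tail item with weak collaborative evidence can still surface through the semantic and LLM-candidate terms.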
Related papers
- End-to-End Personalization: Unifying Recommender Systems with Large Language Models [0.0]
We propose a novel hybrid recommendation framework that combines Graph Attention Networks (GATs) with Large Language Models (LLMs). LLMs are first used to enrich user and item representations by generating semantically meaningful profiles based on metadata such as titles, genres, and overviews. We evaluate our model on benchmark datasets, including MovieLens 100k and 1M, where it consistently outperforms strong baselines.
arXiv Detail & Related papers (2025-08-02T22:46:50Z)
- LLM-Enhanced Reranking for Complementary Product Recommendation [1.7149913637404794]
This paper introduces a model-agnostic approach that leverages Large Language Models (LLMs) to enhance the reranking of complementary product recommendations. We demonstrate that our approach effectively balances accuracy and diversity in complementary product recommendations, with at least a 50% lift in accuracy metrics and a 2% lift in diversity metrics on average for the top recommended items across datasets.
arXiv Detail & Related papers (2025-07-22T05:15:45Z)
- Learning Item Representations Directly from Multimodal Features for Effective Recommendation [51.49251689107541]
Multimodal recommender systems predominantly leverage Bayesian Personalized Ranking (BPR) optimization to learn item representations. We propose a novel model (i.e., LIRDRec) that learns item representations directly from multimodal features to augment recommendation performance.
arXiv Detail & Related papers (2025-05-08T05:42:22Z)
- Multi-agents based User Values Mining for Recommendation [52.26100802380767]
We propose a zero-shot multi-LLM collaborative framework for effective and accurate user value extraction. We apply text summarization techniques to condense item content while preserving essential meaning. To mitigate hallucinations, we introduce two specialized agent roles: evaluators and supervisors.
arXiv Detail & Related papers (2025-05-02T04:01:31Z)
- DashCLIP: Leveraging multimodal models for generating semantic embeddings for DoorDash [0.4288177321445912]
We introduce a joint training framework for product and user queries by aligning uni-modal and multi-modal encoders through contrastive learning on image-text data. Our novel approach trains a query encoder with an LLM-curated relevance dataset, eliminating the reliance on engagement history. For personalized ads recommendation, a significant uplift in click-through rate and conversion rate after deployment confirms the impact on key business metrics.
arXiv Detail & Related papers (2025-03-18T20:38:31Z)
- Beyond Retrieval: Generating Narratives in Conversational Recommender Systems [4.73461454584274]
We introduce a new dataset (REGEN) for natural language generation tasks in conversational recommendations. We establish benchmarks using well-known generative metrics and perform an automated evaluation of the new dataset using a rater LLM. To the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives.
arXiv Detail & Related papers (2024-10-22T07:53:41Z)
- Fine-tuning Multimodal Large Language Models for Product Bundling [53.01642741096356]
We introduce Bundle-MLLM, a novel framework that fine-tunes large language models (LLMs) through a hybrid item tokenization approach. Specifically, we integrate textual, media, and relational data into a unified tokenization, introducing a soft separation token to distinguish between textual and non-textual tokens. We propose a progressive optimization strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle patterns and 2) enhancing multimodal semantic understanding specific to product bundling.
arXiv Detail & Related papers (2024-07-16T13:30:14Z)
- LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation [58.04939553630209]
In real-world systems, most users interact with only a handful of items, while the majority of items are seldom consumed.
These two issues, known as the long-tail user and long-tail item challenges, often pose difficulties for existing Sequential Recommendation systems.
We propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR) to address these challenges.
arXiv Detail & Related papers (2024-05-31T07:24:42Z)
- Breaking the Barrier: Utilizing Large Language Models for Industrial Recommendation Systems through an Inferential Knowledge Graph [19.201697767418597]
We propose a novel Large Language Model-based Complementary Knowledge Enhanced Recommendation System (LLM-KERec).
It extracts unified concept terms from item and user information to capture user intent transitions and adapt to new items.
Extensive experiments conducted on three industry datasets demonstrate the significant performance improvement of our model compared to existing approaches.
arXiv Detail & Related papers (2024-02-21T12:22:01Z)
- ItemSage: Learning Product Embeddings for Shopping Recommendations at Pinterest [60.841761065439414]
At Pinterest, we build a single set of product embeddings called ItemSage to provide relevant recommendations in all shopping use cases.
This approach has led to significant improvements in engagement and conversion metrics, while reducing both infrastructure and maintenance cost.
arXiv Detail & Related papers (2022-05-24T02:28:58Z)
- PreSizE: Predicting Size in E-Commerce using Transformers [76.33790223551074]
PreSizE is a novel deep learning framework which utilizes Transformers for accurate size prediction.
We demonstrate that PreSizE is capable of achieving superior prediction performance compared to previous state-of-the-art baselines.
As a proof of concept, we demonstrate that size predictions made by PreSizE can be effectively integrated into an existing production recommender system.
arXiv Detail & Related papers (2021-05-04T15:23:59Z)
- User-Inspired Posterior Network for Recommendation Reason Generation [53.035224183349385]
Recommendation reason generation plays a vital role in attracting customers' attention as well as improving user experience.
We propose a user-inspired multi-source posterior transformer (MSPT), which induces the model to reflect users' interests.
Experimental results show that our model is superior to traditional generative models.
arXiv Detail & Related papers (2021-02-16T02:08:52Z)
- Personalized Embedding-based e-Commerce Recommendations at eBay [3.1236273633321416]
We present an approach for generating personalized item recommendations in an e-commerce marketplace by learning to embed items and users in the same vector space.
Data ablation is incorporated into the offline model training process to improve the robustness of the production system.
arXiv Detail & Related papers (2021-02-11T17:58:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.