Breaking the Barrier: Utilizing Large Language Models for Industrial
Recommendation Systems through an Inferential Knowledge Graph
- URL: http://arxiv.org/abs/2402.13750v1
- Date: Wed, 21 Feb 2024 12:22:01 GMT
- Title: Breaking the Barrier: Utilizing Large Language Models for Industrial
Recommendation Systems through an Inferential Knowledge Graph
- Authors: Qian Zhao, Hao Qian, Ziqi Liu, Gong-Duo Zhang and Lihong Gu
- Abstract summary: We propose a novel Large Language Model based Complementary Knowledge Enhanced Recommendation System (LLM-KERec)
It extracts unified concept terms from item and user information to capture user intent transitions and adapt to new items.
Extensive experiments conducted on three industry datasets demonstrate the significant performance improvement of our model compared to existing approaches.
- Score: 19.201697767418597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommendation systems are widely used in e-commerce websites and online
platforms to address information overload. However, existing systems primarily
rely on historical data and user feedback, making it difficult to capture user
intent transitions. Recently, Knowledge Base (KB)-based models have been
proposed to incorporate expert knowledge, but they struggle to adapt to new
items and the
evolving e-commerce environment. To address these challenges, we propose a
novel Large Language Model based Complementary Knowledge Enhanced
Recommendation System (LLM-KERec). It introduces an entity extractor that
extracts unified concept terms from item and user information. To provide
cost-effective and reliable prior knowledge, entity pairs are generated based
on entity popularity and specific strategies. The large language model
determines complementary relationships in each entity pair, constructing a
complementary knowledge graph. Furthermore, a new complementary recall module
and an Entity-Entity-Item (E-E-I) weight decision model refine the scoring of
the ranking model using real complementary exposure-click samples. Extensive
experiments conducted on three industry datasets demonstrate the significant
performance improvement of our model compared to existing approaches.
Additionally, detailed analysis shows that LLM-KERec enhances users' enthusiasm
for consumption by recommending complementary items. In summary, LLM-KERec
addresses the limitations of traditional recommendation systems by
incorporating complementary knowledge and utilizing a large language model to
capture user intent transitions, adapt to new items, and enhance recommendation
efficiency in the evolving e-commerce landscape.
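The pipeline described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration under stated assumptions, not the authors' implementation: the entity extractor, the popularity-based pair-generation strategy, and the LLM relation judgment are all stubbed with toy logic (`judge_complementary` stands in for a real LLM call so the example runs offline), and every function name here is invented for the sketch.

```python
from collections import Counter
from itertools import combinations

def extract_entities(item_titles):
    """Toy stand-in for the entity extractor: take the first word of
    each item title as its unified concept term."""
    return [title.split()[0].lower() for title in item_titles]

def generate_entity_pairs(entities, top_k=3):
    """Pair up the most frequent entities, mimicking the paper's
    popularity-based strategy for cost-effective candidate pairs."""
    popular = [e for e, _ in Counter(entities).most_common(top_k)]
    return list(combinations(sorted(set(popular)), 2))

def judge_complementary(pair):
    """Stub for the LLM call that decides whether two entities are
    complementary; a fixed lookup replaces the model here."""
    known = {("laptop", "mouse"), ("phone", "charger")}
    return pair in known or pair[::-1] in known

def build_complementary_graph(pairs):
    """Keep only pairs the (stubbed) LLM judges complementary,
    yielding an undirected complementary knowledge graph."""
    graph = {}
    for a, b in pairs:
        if judge_complementary((a, b)):
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

titles = ["Laptop 15-inch", "Mouse wireless", "Laptop stand", "Phone case"]
entities = extract_entities(titles)
pairs = generate_entity_pairs(entities)
graph = build_complementary_graph(pairs)
print(graph)
```

In the paper's full system, the resulting graph would then feed the complementary recall module, and the E-E-I weight decision model would learn edge weights from real exposure-click samples; those stages are omitted here.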
Related papers
- Beyond Retrieval: Generating Narratives in Conversational Recommender Systems [4.912663905306209]
We introduce a new dataset (REGEN) for natural language generation tasks in conversational recommendations.
We establish benchmarks using well-known generative metrics, and perform an automated evaluation of the new dataset using a rater LLM.
To the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives.
arXiv Detail & Related papers (2024-10-22T07:53:41Z) - EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations [38.44534579040017]
We introduce EmbSum, a framework that enables offline pre-computations of users and candidate items.
The model's ability to generate summaries of user interests serves as a valuable by-product, enhancing its usefulness for personalized content recommendations.
arXiv Detail & Related papers (2024-05-19T04:31:54Z) - Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
Large Language Models (LLMs) pretrained on massive text corpora present a promising avenue for enhancing recommender systems.
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z) - Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z) - Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework, which uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities in conversational recommender systems.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z) - Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling [18.297332953450514]
We propose LANCER, which leverages the semantic understanding capabilities of pre-trained language models to generate personalized recommendations.
Our approach bridges the gap between language models and recommender systems, resulting in more human-like recommendations.
arXiv Detail & Related papers (2023-09-19T08:54:47Z) - Exploring Large Language Model for Graph Data Understanding in Online
Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z) - Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z) - Meta Knowledge Condensation for Federated Learning [65.20774786251683]
Existing federated learning paradigms usually extensively exchange distributed models at a central server to achieve a more powerful model.
This would incur severe communication burden between a server and multiple clients especially when data distributions are heterogeneous.
Unlike existing paradigms, we introduce an alternative perspective to significantly decrease the communication cost in federated learning.
arXiv Detail & Related papers (2022-09-29T15:07:37Z) - Improving Conversational Recommender System via Contextual and
Time-Aware Modeling with Less Domain-Specific Knowledge [25.503407835218773]
We propose to fully discover and extract internal knowledge from the context.
We capture both entity-level and contextual-level representations to jointly model user preferences for the recommendation.
Our model achieves better performance on most evaluation metrics with less external knowledge and generalizes well to other domains.
arXiv Detail & Related papers (2022-09-23T03:30:22Z) - Self-supervised Learning for Large-scale Item Recommendations [18.19202958502061]
Large-scale recommender models retrieve the most relevant items from huge catalogs.
With millions to billions of items in the corpus, users tend to provide feedback for a very small set of them.
We propose a multi-task self-supervised learning framework for large-scale item recommendations.
arXiv Detail & Related papers (2020-07-25T06:21:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.