CRSLab: An Open-Source Toolkit for Building Conversational Recommender System
- URL: http://arxiv.org/abs/2101.00939v1
- Date: Mon, 4 Jan 2021 13:10:31 GMT
- Title: CRSLab: An Open-Source Toolkit for Building Conversational Recommender System
- Authors: Kun Zhou, Xiaolei Wang, Yuanhang Zhou, Chenzhan Shang, Yuan Cheng,
Wayne Xin Zhao, Yaliang Li, Ji-Rong Wen
- Abstract summary: Conversational recommender systems (CRSs) have received much attention in the research community.
Existing studies on CRS vary in scenarios, goals, and techniques, and lack a unified, standardized implementation or basis for comparison.
We propose CRSLab, an open-source CRS toolkit that provides a unified framework with highly decoupled modules for developing CRSs.
- Score: 57.208266345350474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, conversational recommender systems (CRSs) have
received much attention in the research community. However, existing studies on
CRS vary in scenarios, goals, and techniques, and lack a unified, standardized
implementation or basis for comparison. To tackle this challenge, we propose
CRSLab, an open-source CRS toolkit that provides a unified and extensible
framework with highly decoupled modules for developing CRSs. Based on this
framework, we collect 6 commonly used human-annotated CRS datasets and
implement 18 models, including recent techniques such as graph neural networks
and pre-trained models. In addition, our toolkit provides a series of automatic
evaluation protocols and a human-machine interaction interface for testing and
comparing different CRS methods. The project and documentation are released at
https://github.com/RUCAIBox/CRSLab.
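
The "highly decoupled modules" design can be illustrated with a small, generic sketch. Note that every name below (the registry, `Recommender`, `build_system`, the config keys) is hypothetical and chosen for illustration only; it is not CRSLab's actual API, which is documented in the repository linked above. The idea is that a configuration names interchangeable components, and a thin builder wires them together without the framework core ever importing concrete model classes:

```python
from typing import Callable, Dict, List

# Registry mapping module names to factories. Decoupling means the
# framework core looks modules up by name instead of importing them.
RECOMMENDERS: Dict[str, Callable[..., "Recommender"]] = {}

def register(name: str):
    """Decorator that adds a recommender class to the registry."""
    def decorator(cls):
        RECOMMENDERS[name] = cls
        return cls
    return decorator

class Recommender:
    """Interface every recommendation module implements."""
    def recommend(self, dialogue: List[str]) -> List[str]:
        raise NotImplementedError

@register("popularity")
class PopularityRecommender(Recommender):
    """Toy baseline: always suggests the globally most popular items."""
    def __init__(self, items: List[str]):
        self.items = items

    def recommend(self, dialogue: List[str]) -> List[str]:
        return self.items[:2]

def build_system(config: dict) -> Recommender:
    """Instantiate whichever recommender the config names."""
    cls = RECOMMENDERS[config["recommender"]]
    return cls(**config.get("params", {}))

config = {"recommender": "popularity",
          "params": {"items": ["The Matrix", "Inception", "Up"]}}
system = build_system(config)
print(system.recommend(["I like sci-fi movies"]))  # → ['The Matrix', 'Inception']
```

Swapping in a different model (say, a graph-neural-network recommender) then only requires registering a new class and changing one config value, which is the kind of extensibility the abstract describes.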
Related papers
- Collaborative Retrieval for Large Language Model-based Conversational Recommender Systems [65.75265303064654]
Conversational recommender systems (CRS) aim to provide personalized recommendations via interactive dialogues with users.
Large language models (LLMs) enhance CRS with their superior understanding of context-aware user preferences.
We propose CRAG, Collaborative Retrieval Augmented Generation for LLM-based CRS.
arXiv Detail & Related papers (2025-02-19T22:47:40Z)
- Neural Click Models for Recommender Systems [13.358229360322486]
We develop and evaluate neural architectures to model the user behavior in recommender systems (RS) inspired by click models for Web search.
Our models outperform baselines on the ContentWise and RL4RS datasets and can be used in RS simulators to model user response for RS evaluation and pretraining.
arXiv Detail & Related papers (2024-09-30T08:00:04Z)
- Framework for Curating Speech Datasets and Evaluating ASR Systems: A Case Study for Polish [0.0]
Speech datasets available in the public domain are often underutilized because of challenges in discoverability and interoperability.
A comprehensive framework has been designed to survey, catalog, and curate available speech datasets.
This research constitutes the most extensive comparison to date of both commercial and free ASR systems for the Polish language.
arXiv Detail & Related papers (2024-07-18T21:32:12Z)
- FlashRAG: A Modular Toolkit for Efficient Retrieval-Augmented Generation Research [32.820100519805486]
FlashRAG is an efficient and modular open-source toolkit designed to assist researchers in reproducing existing RAG methods and in developing their own RAG algorithms within a unified framework.
Our toolkit has various features, including customizable modular framework, rich collection of pre-implemented RAG works, comprehensive datasets, efficient auxiliary pre-processing scripts, and extensive and standard evaluation metrics.
arXiv Detail & Related papers (2024-05-22T12:12:40Z)
- A Review of Modern Recommender Systems Using Generative Models (Gen-RecSys) [57.30228361181045]
This survey connects key advancements in recommender systems using Generative Models (Gen-RecSys).
It covers interaction-driven generative models; the use of large language models (LLMs) and textual data for natural language recommendation; and the integration of multimodal models for generating and processing images and videos in RS.
Our work highlights necessary paradigms for evaluating the impact and harm of Gen-RecSys and identifies open challenges.
arXiv Detail & Related papers (2024-03-31T06:57:57Z)
- A Conversation is Worth A Thousand Recommendations: A Survey of Holistic Conversational Recommender Systems [54.78815548652424]
Conversational recommender systems generate recommendations through an interactive process.
Not all CRS approaches use human conversations as their source of interaction data; holistic CRSs are trained on conversational data collected from real-world scenarios.
arXiv Detail & Related papers (2023-09-14T12:55:23Z)
- Zero-shot Composed Text-Image Retrieval [72.43790281036584]
We consider the problem of composed image retrieval (CIR).
It aims to train a model that can fuse multi-modal information, e.g., text and images, to accurately retrieve images that match the query, extending the user's expression ability.
arXiv Detail & Related papers (2023-06-12T17:56:01Z)
- Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential for developing more powerful conversational recommender systems (CRSs).
In this paper, we investigate the use of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose iEvaLM, an interactive evaluation approach that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z)
- Leveraging Historical Interaction Data for Improving Conversational Recommender System [105.90963882850265]
We propose a novel pre-training approach that integrates item- and attribute-based preference sequences.
Experimental results on two real-world datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-08-19T03:43:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.