UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for
Personalized Dialogue Systems
- URL: http://arxiv.org/abs/2401.13256v1
- Date: Wed, 24 Jan 2024 06:50:20 GMT
- Title: UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for
Personalized Dialogue Systems
- Authors: Hongru Wang, Wenyu Huang, Yang Deng, Rui Wang, Zezhong Wang, Yufei
Wang, Fei Mi, Jeff Z. Pan, Kam-Fai Wong
- Abstract summary: Large Language Models (LLMs) have shown exceptional capabilities in many natural language understanding and generation tasks.
We decompose the use of multiple sources in generating personalized responses into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.
We propose a novel Unified Multi-Source Retrieval-Augmented Generation system (UniMS-RAG).
- Score: 44.893215129952395
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs) have shown exceptional capabilities in many
natural language understanding and generation tasks. However, personalization
remains a much-coveted property, especially when multiple knowledge sources are
involved in the dialogue system. To better plan and incorporate the use of
multiple sources in generating personalized responses, we first decompose the
problem into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval,
and Response Generation. We then propose a novel Unified Multi-Source
Retrieval-Augmented Generation system (UniMS-RAG). Specifically, we unify these
three sub-tasks, each with a different formulation, into the same
sequence-to-sequence paradigm during training, so that the model can adaptively
retrieve evidence and evaluate its relevance on demand using special tokens,
called acting tokens and evaluation tokens. Enabling language models to
generate acting tokens facilitates interaction with various knowledge sources,
allowing them to adapt their behavior to diverse task requirements. Meanwhile,
evaluation tokens gauge the relevance score between the dialogue context and
the retrieved evidence. In addition, we carefully design a self-refinement
mechanism to iteratively refine the generated response considering 1) the
consistency scores between the generated response and retrieved evidence; and
2) the relevance scores. Experiments on two personalized datasets (DuLeMon and
KBP) show that UniMS-RAG achieves state-of-the-art performance on the knowledge
source selection and response generation tasks while serving as its own
retriever in a unified manner. Extensive analyses and discussions are provided
to shed new perspectives on personalized dialogue systems.
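The abstract describes the unified formulation only at a high level, so the following minimal Python sketch shows one way the control flow could look: acting tokens pick a knowledge source, evaluation tokens grade retrieved evidence, and a self-refinement loop re-generates the response until it is consistent with that evidence. All token strings, prompt formats, function names, and thresholds below are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the UniMS-RAG control flow described in the abstract.
# Token names, prompt prefixes, and thresholds are assumptions for illustration.
from typing import Callable, List, Tuple

ACTING_TOKENS = ["[SOURCE-NONE]", "[SOURCE-PERSONA]", "[SOURCE-DOCUMENT]"]  # hypothetical source markers
EVAL_TOKENS = ["[REL-LOW]", "[REL-MID]", "[REL-HIGH]"]                      # hypothetical relevance markers


def unims_rag_respond(
    dialogue_context: str,
    generate: Callable[[str], str],             # a single seq2seq model handles every sub-task
    retrieve: Callable[[str, str], List[str]],  # (source, query) -> candidate evidence passages
    consistency: Callable[[str, str], float],   # NLI-style response/evidence consistency score
    max_refinements: int = 2,
) -> str:
    # 1) Knowledge source selection: the model emits an acting token.
    source = generate(f"select source: {dialogue_context}")
    evidence: List[str] = []

    if source in ACTING_TOKENS and source != "[SOURCE-NONE]":
        # 2) Knowledge retrieval, with relevance judged via evaluation tokens.
        scored: List[Tuple[str, str]] = []
        for passage in retrieve(source, dialogue_context):
            rel = generate(f"evaluate relevance: {dialogue_context} [SEP] {passage}")
            scored.append((rel, passage))
        evidence = [p for rel, p in scored if rel == EVAL_TOKENS[-1]]  # keep only high-relevance evidence

    # 3) Response generation, followed by self-refinement against the evidence.
    response = generate(f"respond: {dialogue_context} [SEP] {' '.join(evidence)}")
    for _ in range(max_refinements):
        if all(consistency(response, p) >= 0.5 for p in evidence):
            break  # already consistent with all retained evidence
        response = generate(
            f"refine: {dialogue_context} [SEP] {' '.join(evidence)} [SEP] {response}"
        )
    return response
```

Routing every sub-task through the same generate function reflects the abstract's claim that one sequence-to-sequence model serves as its own retriever and evaluator; the evidence filter and the 0.5 consistency threshold are placeholders.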
Related papers
- Do LLMs suffer from Multi-Party Hangover? A Diagnostic Approach to Addressee Recognition and Response Selection in Conversations [11.566214724241798]
We propose a methodological pipeline to investigate model performance across specific structural attributes of conversations.
We focus on Response Selection and Addressee Recognition tasks, to diagnose model weaknesses.
Results show that response selection relies more on the textual content of conversations, while addressee recognition requires capturing their structural dimension.
arXiv Detail & Related papers (2024-09-27T10:07:33Z)
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
- Diverse and Faithful Knowledge-Grounded Dialogue Generation via Sequential Posterior Inference [82.28542500317445]
We present an end-to-end learning framework, termed Sequential Posterior Inference (SPI), capable of selecting knowledge and generating dialogues.
Unlike other methods, SPI does not require the inference network or assume a simple geometry of the posterior distribution.
arXiv Detail & Related papers (2023-06-01T21:23:13Z)
- Mixtures of Deep Neural Experts for Automated Speech Scoring [11.860560781894458]
The paper copes with the task of automatic assessment of second language proficiency from the language learners' spoken responses to test prompts.
The approach relies on two separate modules: (1) an automatic speech recognition system that yields text transcripts of the spoken interactions involved, and (2) a multiple classifier system based on deep learners that ranks the transcripts into proficiency classes.
arXiv Detail & Related papers (2021-06-23T15:44:50Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Do Response Selection Models Really Know What's Next? Utterance Manipulation Strategies for Multi-turn Response Selection [11.465266718370536]
We study the task of selecting the optimal response given a user and system utterance history in retrieval-based dialog systems.
We propose utterance manipulation strategies (UMS) to address this problem.
UMS consist of several strategies (i.e., insertion, deletion, and search) that help the response selection model maintain dialog coherence.
arXiv Detail & Related papers (2020-09-10T07:39:05Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model (see the sketch below).
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
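Because this last entry gives the most concrete recipe in the list, here is a minimal Python sketch of how its two conversational query reformulation strategies could be wired up, assuming a toy frequency-based importance score and an abstract generate callable standing in for the pretrained rewriter; none of the names, stopword choices, or parameters below come from the paper itself.

```python
# Hypothetical sketch of the two CQR strategies summarized in the entry above;
# the stopword list, scoring, and helper names are illustrative assumptions.
from collections import Counter
from typing import Callable, List

STOPWORDS = {"the", "a", "an", "it", "is", "was", "what", "how", "tell", "me", "about", "in", "and", "of"}


def expand_with_important_terms(query: str, context_turns: List[str], top_k: int = 3) -> str:
    """Strategy 1: term importance estimation with simple frequency-based signals,
    appending the most frequent non-stopword context terms to the current query."""
    counts = Counter(
        tok
        for turn in context_turns
        for tok in turn.lower().split()
        if tok not in STOPWORDS and tok not in query.lower().split()
    )
    expansion = [term for term, _ in counts.most_common(top_k)]
    return f"{query} {' '.join(expansion)}".strip()


def rewrite_with_seq2seq(query: str, context_turns: List[str], generate: Callable[[str], str]) -> str:
    """Strategy 2: neural query rewriting with a pretrained sequence-to-sequence
    model (any fine-tuned rewriter wrapped in the generate callable will do here)."""
    prompt = " ||| ".join(context_turns + [query])
    return generate(prompt)  # expected to return a standalone, self-contained query


# Usage of strategy 1 with a toy conversation:
turns = ["tell me about the eiffel tower", "it was finished in 1889"]
print(expand_with_important_terms("how tall is it", turns))
# -> "how tall is it eiffel tower finished" (ties keep context order)
```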