Dual-Feedback Knowledge Retrieval for Task-Oriented Dialogue Systems
- URL: http://arxiv.org/abs/2310.14528v1
- Date: Mon, 23 Oct 2023 03:21:11 GMT
- Title: Dual-Feedback Knowledge Retrieval for Task-Oriented Dialogue Systems
- Authors: Tianyuan Shi, Liangzhi Li, Zijian Lin, Tao Yang, Xiaojun Quan, Qifan Wang
- Abstract summary: We propose a retriever-generator architecture that harnesses a retriever to retrieve pertinent knowledge and a generator to generate system responses.
Our method demonstrates superior performance in task-oriented dialogue tasks, as evidenced by experimental results on three benchmark datasets.
- Score: 42.17072207835827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient knowledge retrieval plays a pivotal role in ensuring the success of
end-to-end task-oriented dialogue systems by facilitating the selection of
relevant information necessary to fulfill user requests. However, current
approaches generally integrate knowledge retrieval and response generation,
which poses scalability challenges when dealing with extensive knowledge bases.
Taking inspiration from open-domain question answering, we propose a
retriever-generator architecture that harnesses a retriever to retrieve
pertinent knowledge and a generator to generate system responses. Due to the
lack of retriever training labels, we propose relying on feedback from the
generator as pseudo-labels to train the retriever. To achieve this, we
introduce a dual-feedback mechanism that generates both positive and negative
feedback based on the output of the generator. Our method demonstrates superior
performance in task-oriented dialogue tasks, as evidenced by experimental
results on three benchmark datasets.
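To make the dual-feedback mechanism concrete, here is a minimal sketch of how generator feedback could serve as pseudo-labels for the retriever: knowledge records under which the generator assigns the gold response a high likelihood act as pseudo-positives, low-likelihood records as pseudo-negatives. The function name, the margin formulation, and the top-/bottom-k selection are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def dual_feedback_loss(retriever_scores, generator_loglik, k=1, margin=1.0):
    """Margin loss from generator feedback: records under which the generator
    assigns the highest likelihood to the gold response become pseudo-positives,
    the lowest become pseudo-negatives."""
    order = torch.argsort(generator_loglik, descending=True)
    pos, neg = order[:k], order[-k:]
    return F.relu(margin
                  - retriever_scores[pos].mean()
                  + retriever_scores[neg].mean())

# Toy usage with 5 candidate knowledge records.
scores = torch.randn(5, requires_grad=True)
loglik = torch.tensor([-2.1, -0.4, -3.0, -1.2, -2.7])
dual_feedback_loss(scores, loglik).backward()
```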
Related papers
- Retriever-and-Memory: Towards Adaptive Note-Enhanced Retrieval-Augmented Generation [72.70046559930555]
We propose a generic RAG approach called Adaptive Note-Enhanced RAG (Adaptive-Note) for complex QA tasks.
Specifically, Adaptive-Note introduces an overarching view of knowledge growth, iteratively gathering new information in the form of notes.
In addition, we employ an adaptive, note-based stop-exploration strategy to decide "what to retrieve and when to stop" to encourage sufficient knowledge exploration.
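A minimal sketch of the iterative note-growing loop, assuming `retrieve` and `read` are black-box retriever/LLM calls (not part of the cited paper's API); the equality-based stop rule is a simplification of the paper's note-based stop-exploration strategy.

```python
def adaptive_note_rag(question, retrieve, read, max_steps=3):
    """Iteratively grow a note of gathered knowledge; stop exploring once a
    new retrieval round adds nothing."""
    note = ""
    for _ in range(max_steps):
        passages = retrieve((question + " " + note).strip())
        new_note = read(question, note, passages)
        if new_note == note:  # note-based stop: no new information gathered
            break
        note = new_note
    return note
```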
arXiv Detail & Related papers (2024-10-11T14:03:29Z)
- Multimodal Reranking for Knowledge-Intensive Visual Question Answering [77.24401833951096]
We introduce a multi-modal reranker to improve the ranking quality of knowledge candidates for answer generation.
Experiments on OK-VQA and A-OKVQA show that the multi-modal reranker trained with distant supervision provides consistent improvements.
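A toy illustration of multimodal reranking: each candidate is scored against both the question text and the image. The weighted-sum fusion below is an assumed stand-in for the paper's learned reranker, not its actual model.

```python
import torch

def rerank_candidates(question_emb, image_emb, cand_text_embs, cand_img_embs,
                      alpha=0.5):
    """Score each knowledge candidate against the question text and the image,
    then sort best-first. Embeddings are assumed L2-normalized, so the dot
    products below are cosine similarities."""
    text_sim = cand_text_embs @ question_emb    # (K,)
    img_sim = cand_img_embs @ image_emb         # (K,)
    scores = alpha * text_sim + (1 - alpha) * img_sim
    return torch.argsort(scores, descending=True)
```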
arXiv Detail & Related papers (2024-07-17T02:58:52Z)
- UniRetriever: Multi-task Candidates Selection for Various Context-Adaptive Conversational Retrieval [47.40553943948673]
We propose a multi-task framework that functions as a universal retriever for three dominant retrieval tasks during the conversation: persona selection, knowledge selection, and response selection.
To this end, we design a dual-encoder architecture consisting of a context-adaptive dialogue encoder and a candidate encoder.
Experiments and analysis establish state-of-the-art retrieval quality both within and outside the training domain.
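A minimal sketch of such a dual encoder: `EmbeddingBag` stands in for the paper's context-adaptive dialogue encoder and candidate encoder, and the dot product scores any candidate type (persona, knowledge, or response).

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Toy dual encoder: a dialogue-context encoder plus a shared candidate
    encoder; relevance is the dot product of their embeddings. EmbeddingBag
    is a bag-of-words stand-in for the transformer encoders in the paper."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.ctx = nn.EmbeddingBag(vocab_size, dim)
        self.cand = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, ctx_ids, cand_ids):
        # (B_ctx, dim) @ (dim, B_cand) -> relevance matrix
        return self.ctx(ctx_ids) @ self.cand(cand_ids).T

model = DualEncoder()
scores = model(torch.randint(0, 1000, (2, 8)),   # 2 dialogue contexts
               torch.randint(0, 1000, (5, 8)))   # 5 candidates -> (2, 5)
```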
arXiv Detail & Related papers (2024-02-26T02:48:43Z)
- Retrieval-Generation Alignment for End-to-End Task-Oriented Dialogue System [40.33178881317882]
We propose applying maximal marginal likelihood to train a perceptive retriever, using signals from response generation as supervision.
We evaluate our approach on three task-oriented dialogue datasets using T5 and ChatGPT as the backbone models.
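A minimal sketch of a maximal-marginal-likelihood objective of this kind, marginalizing the generator's likelihood over the retriever's distribution so that gradients reach the retriever through the log-prior term; the exact parameterization in the paper may differ.

```python
import torch

def mml_loss(retriever_logits, gen_loglik):
    """Negative maximal marginal likelihood over K retrieved records:
    -log sum_k p(z_k | context) * p(response | context, z_k)."""
    log_prior = torch.log_softmax(retriever_logits, dim=-1)  # log p(z_k | x)
    return -torch.logsumexp(log_prior + gen_loglik, dim=-1)
```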
arXiv Detail & Related papers (2023-10-13T06:03:47Z)
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
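A minimal sketch of generation re-scoring in this spirit, assuming external `faithfulness` and `relevance` scorers; the linear combination is an illustrative simplification, not PICK's exact scoring function.

```python
def pick_best_response(candidates, faithfulness, relevance, beta=0.5):
    """Re-score sampled candidate responses and return the best one.
    `faithfulness(c)` scores grounding in the knowledge; `relevance(c)`
    scores fit with the dialogue history."""
    score = lambda c: beta * faithfulness(c) + (1 - beta) * relevance(c)
    return max(candidates, key=score)

# Toy usage with trivial scorers.
best = pick_best_response(
    ["The cafe opens at 9am.", "I like turtles."],
    faithfulness=lambda c: float("9am" in c),
    relevance=lambda c: float("cafe" in c))
```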
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
- Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog [42.088274728084265]
Retrieving proper domain knowledge from an external database lies at the heart of end-to-end task-oriented dialog systems.
Most existing systems blend knowledge retrieval with response generation and optimize them with direct supervision from reference responses.
We propose to decouple knowledge retrieval from response generation and introduce a multi-grained knowledge retriever.
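A toy sketch of decoupled, multi-grained retrieval over a table-like KB: coarse entity-row ranking followed by fine-grained attribute ranking within the kept rows. The two-stage cosine scoring is an assumption, not the paper's exact retriever.

```python
import numpy as np

def multi_grained_retrieve(context_emb, kb, embed, top_entities=3):
    """`kb` maps entity -> {attribute: value}; `embed(text)` -> vector."""
    def sim(text):
        v = embed(text)
        return float(context_emb @ v /
                     (np.linalg.norm(context_emb) * np.linalg.norm(v) + 1e-9))
    # Coarse grain: score each entity row by its concatenated contents.
    rows = sorted(kb, key=lambda e: sim(e + " " + " ".join(kb[e].values())),
                  reverse=True)[:top_entities]
    # Fine grain: score (entity, attribute, value) triples inside kept rows.
    triples = [(e, a, v) for e in rows for a, v in kb[e].items()]
    return sorted(triples, key=lambda t: sim(" ".join(t)), reverse=True)
```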
arXiv Detail & Related papers (2023-05-17T12:12:46Z)
- Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production [98.98161995555485]
We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.
As the core module, a query producer is used to generate queries from a dialogue context to interact with a search engine.
Experiments show that our query producer can achieve R@1 and R@5 rates of 62.4% and 74.8% for retrieving gold knowledge.
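A minimal sketch of the cheap-supervision idea: among sampled queries, the one whose search result overlaps most with the reference response is taken as the training target for the query producer. The Jaccard overlap used here is an illustrative choice, not necessarily the paper's signal.

```python
def label_best_query(queries, search, reference_response):
    """Pick the pseudo-gold query for training the query producer.
    `search(q)` is assumed to return a text snippet from a search engine."""
    def overlap(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(1, len(ta | tb))
    return max(queries, key=lambda q: overlap(search(q), reference_response))
```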
arXiv Detail & Related papers (2023-02-16T01:58:10Z)
- Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters [52.725200145600624]
We propose KnowExpert to bypass the retrieval process by injecting prior knowledge into the pre-trained language models with lightweight adapters.
Experimental results show that KnowExpert performs comparably with the retrieval-based baselines.
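A minimal sketch of a bottleneck adapter of the kind used for such lightweight knowledge injection; the dimensions and placement are assumptions.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: a small residual module inserted after a frozen
    transformer sublayer. Only these weights are trained, so knowledge can
    be injected without updating the pre-trained model."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, hidden):
        return hidden + self.up(self.act(self.down(hidden)))  # residual
```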
arXiv Detail & Related papers (2021-05-13T12:33:23Z)
- Learning to Retrieve Entity-Aware Knowledge and Generate Responses with Copy Mechanism for Task-Oriented Dialogue Systems [43.57597820119909]
This paper addresses task-oriented conversational modeling with unstructured knowledge access, track 1 of the 9th Dialogue System Technology Challenge (DSTC 9).
The challenge is separated into three subtasks: (1) knowledge-seeking turn detection, (2) knowledge selection, and (3) knowledge-grounded response generation.
We use pre-trained language models, ELECTRA and RoBERTa, as our base encoder for different subtasks.
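The three subtasks compose naturally into a pipeline; a minimal sketch, assuming `detect`, `select`, and `generate` are the trained subtask models (ELECTRA/RoBERTa-based encoders in the cited system):

```python
def respond(dialogue, kb, detect, select, generate):
    """Three-stage pipeline matching the challenge's subtasks."""
    if not detect(dialogue):                      # (1) knowledge-seeking turn?
        return generate(dialogue, knowledge=None)
    snippet = select(dialogue, kb)                # (2) knowledge selection
    return generate(dialogue, knowledge=snippet)  # (3) grounded generation
```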
arXiv Detail & Related papers (2020-12-22T11:36:37Z)
- Distilling Knowledge from Reader to Retriever for Question Answering [16.942581590186343]
We propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation.
We evaluate our method on question answering, obtaining state-of-the-art results.
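A minimal sketch of the distillation objective, assuming the reader's aggregated cross-attention scores per passage serve as soft relevance targets in place of gold retrieval labels, as in the reader-to-retriever setup:

```python
import torch
import torch.nn.functional as F

def distill_loss(retriever_logits, reader_attn_scores):
    """Train the retriever to match the reader's per-passage attention mass."""
    target = torch.softmax(reader_attn_scores, dim=-1).detach()
    log_pred = torch.log_softmax(retriever_logits, dim=-1)
    return F.kl_div(log_pred, target, reduction="sum")  # KL(target || pred)
```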
arXiv Detail & Related papers (2020-12-08T17:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.