Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog
- URL: http://arxiv.org/abs/2305.10149v1
- Date: Wed, 17 May 2023 12:12:46 GMT
- Title: Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog
- Authors: Fanqi Wan, Weizhou Shen, Ke Yang, Xiaojun Quan and Wei Bi
- Abstract summary: Retrieving proper domain knowledge from an external database lies at the heart of end-to-end task-oriented dialog systems.
Most existing systems blend knowledge retrieval with response generation and optimize them with direct supervision from reference responses.
We propose to decouple knowledge retrieval from response generation and introduce a multi-grained knowledge retriever.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retrieving proper domain knowledge from an external database lies at the
heart of end-to-end task-oriented dialog systems to generate informative
responses. Most existing systems blend knowledge retrieval with response
generation and optimize them with direct supervision from reference responses,
leading to suboptimal retrieval performance when the knowledge base becomes
large-scale. To address this, we propose to decouple knowledge retrieval from
response generation and introduce a multi-grained knowledge retriever (MAKER)
that includes an entity selector to search for relevant entities and an
attribute selector to filter out irrelevant attributes. To train the retriever,
we propose a novel distillation objective that derives supervision signals from
the response generator. Experiments conducted on three standard benchmarks with
both small and large-scale knowledge bases demonstrate that our retriever
performs knowledge retrieval more effectively than existing methods. Our code
has been made publicly available at https://github.com/18907305772/MAKER.
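The two selectors described in the abstract can be illustrated with a toy sketch. Everything below (the function names, the token-overlap scoring, and the sample knowledge base) is an illustrative assumption; MAKER itself uses learned encoders trained with a distillation objective from the response generator, not lexical overlap.

```python
# Toy sketch of MAKER's two-stage retrieval: an entity selector picks
# relevant KB rows, then an attribute selector drops irrelevant columns.
# Token overlap stands in for the learned, distillation-trained scorers.

def score(query_tokens, text):
    """Toy relevance score: number of tokens shared with the query."""
    return len(set(query_tokens) & set(text.lower().split()))

def retrieve(context, kb, top_k=2, attr_threshold=1):
    """Select top_k entities, then keep only context-relevant attributes."""
    q = set(context.lower().split())
    # Entity selector: rank KB rows by overlap with the dialogue context.
    ranked = sorted(kb, key=lambda e: score(q, " ".join(map(str, e.values()))),
                    reverse=True)
    selected = ranked[:top_k]
    # Attribute selector: filter out attributes the context never mentions
    # (the entity name is always kept so the row stays identifiable).
    out = []
    for e in selected:
        kept = {k: v for k, v in e.items()
                if k == "name" or score(q, str(v)) >= attr_threshold}
        out.append(kept)
    return out

kb = [
    {"name": "pizza hut", "area": "centre", "food": "italian"},
    {"name": "golden wok", "area": "north", "food": "chinese"},
    {"name": "taj tandoori", "area": "south", "food": "indian"},
]
print(retrieve("find me a chinese restaurant", kb, top_k=1))
```

With this query, only the "golden wok" row survives the entity selector, and its "area" attribute is filtered out because the context never mentions it, showing how attribute selection keeps the retrieved record compact.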
Related papers
- Retriever-and-Memory: Towards Adaptive Note-Enhanced Retrieval-Augmented Generation [72.70046559930555]
We propose a generic RAG approach called Adaptive Note-Enhanced RAG (Adaptive-Note) for complex QA tasks.
Specifically, Adaptive-Note introduces an overarching view of knowledge growth, iteratively gathering new information in the form of notes.
In addition, we employ an adaptive, note-based stop-exploration strategy to decide "what to retrieve and when to stop" to encourage sufficient knowledge exploration.
arXiv Detail & Related papers (2024-10-11T14:03:29Z)
- Multimodal Reranking for Knowledge-Intensive Visual Question Answering [77.24401833951096]
We introduce a multi-modal reranker to improve the ranking quality of knowledge candidates for answer generation.
Experiments on OK-VQA and A-OKVQA show that the multi-modal reranker trained with distant supervision provides consistent improvements.
arXiv Detail & Related papers (2024-07-17T02:58:52Z)
- Enhancing Retrieval and Managing Retrieval: A Four-Module Synergy for Improved Quality and Efficiency in RAG Systems [14.62114319247837]
Retrieval-augmented generation (RAG) techniques leverage the in-context learning capabilities of large language models (LLMs) to produce more accurate and relevant responses.
A critical component, the Query Rewriter module, enhances knowledge retrieval by generating a search-friendly query.
These four RAG modules synergistically improve the response quality and efficiency of the RAG system.
arXiv Detail & Related papers (2024-07-15T12:35:00Z)
- Redefining Information Retrieval of Structured Database via Large Language Models [9.65171883231521]
This paper introduces a novel retrieval augmentation framework called ChatLR.
It primarily employs the powerful semantic understanding ability of Large Language Models (LLMs) as retrievers to achieve precise and concise information retrieval.
Experimental results demonstrate the effectiveness of ChatLR in addressing user queries, achieving an overall information retrieval accuracy exceeding 98.8%.
arXiv Detail & Related papers (2024-05-09T02:37:53Z)
- Dual-Feedback Knowledge Retrieval for Task-Oriented Dialogue Systems [42.17072207835827]
We propose a retriever-generator architecture that harnesses a retriever to retrieve pertinent knowledge and a generator to generate system responses.
Our method demonstrates superior performance in task-oriented dialogue tasks, as evidenced by experimental results on three benchmark datasets.
arXiv Detail & Related papers (2023-10-23T03:21:11Z)
- Retrieval-Generation Alignment for End-to-End Task-Oriented Dialogue System [40.33178881317882]
We propose the application of maximal marginal likelihood to train a perceptive retriever by utilizing signals from response generation for supervision.
We evaluate our approach on three task-oriented dialogue datasets using T5 and ChatGPT as the backbone models.
arXiv Detail & Related papers (2023-10-13T06:03:47Z)
- Task Oriented Conversational Modelling With Subjective Knowledge [0.0]
DSTC-11 proposes a three-stage pipeline consisting of knowledge-seeking turn detection, knowledge selection, and response generation.
We propose entity retrieval methods which result in a more accurate and faster knowledge search.
Preliminary results show a 4% improvement in exact-match score on the knowledge selection task.
arXiv Detail & Related papers (2023-03-30T20:23:49Z)
- Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production [98.98161995555485]
We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.
As the core module, a query producer is used to generate queries from a dialogue context to interact with a search engine.
Experiments show that our query producer can achieve R@1 and R@5 rates of 62.4% and 74.8% for retrieving gold knowledge.
arXiv Detail & Related papers (2023-02-16T01:58:10Z)
- Generate rather than Retrieve: Large Language Models are Strong Context Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
arXiv Detail & Related papers (2022-09-21T01:30:59Z)
- Open-Retrieval Conversational Question Answering [62.11228261293487]
We introduce an open-retrieval conversational question answering (ORConvQA) setting, where we learn to retrieve evidence from a large collection before extracting answers.
We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.
arXiv Detail & Related papers (2020-05-22T19:39:50Z)
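Several entries above, ORConvQA in particular, share a retriever, reranker, and reader cascade. A minimal sketch of that layout follows; the simple lexical scoring functions are stand-ins of my own, not the Transformer-based components those papers actually train.

```python
# Toy retriever -> reranker -> reader cascade. In systems like ORConvQA each
# stage is a Transformer; cheap lexical heuristics stand in for them here.

def retriever(question, corpus, k=3):
    """Stage 1: recall-oriented retrieval (raw token overlap)."""
    q = set(question.lower().split())
    return sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def reranker(question, docs):
    """Stage 2: precision-oriented rescoring (overlap ratio, not raw count)."""
    q = set(question.lower().split())
    def ratio(d):
        toks = set(d.lower().split())
        return len(q & toks) / len(toks)
    return sorted(docs, key=ratio, reverse=True)

def reader(question, docs):
    """Stage 3: produce the answer; here, just return the top document."""
    return docs[0]

corpus = [
    "paris is the capital of france",
    "berlin is the capital of germany",
    "france borders spain and italy",
]
question = "what is the capital of france"
answer = reader(question, reranker(question, retriever(question, corpus)))
print(answer)
```

The design point the cascade illustrates is the recall/precision split: the retriever casts a wide net cheaply over the whole collection, and the reranker spends more effort per candidate on the short list it receives.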
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.