End-to-End Autoregressive Retrieval via Bootstrapping for Smart Reply
Systems
- URL: http://arxiv.org/abs/2310.18956v1
- Date: Sun, 29 Oct 2023 09:56:17 GMT
- Title: End-to-End Autoregressive Retrieval via Bootstrapping for Smart Reply
Systems
- Authors: Benjamin Towle, Ke Zhou
- Abstract summary: We consider a novel approach that learns the smart reply task end-to-end from a dataset of (message, reply set) pairs obtained via bootstrapping.
Empirical results show this method consistently outperforms a range of state-of-the-art baselines across three datasets.
- Score: 7.2949782290577945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reply suggestion systems represent a staple component of many instant
messaging and email systems. However, the requirement to produce sets of
replies, rather than individual replies, makes the task poorly suited for
out-of-the-box retrieval architectures, which only consider individual
message-reply similarity. As a result, these systems often rely on additional
post-processing modules to diversify the outputs. However, these approaches are
ultimately bottlenecked by the performance of the initial retriever, which in
practice struggles to present a sufficiently diverse range of options to the
downstream diversification module, leading to the suggestions being less
relevant to the user. In this paper, we consider a novel approach that
radically simplifies this pipeline through an autoregressive text-to-text
retrieval model that learns the smart reply task end-to-end from a dataset of
(message, reply set) pairs obtained via bootstrapping. Empirical results show
this method consistently outperforms a range of state-of-the-art baselines
across three datasets, corresponding to a 5.1%-17.9% improvement in relevance,
and a 0.5%-63.1% improvement in diversity compared to the best baseline
approach. We make our code publicly available.
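The key reduction is from set prediction to sequence generation: an existing retriever bootstraps (message, reply set) pairs, each reply set is serialized into a single target string, and a text-to-text model is fine-tuned to decode the whole set at once. Below is a minimal Python sketch of that data format, where the stub retriever and the "<sep>" delimiter are illustrative assumptions rather than the paper's exact implementation:
```python
SEP = " <sep> "  # hypothetical delimiter between replies in the target string

def retrieve_topk(message: str, k: int) -> list[str]:
    """Stand-in for the off-the-shelf retriever used to bootstrap labels."""
    canned = ["Sounds good!", "Can we reschedule?", "Thanks, see you then."]
    return canned[:k]

def bootstrap_pair(message: str, k: int = 3) -> tuple[str, str]:
    """Build one (source, target) pair; the target encodes the reply *set*,
    so a seq2seq model trained on it learns relevance and diversity jointly."""
    return message, SEP.join(retrieve_topk(message, k))

def parse_reply_set(decoded: str) -> list[str]:
    """Split an autoregressive decode back into individual suggestions."""
    return [r.strip() for r in decoded.split(SEP.strip()) if r.strip()]

src, tgt = bootstrap_pair("Are we still on for lunch tomorrow?")
print(src, "=>", parse_reply_set(tgt))
```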
Related papers
- MaFeRw: Query Rewriting with Multi-Aspect Feedbacks for Retrieval-Augmented Large Language Models [34.39053202801489]
In a real-world RAG system, the current query often involves spoken ellipses and ambiguous references from dialogue contexts.
We propose a novel query rewriting method MaFeRw, which improves RAG performance by integrating multi-aspect feedback from both the retrieval process and generated results.
Experimental results on two conversational RAG datasets demonstrate that MaFeRw achieves superior generation metrics and more stable training compared to baselines.
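A toy sketch of the multi-aspect idea, collapsing retrieval-side and generation-side feedback into one scalar reward for the rewriter; the aspect names and weights below are assumptions for illustration, not MaFeRw's actual formulation:
```python
def combined_reward(retrieval_recall: float,
                    answer_similarity: float,
                    rewrite_fluency: float,
                    weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted sum of feedback aspects, each assumed to lie in [0, 1]."""
    aspects = (retrieval_recall, answer_similarity, rewrite_fluency)
    return sum(w * a for w, a in zip(weights, aspects))

# A rewrite that retrieves well but reads awkwardly still scores decently.
print(combined_reward(0.9, 0.7, 0.4))  # 0.74
```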
arXiv Detail & Related papers (2024-08-30T07:57:30Z) - QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning [58.767866109043055]
We introduce Query-dependent Prompt Optimization (QPO), which iteratively fine-tunes a small pretrained language model to generate optimal prompts tailored to the input queries.
We derive insights from offline prompting demonstration data, which already exists in large quantities as a by-product of benchmarking diverse prompts on open-sourced tasks.
Experiments on various LLM scales and diverse NLP and math tasks demonstrate the efficacy and cost-efficiency of our method in both zero-shot and few-shot scenarios.
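A rough sketch of the offline starting point: mine existing benchmarking logs for the best-scoring prompt per query, which could then supervise a small prompt-generation model. The log schema here is an assumption:
```python
from collections import defaultdict

# Offline log of (query, prompt, task_score) triples, a by-product of
# benchmarking diverse prompts on open-sourced tasks.
logs = [
    ("solve 2+2",        "Let's think step by step.", 1.0),
    ("solve 2+2",        "Answer directly.",          0.6),
    ("translate 'hola'", "Answer directly.",          0.9),
]

best = defaultdict(lambda: ("", -1.0))
for query, prompt, score in logs:
    if score > best[query][1]:
        best[query] = (prompt, score)

# Per-query winners become training targets for the prompt generator.
for q, (p, s) in best.items():
    print(f"{q!r} -> {p!r} (score {s})")
```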
arXiv Detail & Related papers (2024-08-20T03:06:48Z) - MCS-SQL: Leveraging Multiple Prompts and Multiple-Choice Selection For Text-to-SQL Generation [10.726734105960924]
Large language models (LLMs) have enabled in-context learning (ICL)-based methods that significantly outperform fine-tuning approaches for text-to-SQL tasks.
This study considers the sensitivity of LLMs to the prompts and introduces a novel approach that leverages multiple prompts to explore a broader search space for possible answers.
We establish a new SOTA performance on the BIRD benchmark in terms of both the accuracy and efficiency of the generated queries.
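A hedged sketch of the multiple-prompt / multiple-choice pattern: sample one candidate query per prompt variant, de-duplicate, then ask the model to choose. The `llm` stub stands in for a real completion API:
```python
def llm(prompt: str) -> str:
    """Placeholder for an LLM call; always returns the same toy SQL here."""
    return "SELECT name FROM users WHERE age > 30;"

PROMPT_VARIANTS = [
    "Write SQLite for: {q}",
    "Translate to SQL, using the schema: {q}",
    "You are a SQL expert. Answer with SQL only: {q}",
]

def mcs_sql(question: str) -> str:
    # Multiple prompts broaden the search space of candidate queries.
    candidates = list(dict.fromkeys(
        llm(p.format(q=question)) for p in PROMPT_VARIANTS))
    # Multiple-choice selection reduces the candidates to a single answer.
    choices = "\n".join(f"({i}) {c}" for i, c in enumerate(candidates))
    return llm(f"Question: {question}\nPick the best SQL:\n{choices}")

print(mcs_sql("Names of users older than 30"))
```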
arXiv Detail & Related papers (2024-05-13T04:59:32Z) - Adaptive-RAG: Learning to Adapt Retrieval-Augmented Large Language Models through Question Complexity [59.57065228857247]
Retrieval-augmented Large Language Models (LLMs) have emerged as a promising approach to enhancing response accuracy in several tasks, such as Question-Answering (QA).
We propose a novel adaptive QA framework that can dynamically select the most suitable strategy for (retrieval-augmented) LLMs based on query complexity.
We validate our model on a set of open-domain QA datasets, covering multiple query complexities, and show that ours enhances the overall efficiency and accuracy of QA systems.
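A minimal sketch of complexity-based routing; the keyword heuristic below merely stands in for the trained complexity classifier:
```python
def classify_complexity(query: str) -> str:
    """Toy stand-in for a learned classifier over query complexity."""
    q = query.lower()
    if " and " in q or "compare" in q:
        return "multi"
    if any(w in q for w in ("who", "when", "where")):
        return "single"
    return "none"

def answer(query: str) -> str:
    strategy = classify_complexity(query)
    if strategy == "none":
        return f"[LLM-only, no retrieval] {query}"
    if strategy == "single":
        return f"[retrieve once, then answer] {query}"
    return f"[iterative retrieve-and-reason] {query}"

print(answer("Who wrote Dune?"))
print(answer("Compare Dune and Foundation themes"))
```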
arXiv Detail & Related papers (2024-03-21T13:52:30Z) - Ask Optimal Questions: Aligning Large Language Models with Retriever's
Preference in Conversational Search [25.16282868262589]
RetPO is designed to optimize a language model (LM) for reformulating search queries in line with the preferences of the target retrieval systems.
We construct a large-scale dataset called Retrievers' Feedback on over 410K query rewrites across 12K conversations.
The resulting model achieves state-of-the-art performance on two recent conversational search benchmarks.
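One plausible way to turn retriever feedback into preference data for optimizing the rewriter; the scoring proxy below is a toy assumption:
```python
def retriever_score(rewrite: str) -> float:
    """Toy proxy for how well a rewrite retrieves; a real system would use
    downstream retrieval metrics such as recall of gold passages."""
    return len(set(rewrite.split())) / 10.0

def preference_pairs(rewrites: list[str]) -> list[tuple[str, str]]:
    ranked = sorted(rewrites, key=retriever_score, reverse=True)
    # Pair every better-retrieving rewrite with every worse one
    # as (chosen, rejected) examples for preference optimization.
    return [(a, b) for i, a in enumerate(ranked) for b in ranked[i + 1:]]

print(preference_pairs(["who won it", "who won the 2020 world series"]))
```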
arXiv Detail & Related papers (2024-02-19T04:41:31Z) - Model-Based Simulation for Optimising Smart Reply [3.615981646205045]
Smart Reply (SR) systems present a user with a set of replies, of which one can be selected in place of having to type out a response.
Previous work has focused largely on post-hoc diversification, rather than explicitly learning to predict sets of responses.
We present a novel method, SimSR, which employs model-based simulation to discover high-value response sets.
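A toy illustration of why simulation helps with *sets*: a reply set is valued by how well its best member covers each simulated user intent, which rewards diversity rather than just top-1 relevance. The overlap scorer and hand-written intents are stand-ins for SimSR's learned models:
```python
import itertools

def match(reply: str, intent: str) -> float:
    """Toy relevance model: word overlap between reply and intent."""
    r, i = set(reply.lower().split()), set(intent.lower().split())
    return len(r & i) / max(len(i), 1)

def set_value(reply_set: tuple, intents: list[str]) -> float:
    # Expected best match over simulated intents: a set only needs ONE
    # member to fit each intent, so coverage beats redundancy.
    return sum(max(match(r, it) for r in reply_set)
               for it in intents) / len(intents)

candidates = ["yes sure", "sorry I can't", "sounds great"]
intents = ["yes that sounds great", "no sorry I can't make it"]
best = max(itertools.combinations(candidates, 2),
           key=lambda s: set_value(s, intents))
print(best)  # picks one accepting and one declining reply
```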
arXiv Detail & Related papers (2023-05-26T12:04:33Z) - Contextual Fine-to-Coarse Distillation for Coarse-grained Response
Selection in Open-Domain Conversations [48.046725390986595]
We propose a Contextual Fine-to-Coarse (CFC) distilled model for coarse-grained response selection in open-domain conversations.
To evaluate the performance of our proposed model, we construct two new datasets based on the Reddit comments dump and Twitter corpus.
arXiv Detail & Related papers (2021-09-24T08:22:35Z) - Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z) - Automated Concatenation of Embeddings for Structured Prediction [75.44925576268052]
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
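A toy version of the search loop, with random search standing in for ACE's reinforcement-learned controller and a fake accuracy oracle in place of training a task model:
```python
import random

EMBEDDINGS = ["word", "char", "bert", "flair"]

def task_accuracy(mask: tuple) -> float:
    """Fake oracle: pretend char+bert help most, with a small cost per
    extra embedding. ACE instead trains the task model to get a reward."""
    return 0.70 + 0.10 * mask[2] + 0.05 * mask[1] - 0.02 * sum(mask)

best_mask, best_acc = None, -1.0
for _ in range(50):  # random search stands in for the RL controller
    mask = tuple(random.randint(0, 1) for _ in EMBEDDINGS)
    if sum(mask) == 0:
        continue  # must concatenate at least one embedding
    acc = task_accuracy(mask)
    if acc > best_acc:
        best_mask, best_acc = mask, acc

print([e for e, m in zip(EMBEDDINGS, best_mask) if m], round(best_acc, 3))
```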
arXiv Detail & Related papers (2020-10-10T14:03:20Z) - Tradeoffs in Sentence Selection Techniques for Open-Domain Question
Answering [54.541952928070344]
We describe two groups of models for sentence selection: QA-based approaches, which run a full-fledged QA system to identify answer candidates, and retrieval-based models, which find parts of each passage specifically related to each question.
We show that very lightweight QA models can do well at this task, but retrieval-based models are faster still.
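A sketch of the lightweight retrieval-based route, ranking a passage's sentences by TF-IDF overlap with the question using only the standard library (a QA-based selector would instead run a reading model over each candidate):
```python
import math
import re
from collections import Counter

def toks(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def rank_sentences(question: str, sentences: list[str]) -> list[str]:
    docs = [toks(s) for s in sentences]
    df = Counter(w for d in docs for w in set(d))  # document frequency
    n = len(docs)

    def score(doc: list[str]) -> float:
        tf = Counter(doc)
        return sum(tf[w] * math.log(n / df[w])
                   for w in toks(question) if w in tf)

    return [s for _, s in sorted(zip(map(score, docs), sentences),
                                 reverse=True)]

sents = ["The cat sat on the mat.",
         "Paris is the capital of France.",
         "Dogs bark."]
print(rank_sentences("What is the capital of France?", sents)[0])
```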
arXiv Detail & Related papers (2020-09-18T23:39:15Z)