Hybrid Retrieval-Augmented Generation for Real-time Composition Assistance
- URL: http://arxiv.org/abs/2308.04215v2
- Date: Mon, 5 Feb 2024 14:55:19 GMT
- Title: Hybrid Retrieval-Augmented Generation for Real-time Composition Assistance
- Authors: Menglin Xia, Xuchao Zhang, Camille Couturier, Guoqing Zheng, Saravan Rajmohan, Victor Ruhle
- Abstract summary: We propose the Hybrid Retrieval-Augmented Generation (HybridRAG) framework.
It efficiently combines a cloud-based large language model with a smaller, client-side language model through retrieval-augmented memory.
Our experiments on five benchmark datasets demonstrate that HybridRAG significantly improves utility over client-only models while maintaining low latency.
- Score: 19.011514931732904
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Retrieval augmentation enhances the performance of traditional language models by incorporating additional context. However, the computational demands of retrieval-augmented large language models (LLMs) pose a challenge when applying them to real-time tasks such as composition assistance. To address this limitation, we propose the Hybrid Retrieval-Augmented Generation (HybridRAG) framework, a novel approach that efficiently combines a cloud-based LLM with a smaller, client-side language model through retrieval-augmented memory. This integration enables the client model to generate effective responses, benefiting from the LLM's capabilities and contextual information. Additionally, through an asynchronous memory update mechanism, the client model can deliver real-time completions to user inputs swiftly, without waiting for responses from the cloud. Our experiments on five benchmark datasets demonstrate that HybridRAG significantly improves utility over client-only models while maintaining low latency.
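The abstract describes two paths: a low-latency client path that reads from a retrieval-augmented memory, and an asynchronous path that folds cloud-LLM responses into that memory. Below is a minimal, hypothetical sketch of that split; every class and function name is an illustrative assumption, not the paper's actual API.

```python
import asyncio

class Memory:
    """Toy retrieval-augmented memory keyed by input prefix."""
    def __init__(self):
        self.entries = {}  # prefix -> cached cloud response

    def retrieve(self, prefix):
        # Return cached responses whose key shares a word with the prefix.
        words = set(prefix.split())
        return [v for k, v in self.entries.items() if words & set(k.split())]

    def insert(self, prefix, response):
        self.entries[prefix] = response

async def cloud_generate(prefix):
    await asyncio.sleep(1.0)  # stand-in for network + large-model latency
    return f"[cloud completion for: {prefix}]"

def client_generate(prefix, context):
    # Stand-in for the small on-device model conditioned on retrieved memory.
    hint = context[0] if context else "no memory yet"
    return f"{prefix}... ({hint})"

async def assist(memory, prefix):
    # Fire the slow cloud call in the background, but answer from the
    # client model immediately: the user never waits on the cloud.
    async def refresh():
        memory.insert(prefix, await cloud_generate(prefix))
    asyncio.create_task(refresh())
    return client_generate(prefix, memory.retrieve(prefix))

async def demo():
    memory = Memory()
    print(await assist(memory, "write an email about the launch"))
    await asyncio.sleep(1.1)  # let the first background update land
    print(await assist(memory, "write an email about the launch date"))
    await asyncio.sleep(1.1)  # drain the second background update

asyncio.run(demo())
```

The second call benefits from the first call's cloud response only because the memory was refreshed in the background; neither call blocks on the cloud.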
Related papers
- Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models [66.24055500785657]
Traditional turn-based chat systems prevent users from verbally interacting with the system while it is generating responses.
To overcome this limitation, we adapt existing LLMs to listen to users while generating output and to provide users with instant feedback.
We build a dataset consisting of alternating time slices of queries and responses, covering typical feedback types in instantaneous interactions (a toy example of this slice format follows the entry).
arXiv Detail & Related papers (2024-06-22T03:20:10Z)
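As a toy illustration of the alternating time-slice format mentioned in the entry above (the `Slice` fields are assumptions for illustration, not the paper's schema):

```python
from dataclasses import dataclass

@dataclass
class Slice:
    t: int      # time-slice index
    role: str   # "user" or "model"
    text: str   # empty string when that side is silent in this slice

# One duplex conversation: the model starts answering mid-stream and
# adjusts to user input that arrives while it is still generating.
duplex_example = [
    Slice(0, "user",  "Tell me about Mars."),
    Slice(0, "model", ""),                      # still listening
    Slice(1, "user",  "Actually, just its moons."),
    Slice(1, "model", "Mars has two moons,"),   # generation has begun
    Slice(2, "user",  ""),                      # user silent
    Slice(2, "model", "Phobos and Deimos."),
]

for s in duplex_example:
    print(f"t={s.t} {s.role:>5}: {s.text!r}")
```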
- Towards Client Driven Federated Learning [7.528642177161784]
We introduce Client-Driven Federated Learning (CDFL), a novel FL framework that puts clients in the driving role.
In CDFL, each client independently and asynchronously updates its model by uploading its locally trained model to the server and receiving a customized model tailored to its local task (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-24T10:17:49Z)
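A toy, hypothetical rendering of the client-driven exchange in the entry above; the server-side "customization" here is a trivial stand-in (blending local and global weights), not the paper's method:

```python
class Server:
    """Stand-in server: stores uploads and returns 'customized' models."""
    def __init__(self, global_weights):
        self.global_weights = global_weights
        self.inbox = {}

    def upload(self, client_id, weights):
        self.inbox[client_id] = weights  # asynchronous drop-off, no barrier

    def download_customized(self, client_id):
        local = self.inbox[client_id]
        # Trivial stand-in personalization: blend local and global weights.
        return [(l + g) / 2 for l, g in zip(local, self.global_weights)]

def client_round(client_id, weights, server):
    trained = [w + 0.1 for w in weights]  # stand-in for local training
    server.upload(client_id, trained)     # push whenever this client is ready
    return server.download_customized(client_id)

server = Server(global_weights=[0.0, 0.0])
print(client_round("c1", [1.0, 2.0], server))  # [0.55, 1.05]
```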
- CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models [3.2186308082558623]
We introduce the TinyAgent model, trained on a meticulously curated high-quality dataset.
We also present the Collaborative Multi-Agent Tuning (CMAT) framework, an innovative system designed to augment language agent capabilities.
In this research, we propose a new communication agent framework that integrates multi-agent systems with environmental feedback mechanisms.
arXiv Detail & Related papers (2024-04-02T06:07:35Z)
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach dynamically expands a pre-trained CLIP model through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector (a toy MoE adapter follows the entry).
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
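A pure-Python stand-in for the Mixture-of-Experts adapter named above: several small expert transforms applied to a frozen backbone feature, mixed by a learned router. Shapes and the routing rule are assumptions; the paper's adapters and its Distribution Discriminative Auto-Selector are more involved.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_adapter(feature, experts, router):
    # router: one routing weight vector per expert; gate by dot product.
    gates = softmax([sum(w * f for w, f in zip(r, feature)) for r in router])
    outputs = [expert(feature) for expert in experts]
    # Gate-weighted sum of expert outputs, dimension by dimension.
    return [sum(g * out[i] for g, out in zip(gates, outputs))
            for i in range(len(feature))]

experts = [lambda f: [x * 0.5 for x in f],    # expert 0: damp
           lambda f: [x + 1.0 for x in f]]    # expert 1: shift
router = [[0.1, 0.2], [0.3, -0.1]]
print(moe_adapter([1.0, 2.0], experts, router))  # gated blend of both experts
```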
- Online Adaptation of Language Models with a Memory of Amortized Contexts [86.91360597169563]
Memory of Amortized Contexts (MAC) is an efficient and effective online adaptation framework for large language models.
We propose an amortized feature extraction and memory-augmentation approach to compress and extract information from new documents.
Our experiments demonstrate the superiority of MAC in multiple aspects, including online adaptation performance, time, and memory efficiency (a toy compress-into-memory example follows the entry).
arXiv Detail & Related papers (2024-03-07T08:34:57Z)
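A toy, hypothetical reading of the compress-into-memory idea above: each new document becomes one fixed-size vector in a memory bank, and adaptation means scoring the bank against a query rather than updating model weights. Hashing stands in for the learned amortized extractor.

```python
import hashlib

def compress(document, dim=8):
    """Amortized 'feature extraction' stand-in: document -> fixed vector."""
    vec = [0.0] * dim
    for token in document.split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

# The memory bank holds one compressed vector per seen document.
memory_bank = [compress(d) for d in ("new paper about llms",
                                     "release notes about memory")]

# Online adaptation step: score the bank against a query's vector
# instead of fine-tuning any model weights.
query = compress("question about llms")
scores = [sum(q * m for q, m in zip(query, mem)) for mem in memory_bank]
print(scores)  # higher score = more relevant compressed memory
```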
- Cloud-Device Collaborative Learning for Multimodal Large Language Models [24.65882336700547]
We introduce a Cloud-Device Collaborative Continual Adaptation framework to enhance the performance of compressed, device-deployed MLLMs.
Our framework is structured into three key components: a device-to-cloud uplink for efficient data transmission, cloud-based knowledge adaptation, and an optimized cloud-to-device downlink for model deployment.
arXiv Detail & Related papers (2023-12-26T18:46:14Z)
- Re-parameterized Low-rank Prompt: Generalize a Vision-Language Model within 0.5K Parameters [75.28536311904489]
We develop a new type of prompt, the Re-parameterized Low-rank Prompt (RLP), for both efficient and effective adaptation.
On a series of tasks over 11 datasets, RLP significantly increases the average downstream accuracy of classic prompt tuning by up to 5.25% using merely 0.5K parameters (a parameter-count sketch follows the entry).
arXiv Detail & Related papers (2023-12-17T20:42:43Z)
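A minimal sketch of the low-rank re-parameterization the title suggests: the trained prompt is the product of two skinny factors instead of a dense tokens-by-dimension matrix. All shapes are illustrative assumptions.

```python
import random

tokens, dim, rank = 4, 64, 2
A = [[random.gauss(0, 0.02) for _ in range(rank)] for _ in range(tokens)]
B = [[random.gauss(0, 0.02) for _ in range(dim)] for _ in range(rank)]

# Reconstructed prompt P = A @ B, to be prepended to input embeddings.
P = [[sum(A[i][k] * B[k][j] for k in range(rank)) for j in range(dim)]
     for i in range(tokens)]

dense = tokens * dim                    # parameters of a dense prompt
low_rank = tokens * rank + rank * dim   # parameters actually trained
print(dense, low_rank)                  # 256 vs 136; the gap grows with dim
```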
- Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy [164.83371924650294]
We show that strong performance can be achieved by a method we call Iter-RetGen, which synergizes retrieval and generation in an iterative manner.
A model output shows what might be needed to finish a task, and thus provides an informative context for retrieving more relevant knowledge.
Iter-RetGen processes all retrieved knowledge as a whole and largely preserves the flexibility in generation without structural constraints (a minimal loop is sketched after the entry).
arXiv Detail & Related papers (2023-05-24T16:17:36Z)
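A minimal, hypothetical rendering of the loop described above: each draft becomes the next retrieval query, so the model's own output surfaces what knowledge is still missing. Retrieval and generation are trivial stand-ins.

```python
corpus = {
    "marie curie": "Marie Curie won Nobel Prizes in physics and chemistry.",
    "nobel physics 1903": "The 1903 physics prize was shared with Becquerel.",
}

def retrieve(query):
    # Toy retriever: return docs whose key words appear in the query.
    q = query.lower()
    return [doc for key, doc in corpus.items()
            if any(w in q for w in key.split())]

def generate(question, evidence):
    # Toy generator: a draft that exposes the evidence it used.
    return question + " | evidence: " + " ".join(evidence)

def iter_retgen(question, iterations=2):
    output = question                        # first query is the question
    for _ in range(iterations):
        evidence = retrieve(output)          # retrieve with the last draft
        output = generate(question, evidence)  # regenerate with all evidence
    return output

# The first pass pulls the Curie doc; its draft mentions "physics",
# which lets the second pass also retrieve the 1903-prize doc.
print(iter_retgen("Which prizes did Marie Curie win?"))
```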
- TSGM: A Flexible Framework for Generative Modeling of Synthetic Time Series [61.436361263605114]
Time series data are often scarce or highly sensitive, which precludes the sharing of data between researchers and industrial organizations.
We introduce Time Series Generative Modeling (TSGM), an open-source framework for the generative modeling of synthetic time series.
arXiv Detail & Related papers (2023-05-19T10:11:21Z)
- HiFlash: Communication-Efficient Hierarchical Federated Learning with Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association [38.99309610943313]
Federated learning (FL) is a promising paradigm that enables collaboratively learning a shared model across massive clients.
For many existing FL systems, clients need to frequently exchange model parameters of large data size with the remote cloud server directly via wide-area networks (WAN).
We resort to the hierarchical federated learning paradigm of HiFL, which reaps the benefits of mobile edge computing (a two-tier aggregation toy follows the entry).
arXiv Detail & Related papers (2023-01-16T14:39:04Z)
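A toy two-tier aggregation in the hierarchical spirit of the entry above: clients average frequently at a nearby edge (cheap LAN traffic) and edges average rarely at the cloud (expensive WAN traffic). Adaptive staleness control and client-edge association are omitted.

```python
def average(models):
    """Coordinate-wise mean of a list of weight vectors."""
    return [sum(ws) / len(ws) for ws in zip(*models)]

clients_per_edge = {
    "edge-1": [[1.0, 2.0], [3.0, 4.0]],   # client weight vectors
    "edge-2": [[5.0, 6.0]],
}

# Frequent, low-cost aggregation at each edge over the LAN.
edge_models = {e: average(ms) for e, ms in clients_per_edge.items()}

# Infrequent, high-cost aggregation at the cloud over the WAN.
global_model = average(list(edge_models.values()))
print(edge_models, global_model)  # {'edge-1': [2.0, 3.0], ...} [3.5, 4.5]
```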
- FedNet2Net: Saving Communication and Computations in Federated Learning with Model Growing [0.0]
Federated learning (FL) is a recently developed area of machine learning.
In this paper, a novel scheme based on the notion of "model growing" is proposed.
The proposed approach is tested extensively on three standard benchmarks and is shown to achieve substantial reductions in communication and client computation (a function-preserving growing step is sketched below).
arXiv Detail & Related papers (2022-07-19T21:54:53Z)
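A hypothetical sketch of one function-preserving "model growing" step in the Net2Net style the title alludes to: duplicating a hidden unit and halving its outgoing weights leaves the network's function unchanged, so federated training can start with a small model and add capacity only when needed.

```python
def widen(w_in, w_out, unit):
    """Duplicate hidden `unit`; the network's outputs are unchanged."""
    w_in = w_in + [list(w_in[unit])]   # copy the unit's incoming weights
    w_out = [row[:] for row in w_out]
    for row in w_out:
        row[unit] /= 2.0               # original unit keeps half the weight
        row.append(row[unit])          # the duplicate gets the other half
    return w_in, w_out

# Hidden layer with 2 units and one output reading both of them.
w_in = [[0.5, -1.0], [2.0, 0.25]]      # per-hidden-unit input weights
w_out = [[1.0, 3.0]]                   # output row over hidden units
w_in2, w_out2 = widen(w_in, w_out, unit=1)
print(len(w_in2), w_out2)              # 3 hidden units, same function
```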