Extensible Embedding: A Flexible Multiplier For LLM's Context Length
- URL: http://arxiv.org/abs/2402.11577v1
- Date: Sun, 18 Feb 2024 12:50:19 GMT
- Title: Extensible Embedding: A Flexible Multiplier For LLM's Context Length
- Authors: Ninglu Shao, Shitao Xiao, Zheng Liu, Peitian Zhang
- Abstract summary: Large language models (LLMs) call for extension of context to handle many critical applications.
Existing approaches are prone to expensive costs and inferior quality of context extension.
We propose Extensible Embedding, which realizes high-quality extension of LLM's context with strong flexibility and cost-effectiveness.
- Score: 6.9004592877749005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) call for extension of their context to handle many critical applications. However, the existing approaches are prone to expensive costs and inferior quality of context extension. In this work, we propose Extensible Embedding, which realizes high-quality extension of the LLM's context with strong flexibility and cost-effectiveness. Extensible embedding stands as an enhancement of the typical token embedding: it represents the information of an extensible scope of context instead of a single token. By leveraging such compact input units of higher information density, the LLM can access a vast scope of context even with a small context window. Extensible embedding is systematically optimized in architecture and training method, which leads to multiple advantages. 1) High flexibility of context extension: it flexibly supports ad-hoc extension to diverse context lengths. 2) Strong sample efficiency of training: the embedding model can be learned in a cost-effective way. 3) Superior compatibility with existing LLMs: the extensible embedding can be seamlessly introduced as a plug-in component. Comprehensive evaluations on long-context language modeling and understanding tasks verify extensible embedding as an effective, efficient, flexible, and compatible method to extend the LLM's context.
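To make the mechanism concrete, below is a minimal sketch of the idea the abstract describes: each compact input unit summarizes a fixed-size chunk of raw token embeddings, so a small context window covers a proportionally larger span. The class name, the projection-plus-mean-pooling compressor, and all shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of "one compact input unit per chunk of tokens".
# The compressor below (projection + mean pooling) is an assumption for illustration.
import torch
import torch.nn as nn


class ChunkCompressor(nn.Module):
    """Maps every `chunk_size` token embeddings to one compact input unit."""

    def __init__(self, d_model: int, chunk_size: int):
        super().__init__()
        self.chunk_size = chunk_size
        self.proj = nn.Linear(d_model, d_model)  # learned per-token transform

    def forward(self, token_emb: torch.Tensor) -> torch.Tensor:
        # token_emb: [batch, seq_len, d_model]; seq_len assumed divisible by chunk_size
        b, seq_len, d = token_emb.shape
        chunks = token_emb.view(b, seq_len // self.chunk_size, self.chunk_size, d)
        # One embedding per chunk: project each token, then mean-pool within the chunk.
        return self.proj(chunks).mean(dim=2)  # [batch, seq_len / chunk_size, d_model]


# 4096 tokens of long context become 512 compact units (an 8x length "multiplier");
# they are concatenated with the ordinary embeddings of the most recent tokens so the
# combined input still fits a small context window.
compressor = ChunkCompressor(d_model=1024, chunk_size=8)
long_context = torch.randn(1, 4096, 1024)   # raw token embeddings of the long context
recent_tokens = torch.randn(1, 512, 1024)   # ordinary embeddings of the local window
model_input = torch.cat([compressor(long_context), recent_tokens], dim=1)
print(model_input.shape)  # torch.Size([1, 1024, 1024])
```

The point of the sketch is only the input-side arithmetic: with a chunk size of k, a window of W positions can represent roughly k times more underlying context.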
Related papers
- Enhancing LLM's Cognition via Structurization [41.13997892843677]
Large language models (LLMs) process input contexts through a causal and sequential perspective.
This paper presents a novel concept of context structurization.
Specifically, we transform the plain, unordered contextual sentences into well-ordered and hierarchically structurized elements.
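As a rough, hypothetical illustration of what such structurization could look like in practice (the three-level scope/aspect/sentence hierarchy and the rendering function are assumptions, not the paper's method):

```python
# Hypothetical sketch: flat contextual sentences are grouped into a small hierarchy
# and re-rendered as ordered, nested text that is prepended to the LLM's prompt.
structured_context = {
    "scope": "Company Q3 earnings call",
    "aspects": {
        "Revenue": [
            "Total revenue grew 12% year over year.",
            "The cloud segment contributed 40% of revenue.",
        ],
        "Outlook": ["Guidance for Q4 was raised by 5%."],
    },
}


def render(ctx: dict) -> str:
    """Render the hierarchy as well-ordered, indented text for the prompt."""
    lines = [f"Scope: {ctx['scope']}"]
    for i, (aspect, sentences) in enumerate(ctx["aspects"].items(), start=1):
        lines.append(f"{i}. {aspect}")
        lines.extend(f"   - {s}" for s in sentences)
    return "\n".join(lines)


print(render(structured_context))
```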
arXiv Detail & Related papers (2024-07-23T12:33:58Z)
- Soft Prompting for Unlearning in Large Language Models [11.504012974208466]
This work focuses on investigating machine unlearning for Large Language Models motivated by data protection regulations.
We propose a framework, Soft Prompting for Unlearning (SPUL), that learns prompt tokens that can be appended to an arbitrary query to induce unlearning.
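For intuition, here is a minimal sketch of the soft-prompting pattern the summary describes: a handful of learnable prompt embeddings are appended to the query's token embeddings, and only those embeddings would be optimized for the unlearning objective. The class, shapes, and initialization are assumptions, not the SPUL implementation.

```python
# Minimal soft-prompting sketch: the prompt vectors are the only trainable parameters;
# the LLM and its embedding layer stay frozen. All names and shapes are illustrative.
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    def __init__(self, num_prompt_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        # query_emb: [batch, seq_len, d_model] from the frozen LLM's embedding layer
        batch = query_emb.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([query_emb, prompt], dim=1)  # prompt appended to the query


soft_prompt = SoftPrompt(num_prompt_tokens=16, d_model=768)
query_emb = torch.randn(2, 32, 768)   # embeddings of an arbitrary query
extended = soft_prompt(query_emb)     # [2, 48, 768], fed to the frozen LLM
```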
arXiv Detail & Related papers (2024-06-17T19:11:40Z)
- Multi-modal Instruction Tuned LLMs with Fine-grained Visual Perception [63.03288425612792]
We propose AnyRef, a general MLLM that can generate pixel-wise object perceptions and natural language descriptions from multi-modality references.
Our model achieves state-of-the-art results across multiple benchmarks, including diverse modality referring segmentation and region-level referring expression generation.
arXiv Detail & Related papers (2024-03-05T13:45:46Z)
- Structure Guided Prompt: Instructing Large Language Model in Multi-Step Reasoning by Exploring Graph Structure of the Text [44.81698187939784]
This paper introduces Structure Guided Prompt, a framework designed to improve the multi-step reasoning capabilities of Large Language Models (LLMs).
Our experiments show that this framework significantly enhances the reasoning capabilities of LLMs, enabling them to excel in a broader spectrum of natural language scenarios.
arXiv Detail & Related papers (2024-02-20T22:56:23Z)
- BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models [13.229325187638432]
Large language models (LLMs) call for extension of context to handle many critical applications.
Existing approaches are prone to expensive costs and inferior quality of context extension.
Extensible embedding stands as an enhancement of the typical token embedding.
arXiv Detail & Related papers (2024-02-18T12:41:01Z)
- Flexibly Scaling Large Language Models Contexts Through Extensible Tokenization [6.9004592877749005]
Large language models (LLMs) are in need of sufficient contexts to handle many critical applications.
Although the size of the context window can be extended by fine-tuning, doing so incurs substantial costs at both the training and inference stages.
We present Extensible Tokenization as an alternative method which realizes the flexible scaling of LLMs' context.
arXiv Detail & Related papers (2024-01-15T16:00:50Z)
- Video Understanding with Large Language Models: A Survey [97.29126722004949]
Given the remarkable capabilities of large language models (LLMs) in language and multimodal tasks, this survey provides a detailed overview of recent advancements in video understanding.
The emergent capabilities of Vid-LLMs are surprisingly advanced, particularly their ability for open-ended multi-granularity reasoning.
This survey presents a comprehensive study of the tasks, datasets, benchmarks, and evaluation methodologies for Vid-LLMs.
arXiv Detail & Related papers (2023-12-29T01:56:17Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs by: 1) generalizing to out-of-distribution data, 2) elucidating how LLMs benefit from discriminative models, and 3) minimizing hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Towards More Unified In-context Visual Understanding [74.55332581979292]
We present a new ICL framework for visual understanding with multi-modal output enabled.
First, we quantize and embed both text and visual prompts into a unified representational space.
Then a decoder-only sparse transformer architecture is employed to perform generative modeling on them.
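A minimal sketch of the quantize-into-one-token-space step described above: visual features are snapped to their nearest codebook entry, and the resulting discrete codes share an embedding table (and thus a single input sequence) with text tokens. The vocabulary sizes, codebook, and offset scheme are assumptions for illustration only.

```python
# Rough sketch: image features are quantized to discrete codes that live in the same
# id space (and embedding table) as text tokens, yielding one unified sequence.
import torch
import torch.nn as nn

text_vocab, visual_codes, d_model = 32000, 8192, 512
embed = nn.Embedding(text_vocab + visual_codes, d_model)   # unified embedding table
codebook = torch.randn(visual_codes, d_model)               # learned visual codebook (assumed)


def quantize(visual_feats: torch.Tensor) -> torch.Tensor:
    # Nearest-neighbour lookup: each visual feature becomes a discrete code id,
    # offset past the text vocabulary so the two modalities never collide.
    dists = torch.cdist(visual_feats, codebook)              # [num_patches, visual_codes]
    return dists.argmin(dim=-1) + text_vocab


text_ids = torch.randint(0, text_vocab, (12,))               # tokenized text prompt
visual_ids = quantize(torch.randn(16, d_model))              # 16 quantized image patches
sequence = embed(torch.cat([text_ids, visual_ids]))          # one sequence for a decoder-only model
print(sequence.shape)  # torch.Size([28, 512])
```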
arXiv Detail & Related papers (2023-12-05T06:02:21Z)
- CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets [75.64181719386497]
We present CRAFT, a tool creation and retrieval framework for large language models (LLMs).
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning.
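Purely as an illustration of the create-then-retrieve pattern, the sketch below pairs a tiny curated toolset of Python functions with a naive keyword-overlap retriever that decides which tools to surface to the LLM; CRAFT's actual tool creation and retrieval are more sophisticated, and every name here is hypothetical.

```python
# Hypothetical toolset: small, task-specific Python functions with docstrings.
def km_to_miles(km: float) -> float:
    """Convert a distance in km to miles."""
    return km * 0.621371


def celsius_to_f(c: float) -> float:
    """Convert a temperature in celsius to fahrenheit."""
    return c * 9 / 5 + 32


TOOLSET = {"km_to_miles": km_to_miles, "celsius_to_f": celsius_to_f}


def retrieve(task: str, k: int = 1) -> list:
    """Return the k tool names whose docstrings best overlap the task (naive scoring)."""
    words = set(task.lower().split())
    scored = sorted(
        TOOLSET,
        key=lambda name: len(words & set(TOOLSET[name].__doc__.lower().split())),
        reverse=True,
    )
    return scored[:k]


print(retrieve("how many miles is 10 km"))  # ['km_to_miles']
```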
arXiv Detail & Related papers (2023-09-29T17:40:26Z)
- FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning [70.38817963253034]
This paper first discusses the challenges of federated fine-tuning of LLMs, and introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
arXiv Detail & Related papers (2023-09-01T09:40:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.