$\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens
- URL: http://arxiv.org/abs/2402.13718v3
- Date: Sat, 24 Feb 2024 15:07:55 GMT
- Title: $\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens
- Authors: Xinrong Zhang and Yingfa Chen and Shengding Hu and Zihang Xu and
Junhao Chen and Moo Khai Hao and Xu Han and Zhen Leng Thai and Shuo Wang and
Zhiyuan Liu and Maosong Sun
- Abstract summary: There is currently a lack of a standardized benchmark to evaluate this long-context capability.
$\infty$Bench is the first benchmark featuring an average data length surpassing 100K tokens.
The results indicate that existing long-context LLMs still require significant advancements to process 100K+ contexts effectively.
- Score: 64.08660301017302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Processing and reasoning over long contexts is crucial for many practical
applications of Large Language Models (LLMs), such as document comprehension
and agent construction. Despite recent strides in making LLMs process contexts
with more than 100K tokens, there is currently a lack of a standardized
benchmark to evaluate this long-context capability. Existing public benchmarks
typically focus on contexts around 10K tokens, limiting the assessment and
comparison of LLMs in processing longer contexts. In this paper, we propose
$\infty$Bench, the first LLM benchmark featuring an average data length
surpassing 100K tokens. $\infty$Bench comprises synthetic and realistic tasks
spanning diverse domains, presented in both English and Chinese. The tasks in
$\infty$Bench are designed to require a thorough understanding of long
dependencies in the context, such that simply retrieving a limited number of
passages from the context is not sufficient to solve them. In our experiments
based on $\infty$Bench, we evaluate state-of-the-art proprietary and
open-source LLMs tailored for processing long contexts. The results indicate
that existing long-context LLMs still require significant advancements to
process 100K+ contexts effectively. We further present three intriguing
analyses regarding the behavior of LLMs when processing long contexts.
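The synthetic tasks described above can be pictured with a small generator. The sketch below builds a passkey-retrieval example whose context exceeds 100K tokens (approximating tokens as whitespace-separated words); all names and parameters are illustrative assumptions, not $\infty$Bench's actual task construction.

```python
import random
import string

def make_passkey_task(target_tokens: int = 100_000, seed: int = 0) -> dict:
    # Build a synthetic passkey-retrieval example whose context exceeds
    # `target_tokens`, approximating tokens as whitespace-separated words.
    # Illustrative sketch only -- not the benchmark's actual generator.
    rng = random.Random(seed)
    passkey = "".join(rng.choices(string.digits, k=6))
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    needle = f"The pass key is {passkey}. Remember it. "
    # Repeat the filler until the context reaches the target length,
    # then hide the single needle sentence at a random offset.
    n_repeats = target_tokens // len(filler.split())
    chunks = [filler] * n_repeats
    chunks.insert(rng.randrange(len(chunks) + 1), needle)
    question = "What is the pass key mentioned in the text above?"
    return {"context": "".join(chunks), "question": question, "answer": passkey}

example = make_passkey_task()
print(len(example["context"].split()), example["answer"])  # ~100K words, 6-digit key
```

Because the needle is a single short sentence hidden at a random offset, a model must locate it within 100K+ tokens of distractor text; the benchmark's harder tasks additionally involve long-range dependencies that a single retrieved passage cannot satisfy.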
Related papers
- Needle Threading: Can LLMs Follow Threads through Near-Million-Scale Haystacks? [36.83397306207386]
We evaluate the capabilities of 17 leading Large Language Models (LLMs).
Strikingly, many models are remarkably thread-safe: capable of simultaneously following multiple threads without significant loss in performance.
We find the effective context limit is significantly shorter than the supported context length, with accuracy decreasing as the context window grows.
arXiv Detail & Related papers (2024-11-07T18:59:27Z)
- DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels [89.51834016940153]
We introduce DetectiveQA, a narrative reasoning benchmark with an average context length of over 100K tokens.
We use detective novels as data sources, which naturally have various reasoning elements.
We manually annotated 600 questions in Chinese and also provide an English edition of the context information and questions.
arXiv Detail & Related papers (2024-09-04T06:28:22Z)
- NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window? [37.64593022203498]
NeedleBench is a framework consisting of progressively more challenging tasks for assessing bilingual long-context capabilities.
We use the framework to assess how well the leading open-source models can identify key information relevant to the question.
We propose the Ancestral Trace Challenge to mimic the complexity of logical reasoning challenges that are likely to arise in real-world long-context tasks (an illustrative sketch follows this entry).
arXiv Detail & Related papers (2024-07-16T17:59:06Z)
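As a rough picture of what an Ancestral-Trace-style item might look like, the sketch below chains kinship facts through filler text so that answering requires following every link in order; the construction is a hypothetical illustration, not NeedleBench's released code.

```python
import random

def make_ancestral_trace(depth: int = 5, filler_sentences: int = 2000,
                         seed: int = 0) -> dict:
    # Chain `depth` kinship facts and scatter them through filler text;
    # answering requires composing every fact, so retrieving a single
    # passage is not enough. Hypothetical construction for illustration.
    rng = random.Random(seed)
    names = [f"Person{i}" for i in range(depth + 1)]
    facts = [f"{names[i]}'s mother is {names[i + 1]}." for i in range(depth)]
    haystack = ["Nothing of note happened that day."] * filler_sentences
    for fact in facts:  # bury each needle at a random position
        haystack.insert(rng.randrange(len(haystack) + 1), fact)
    question = (f"Starting from {names[0]} and following the chain of "
                f"mothers, who is the ancestor {depth} generations up?")
    return {"context": " ".join(haystack), "question": question,
            "answer": names[depth]}
```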
- LongIns: A Challenging Long-context Instruction-based Exam for LLMs [44.51209510772957]
Long-context capabilities of large language models (LLMs) have been a hot topic in recent years.
We propose the LongIns benchmark dataset, a challenging long-context instruction-based exam for LLMs.
arXiv Detail & Related papers (2024-06-25T14:31:26Z)
- LLoCO: Learning Long Contexts Offline [63.3458260335454]
We propose LLoCO, a novel approach to processing long contexts.
LLoCO learns contexts offline through context compression and in-domain parameter-efficient finetuning with LoRA (see the sketch after this entry).
Our approach extends the effective context window of a 4k token LLaMA2-7B model to handle up to 128k tokens.
arXiv Detail & Related papers (2024-04-11T17:57:22Z)
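As context for the LLoCO entry above, the snippet below shows only the parameter-efficient-finetuning ingredient using Hugging Face's `peft` library; the context-compression stage is omitted and every hyperparameter is an assumed placeholder, so this is a minimal sketch rather than LLoCO's implementation.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Attach LoRA adapters to a LLaMA-2-7B backbone; only the low-rank
# adapter weights are trained. All values below are assumptions for
# illustration, not hyperparameters from the paper.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # adapters are a tiny fraction of 7B
```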
- Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks [76.43527940649939]
We introduce Ada-LEval, a benchmark for evaluating the long-context understanding of large language models (LLMs).
Ada-LEval includes two challenging subsets, TSort and BestAnswer, which enable a more reliable evaluation of LLMs' long context capabilities.
We evaluate 4 state-of-the-art closed-source API models and 6 open-source models with Ada-LEval.
arXiv Detail & Related papers (2024-04-09T17:30:48Z)
- XL$^2$Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies [45.31042312867939]
Large Language Models (LLMs) have demonstrated remarkable performance across diverse tasks but are constrained by their small context window sizes.
Various methods have been proposed to expand the context window to accommodate up to 200K input tokens.
We introduce a benchmark for extremely long context understanding with long-range dependencies, XL$^2$Bench.
arXiv Detail & Related papers (2024-04-08T12:29:07Z)
- LooGLE: Can Long-Context Language Models Understand Long Contexts? [46.143956498529796]
LooGLE is a benchmark for large language models' long context understanding.
It features relatively new documents post-2022, with over 24,000 tokens per document and 6,000 newly generated questions spanning diverse domains.
The evaluation of eight state-of-the-art LLMs on LooGLE revealed key findings.
arXiv Detail & Related papers (2023-11-08T01:45:37Z)
- LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding [58.20031627237889]
LongBench is the first bilingual, multi-task benchmark for long context understanding.
It comprises 21 datasets across 6 task categories in both English and Chinese, with an average length of 6,711 words (English) and 13,386 characters (Chinese).
arXiv Detail & Related papers (2023-08-28T11:53:40Z)