Drilling Down into the Discourse Structure with LLMs for Long Document
Question Answering
- URL: http://arxiv.org/abs/2311.13565v1
- Date: Wed, 22 Nov 2023 18:22:56 GMT
- Title: Drilling Down into the Discourse Structure with LLMs for Long Document
Question Answering
- Authors: Inderjeet Nair, Shwetha Somasundaram, Apoorv Saxena, Koustava Goswami
- Abstract summary: We propose a suite of techniques that exploit the discourse structure commonly found in documents.
We show how our approach can be combined with the \textit{self-ask} reasoning agent to achieve the best zero-shot performance in complex multi-hop question answering.
- Score: 5.022057415488129
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We address the task of evidence retrieval for long document question
answering, which involves locating relevant paragraphs within a document to
answer a question. We aim to assess the applicability of large language models
(LLMs) in the task of zero-shot long document evidence retrieval, owing to
their unprecedented performance across various NLP tasks. However, LLMs can
currently consume only limited context lengths as input, so providing document
chunks as inputs risks overlooking the global context and missing
inter-segment dependencies. Moreover, directly feeding large input sets can
incur significant computational costs, particularly when processing the entire
document (and potentially monetary expenses with enterprise APIs like OpenAI's
GPT variants). To address these challenges,
we propose a suite of techniques that exploit the discourse structure commonly
found in documents. By utilizing this structure, we create a condensed
representation of the document, enabling a more comprehensive understanding and
analysis of relationships between different parts. We retain $99.6\%$ of the
best zero-shot approach's performance, while processing only $26\%$ of the
total tokens used by the best approach in the information-seeking evidence
retrieval setup. We also show how our approach can be combined with the
\textit{self-ask} reasoning agent to achieve the best zero-shot performance in
complex multi-hop question answering, just $\approx 4\%$ short of the zero-shot
performance using gold evidence.
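
As a rough illustration of the drill-down idea, the sketch below builds a condensed outline from section headings and opening sentences, asks an LLM to select promising sections, and only then searches for evidence inside those sections. `Section`, `call_llm`, and the prompts are hypothetical stand-ins, not the paper's implementation.

```python
# A minimal sketch of discourse-structure drill-down for evidence retrieval.
# `call_llm` is a placeholder for any chat-completion client.
from dataclasses import dataclass, field

@dataclass
class Section:
    heading: str
    paragraphs: list[str] = field(default_factory=list)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def condensed_outline(sections: list[Section]) -> str:
    # One line per section: heading plus the opening sentence as a cheap summary.
    lines = []
    for i, sec in enumerate(sections):
        summary = sec.paragraphs[0].split(". ")[0] if sec.paragraphs else ""
        lines.append(f"[{i}] {sec.heading}: {summary}")
    return "\n".join(lines)

def retrieve_evidence(sections: list[Section], question: str, top_k: int = 2) -> list[str]:
    # Stage 1: the LLM sees only the condensed outline, which keeps the token
    # budget far below feeding the whole document.
    prompt = (
        f"Question: {question}\n\nSection outline:\n{condensed_outline(sections)}\n\n"
        f"Reply with the indices of up to {top_k} sections likely to contain the answer, comma-separated."
    )
    picks = [int(t) for t in call_llm(prompt).split(",") if t.strip().isdigit()]
    picks = [i for i in picks if 0 <= i < len(sections)]
    # Stage 2: drill down into the selected sections only.
    evidence = []
    for i in picks[:top_k]:
        body = "\n".join(sections[i].paragraphs)
        evidence.append(call_llm(
            f"Question: {question}\n\nSection:\n{body}\n\n"
            "Copy the single paragraph most relevant to the question."
        ))
    return evidence
```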
Related papers
- Emulating Retrieval Augmented Generation via Prompt Engineering for Enhanced Long Context Comprehension in LLMs [23.960451986662996]
This paper proposes a method that emulates Retrieval Augmented Generation (RAG) through specialized prompt engineering and chain-of-thought reasoning.
We evaluate our approach on selected tasks from BABILong, which interleaves standard bAbI QA problems with large amounts of distractor text.
arXiv Detail & Related papers (2025-02-18T02:49:40Z)
- Enhanced Retrieval of Long Documents: Leveraging Fine-Grained Block Representations with Large Language Models [24.02950598944251]
We introduce a novel, fine-grained approach aimed at enhancing the accuracy of relevance scoring for long documents.
Our methodology first segments a long document into blocks, each of which is embedded using an LLM.
We aggregate the query-block relevance scores through a weighted sum, yielding a comprehensive relevance score for the query against the entire document.
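A minimal sketch of that recipe, assuming an `embed` function that maps text to a unit-normalised vector (the paper embeds blocks with an LLM; the softmax weighting below is one plausible reading of "weighted sum", not necessarily the authors' choice):

```python
# Block-level relevance scoring with weighted-sum aggregation.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: plug in an LLM- or encoder-based embedding model."""
    raise NotImplementedError

def split_into_blocks(document: str, block_size: int = 512) -> list[str]:
    # Naive fixed-size character blocks; the paper's segmentation is finer-grained.
    return [document[i:i + block_size] for i in range(0, len(document), block_size)]

def document_score(query: str, document: str) -> float:
    q = embed(query)
    scores = np.array([float(q @ embed(b)) for b in split_into_blocks(document)])
    # Softmax weights emphasise the best-matching blocks, so a single relevant
    # block is not drowned out by many irrelevant ones.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return float((weights * scores).sum())
```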
arXiv Detail & Related papers (2025-01-28T16:03:52Z)
- GeAR: Generation Augmented Retrieval [82.20696567697016]
Document retrieval techniques form the foundation for the development of large-scale information systems.
The prevailing methodology is to construct a bi-encoder and compute the semantic similarity.
We propose a new method, GeAR (Generation Augmented Retrieval), that incorporates well-designed fusion and decoding modules.
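For contrast, the bi-encoder baseline that GeAR builds on can be reproduced in a few lines with sentence-transformers; the model name is an illustrative choice, not the one used in the paper:

```python
# Bi-encoder retrieval: embed query and documents independently, rank by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "How do bi-encoders rank documents?"
docs = [
    "A bi-encoder embeds query and document independently, then compares vectors.",
    "Cross-encoders jointly attend over the concatenated query-document pair.",
]

q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)  # shape (1, len(docs))
print(docs[int(scores.argmax())])
```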
arXiv Detail & Related papers (2025-01-06T05:29:00Z)
- M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding [63.33447665725129]
We introduce M3DocRAG, a novel multi-modal RAG framework that flexibly accommodates various document contexts.
M3DocRAG can efficiently handle single or many documents while preserving visual information.
We also present M3DocVQA, a new benchmark for evaluating open-domain DocVQA over 3,000+ PDF documents with 40,000+ pages.
arXiv Detail & Related papers (2024-11-07T18:29:38Z)
- LLM$\times$MapReduce: Simplified Long-Sequence Processing using Large Language Models [73.13933847198395]
We propose a training-free framework for processing long texts, utilizing a divide-and-conquer strategy to achieve comprehensive document understanding.
The proposed LLM$\times$MapReduce framework splits the entire document into several chunks for LLMs to read and then aggregates the intermediate answers to produce the final output.
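A toy map-reduce loop in that spirit, with `call_llm` as a placeholder client (the real framework also resolves cross-chunk dependencies, which this sketch ignores):

```python
# Divide-and-conquer QA over a long document: map over chunks, reduce the partial answers.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def split(document: str, chunk_chars: int = 4000) -> list[str]:
    return [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]

def map_reduce_qa(document: str, question: str) -> str:
    # Map: each chunk is read independently, within the context limit.
    partials = [
        call_llm(f"Context:\n{chunk}\n\nQuestion: {question}\n"
                 "Answer from this context only, or say 'no evidence'.")
        for chunk in split(document)
    ]
    # Reduce: aggregate the intermediate answers into the final output.
    joined = "\n".join(f"- {p}" for p in partials if "no evidence" not in p.lower())
    return call_llm(f"Question: {question}\nPartial answers:\n{joined}\n"
                    "Combine these into one final answer.")
```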
arXiv Detail & Related papers (2024-10-12T03:13:44Z)
- Refiner: Restructure Retrieval Content Efficiently to Advance Question-Answering Capabilities [30.1331670544648]
Large Language Models (LLMs) are limited by their parametric knowledge, leading to hallucinations in knowledge-intensive tasks.
We propose \textit{Refiner}, an end-to-end extract-and-restructure paradigm that operates in the post-retrieval process of RAG.
arXiv Detail & Related papers (2024-06-17T09:25:10Z)
- In-context Pretraining: Language Modeling Beyond Document Boundaries [137.53145699439898]
In-Context Pretraining is a new approach where language models are pretrained on a sequence of related documents.
We introduce approximate algorithms for finding related documents with efficient nearest neighbor search.
We see notable improvements in tasks that require more complex contextual reasoning.
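One simple way to realise the related-document ordering, shown with exact cosine similarity rather than the approximate nearest-neighbour search the paper uses at scale:

```python
# Greedily chain each document to its nearest unvisited neighbour so related
# documents end up adjacent in the pretraining stream.
import numpy as np

def greedy_related_order(doc_embs: np.ndarray) -> list[int]:
    # doc_embs: (n_docs, dim), assumed L2-normalised.
    n = len(doc_embs)
    order, visited = [0], {0}
    while len(order) < n:
        sims = doc_embs @ doc_embs[order[-1]]  # cosine similarity to the last doc
        sims[list(visited)] = -np.inf          # never revisit a document
        nxt = int(sims.argmax())
        order.append(nxt)
        visited.add(nxt)
    return order

# Usage: stream = [docs[i] for i in greedy_related_order(embeddings)]
```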
arXiv Detail & Related papers (2023-10-16T17:57:12Z)
- PDFTriage: Question Answering over Long, Structured Documents [60.96667912964659]
Representing structured documents as plain text is incongruous with the user's mental model of these richly structured documents.
We propose PDFTriage, which enables models to retrieve context based on either structure or content.
Our benchmark dataset consists of 900+ human-generated questions over 80 structured documents.
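A minimal sketch of structure-based retrieval in this spirit: the model is shown the document outline plus a small set of fetch functions and chooses which to call; the function names below are illustrative, not PDFTriage's exact API.

```python
# Structure-aware context fetching: the LLM picks a (tool, argument) pair
# instead of receiving the document as flat text.
from typing import Callable

def make_tools(doc: dict) -> dict[str, Callable[[str], str]]:
    return {
        "fetch_section": lambda name: doc["sections"].get(name, ""),
        "fetch_page": lambda num: doc["pages"][int(num)],
    }

def run_tool_call(doc: dict, tool: str, arg: str) -> str:
    # In a PDFTriage-style system, (tool, arg) comes from an LLM prompted
    # with the document outline and these tool signatures.
    return make_tools(doc)[tool](arg)

# Toy usage:
doc = {"sections": {"Introduction": "We study long-document QA."}, "pages": ["page 0 text"]}
print(run_tool_call(doc, "fetch_section", "Introduction"))
```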
arXiv Detail & Related papers (2023-09-16T04:29:05Z)
- PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents [78.27865456183397]
We propose PEARL, a prompting framework to improve reasoning over long documents.
Each stage of PEARL is implemented via zero-shot or few-shot prompting with minimal human input.
We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts.
arXiv Detail & Related papers (2023-05-23T23:06:04Z)
- Information Extraction from Documents: Question Answering vs Token Classification in real-world setups [0.0]
We compare the Question Answering approach with the classical token classification approach for document key information extraction.
Our research showed that when dealing with clean and relatively short entities, it is still best to use a token classification-based approach.
arXiv Detail & Related papers (2023-04-21T14:43:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.