Hierarchical multimodal transformers for Multi-Page DocVQA
- URL: http://arxiv.org/abs/2212.05935v2
- Date: Sat, 1 Apr 2023 10:00:35 GMT
- Title: Hierarchical multimodal transformers for Multi-Page DocVQA
- Authors: Rubèn Tito, Dimosthenis Karatzas and Ernest Valveny
- Abstract summary: Existing work on DocVQA only considers single-page documents.
In this work we extend DocVQA to the multi-page scenario.
We propose a new hierarchical method, Hi-VT5, that overcomes the limitations of current methods to process long multi-page documents.
- Score: 9.115927248875566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Document Visual Question Answering (DocVQA) refers to the task of answering
questions from document images. Existing work on DocVQA only considers
single-page documents. However, in real scenarios documents are mostly composed
of multiple pages that must be processed together. In this work we extend
DocVQA to the multi-page scenario. For that, we first create a new dataset,
MP-DocVQA, where questions are posed over multi-page documents instead of
single pages. Second, we propose a new hierarchical method, Hi-VT5, based on
the T5 architecture, that overcomes the limitations of current methods when
processing long multi-page documents. The method uses a hierarchical
transformer architecture in which the encoder summarizes the most relevant
information of every page, and the decoder then takes this summarized
information to generate the final answer. Through extensive experimentation, we
demonstrate that our method is able, in a single stage, to answer the questions
and provide the page that contains the relevant information to find the answer,
which can be used as a kind of explainability measure.
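A minimal sketch of that hierarchical encode/decode idea, assuming plain-text pages and off-the-shelf Hugging Face T5 weights. Hi-VT5 itself learns dedicated page tokens and also consumes OCR layout and visual features; keeping the first k encoder states per page is only an illustrative stand-in for its summary step.

```python
# Illustrative sketch: encode each page separately, keep a short "summary"
# per page, and let the decoder attend over all summaries in a single stage.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def answer(question: str, pages: list[str], k: int = 32) -> str:
    summaries = []
    for page in pages:
        # Encode question + page jointly so each page summary is question-aware.
        enc = tokenizer(f"question: {question} context: {page}",
                        return_tensors="pt", truncation=True, max_length=512)
        hidden = model.encoder(**enc).last_hidden_state   # (1, seq_len, d_model)
        summaries.append(hidden[:, :k, :])                # keep k states per page
    # The decoder attends over the concatenated per-page summaries.
    fused = BaseModelOutput(last_hidden_state=torch.cat(summaries, dim=1))
    out = model.generate(encoder_outputs=fused, max_new_tokens=32)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```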
Related papers
- Docopilot: Improving Multimodal Models for Document-Level Understanding [87.60020625241178]
We present a high-quality document-level dataset, Doc-750K, designed to support in-depth understanding of multimodal documents. This dataset includes diverse document structures, extensive cross-page dependencies, and real question-answer pairs derived from the original documents. Building on the dataset, we develop a native multimodal model, Docopilot, which can accurately handle document-level dependencies without relying on RAG.
arXiv Detail & Related papers (2025-07-19T16:03:34Z)
- SimpleDoc: Multi-Modal Document Understanding with Dual-Cue Page Retrieval and Iterative Refinement [17.272061289197342]
Document Visual Question Answering (DocVQA) is a practical yet challenging task. Recent methods follow a similar Retrieval Augmented Generation (RAG) pipeline. We introduce SimpleDoc, a lightweight yet powerful retrieval-augmented framework for DocVQA.
arXiv Detail & Related papers (2025-06-16T22:15:58Z)
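A hedged sketch of what a dual-cue retrieve-then-refine loop could look like. `embed` and `vlm_answer` are hypothetical stand-ins for an embedding model and a vision-language reader, each page is assumed to carry precomputed `text` and `summary` fields, and the paper's actual cues, prompts, and stopping rule may differ.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def answer(question, pages, embed, vlm_answer, max_rounds=3, k=2):
    q = embed(question)
    # Dual cue: a page counts as relevant if either its raw text or its
    # summary is close to the question in embedding space.
    scores = [max(cosine(q, embed(p.text)), cosine(q, embed(p.summary)))
              for p in pages]
    ranked = np.argsort(scores)[::-1]
    for round_ in range(max_rounds):
        # Iterative refinement: widen the evidence set until the reader
        # commits to an answer (it may return None to abstain).
        result = vlm_answer(question, [pages[i] for i in ranked[: k * (round_ + 1)]])
        if result is not None:
            return result
    return "unanswerable"
```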
- M3DocRAG: Multi-modal Retrieval is What You Need for Multi-page Multi-document Understanding [63.33447665725129]
We introduce M3DocRAG, a novel multi-modal RAG framework that flexibly accommodates various document contexts.
M3DocRAG can efficiently handle single or many documents while preserving visual information.
We also present M3DocVQA, a new benchmark for evaluating open-domain DocVQA over 3,000+ PDF documents with 40,000+ pages.
arXiv Detail & Related papers (2024-11-07T18:29:38Z)
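A rough sketch of the retrieval side of such a multi-modal RAG pipeline, assuming page embeddings are already computed by some visual embedder. `embed_question` and `multimodal_reader` are hypothetical stand-ins, and FAISS is one possible index choice for scaling to tens of thousands of pages, not necessarily the paper's exact setup.

```python
import numpy as np
import faiss

def build_index(page_vecs: np.ndarray) -> faiss.IndexFlatIP:
    index = faiss.IndexFlatIP(page_vecs.shape[1])  # inner-product search
    index.add(page_vecs.astype(np.float32))
    return index

def answer(question, pages, embed_question, multimodal_reader, index, k=4):
    q = embed_question(question).astype(np.float32)[None, :]
    _, idx = index.search(q, k)  # top-k page images across all documents
    return multimodal_reader(question, [pages[i] for i in idx[0]])
```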
- Multi-Page Document Visual Question Answering using Self-Attention Scoring Mechanism [12.289101189321181]
Document Visual Question Answering (Document VQA) has garnered significant interest from both the document understanding and natural language processing communities.
The state-of-the-art single-page Document VQA methods show impressive performance, yet in multi-page scenarios, these methods struggle.
We propose a novel method and efficient training strategy for multi-page Document VQA tasks.
arXiv Detail & Related papers (2024-04-29T18:07:47Z)
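The abstract does not spell out the scoring mechanism, so the following is only a hedged sketch of the general pattern: score every page against the question, then run a single-page DocVQA model on the top-scoring page. The bilinear head and the `embed` and `single_page_vqa` helpers are illustrative assumptions, not the paper's components.

```python
import torch
import torch.nn as nn

class PageScorer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Bilinear(dim, dim, 1)  # attention-style pairwise score

    def forward(self, q_emb: torch.Tensor, page_embs: torch.Tensor) -> torch.Tensor:
        # q_emb: (dim,), page_embs: (num_pages, dim) -> (num_pages,) scores
        return self.score(q_emb.expand_as(page_embs), page_embs).squeeze(-1)

def answer(question, pages, embed, single_page_vqa, scorer: PageScorer):
    scores = scorer(embed(question), torch.stack([embed(p) for p in pages]))
    best = int(scores.argmax())
    # Return the answer together with the selected page index.
    return single_page_vqa(question, pages[best]), best
```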
- NewsQs: Multi-Source Question Generation for the Inquiring Mind [59.79288644158271]
We present NewsQs, a dataset that provides question-answer pairs for multiple news documents.
To create NewsQs, we augment a traditional multi-document summarization dataset with questions automatically generated by a T5-Large model fine-tuned on FAQ-style news articles.
arXiv Detail & Related papers (2024-02-28T16:59:35Z)
- GRAM: Global Reasoning for Multi-Page VQA [14.980413646626234]
We present GRAM, a method that seamlessly extends pre-trained single-page models to the multi-page setting.
To do so, we leverage a single-page encoder for local page-level understanding, and enhance it with document-level designated layers and learnable tokens.
For additional computational savings during decoding, we introduce an optional compression stage.
arXiv Detail & Related papers (2024-01-07T08:03:06Z)
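A toy PyTorch sketch of the global-local pattern the abstract describes: learnable tokens read each page through page-level attention, then document-level layers let the tokens from all pages attend to one another. Dimensions, layer counts, and the exact wiring are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GlobalLocalBlock(nn.Module):
    def __init__(self, dim: int = 512, n_global: int = 8, n_heads: int = 8):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.randn(n_global, dim))
        self.page_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.doc_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)

    def forward(self, page_states: list[torch.Tensor]) -> torch.Tensor:
        # page_states: one (seq_len, dim) tensor per page from a single-page encoder.
        n = self.global_tokens.shape[0]
        # Each page gets its own copy of the learnable tokens, which read the
        # page's encoder states through page-level attention.
        summaries = [
            self.page_layer(torch.cat([self.global_tokens, h]).unsqueeze(0))[0, :n]
            for h in page_states
        ]
        # Document-level layer: tokens from all pages attend to one another,
        # propagating information across the whole document.
        return self.doc_layer(torch.cat(summaries).unsqueeze(0))[0]
```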
- PDFTriage: Question Answering over Long, Structured Documents [60.96667912964659]
Representing structured documents as plain text is incongruous with the user's mental model of these richly structured documents.
We propose PDFTriage, which enables models to retrieve context based on either structure or content.
Our benchmark dataset consists of 900+ human-generated questions over 80 structured documents.
arXiv Detail & Related papers (2023-09-16T04:29:05Z)
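A hedged sketch of the triage idea: expose structural accessors over the document so the model can fetch context by structure (a page, a section) or by content, instead of flat-text similarity alone. The function names and the `llm.plan`/`llm.answer` interface are hypothetical, not the paper's API.

```python
def fetch_page(doc, number):          # structural lookup by page number
    return doc["pages"][number]

def fetch_section(doc, title):        # structural lookup by section title
    return doc["sections"][title]

def search(doc, query):               # content lookup (keyword match here)
    return [p for p in doc["pages"] if query.lower() in p.lower()]

TRIAGE = {"fetch_page": fetch_page, "fetch_section": fetch_section, "search": search}

def answer(question, doc, llm):
    # Step 1: the model picks a triage call from the document's metadata.
    name, arg = llm.plan(question, metadata=list(doc["sections"]))
    context = TRIAGE[name](doc, arg)
    # Step 2: the model answers from only the retrieved context.
    return llm.answer(question, context)
```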
- PDFVQA: A New Dataset for Real-World VQA on PDF Documents [2.105395241374678]
Document-based Visual Question Answering examines the understanding of document images conditioned on natural language questions.
Our PDF-VQA dataset extends document understanding from the single-page scale to questions posed over full documents of multiple pages.
arXiv Detail & Related papers (2023-04-13T12:28:14Z)
- Generate rather than Retrieve: Large Language Models are Strong Context Generators [74.87021992611672]
We present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators.
We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.
arXiv Detail & Related papers (2022-09-21T01:30:59Z)
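The two-step recipe is simple enough to sketch directly; `llm` below stands in for any text-completion model, and the prompts are paraphrases rather than the paper's exact wording.

```python
def generate_then_read(question: str, llm, n_docs: int = 3) -> str:
    # Step 1: generate contextual documents instead of retrieving them.
    docs = [
        llm(f"Generate a background document to answer the question:\n{question}")
        for _ in range(n_docs)   # sampling several documents adds coverage
    ]
    # Step 2: read the generated documents to produce the final answer.
    context = "\n\n".join(docs)
    return llm(f"Refer to the passages below and answer the question."
               f"\n\nPassages:\n{context}\n\nQuestion: {question}\nAnswer:")
```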
- Multi-View Document Representation Learning for Open-Domain Dense Retrieval [87.11836738011007]
This paper proposes a multi-view document representation learning framework.
It aims to produce multi-view embeddings to represent documents and enforce them to align with different queries.
Experiments show our method outperforms recent works and achieves state-of-the-art results.
arXiv Detail & Related papers (2022-03-16T03:36:38Z)
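A small sketch of the multi-view scoring idea: a document carries several embeddings and a query is scored against the best-matching one, so different questions can align with different aspects of the document. The chunk-and-pool construction of the views is an illustrative stand-in for the paper's learned viewer tokens.

```python
import torch

def multi_view_embed(token_states: torch.Tensor, n_views: int = 4) -> torch.Tensor:
    # token_states: (seq_len, dim). Split into n_views chunks and mean-pool
    # each, yielding (n_views, dim) document views.
    chunks = token_states.chunk(n_views, dim=0)
    return torch.stack([c.mean(dim=0) for c in chunks])

def score(query_vec: torch.Tensor, doc_views: torch.Tensor) -> torch.Tensor:
    # Max over views: the query only has to match one aspect of the document.
    return (doc_views @ query_vec).max()
```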
- End-to-End Multihop Retrieval for Compositional Question Answering over Long Documents [93.55268936974971]
We propose a multi-hop retrieval method, DocHopper, to answer compositional questions over long documents.
At each step, DocHopper retrieves a paragraph or sentence embedding from the document, mixes the retrieved result with the query, and updates the query for the next step.
We demonstrate that utilizing document structure in this way can largely improve question-answering and retrieval performance on long documents.
arXiv Detail & Related papers (2021-06-01T03:13:35Z)
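The hop loop described in the abstract can be sketched directly; the additive query update below is an illustrative choice, whereas the paper learns this mixing end to end.

```python
import torch

def dochop(query_vec: torch.Tensor, element_vecs: torch.Tensor, hops: int = 3):
    # element_vecs: one embedding per paragraph/sentence of the document.
    trail = []
    q = query_vec
    for _ in range(hops):
        scores = element_vecs @ q            # similarity to every element
        best = int(scores.argmax())
        trail.append(best)
        # Mix the retrieved embedding into the query for the next hop.
        q = torch.nn.functional.normalize(q + element_vecs[best], dim=0)
    return trail                             # indices of retrieved elements
```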