VRDU: A Benchmark for Visually-rich Document Understanding
- URL: http://arxiv.org/abs/2211.15421v3
- Date: Sat, 16 Sep 2023 17:52:27 GMT
- Title: VRDU: A Benchmark for Visually-rich Document Understanding
- Authors: Zilong Wang, Yichao Zhou, Wei Wei, Chen-Yu Lee, Sandeep Tata
- Abstract summary: We identify the desiderata for a more comprehensive benchmark and propose one we call Visually Rich Document Understanding (VRDU).
VRDU contains two datasets that represent several challenges: rich schema including diverse data types as well as hierarchical entities, complex templates including tables and multi-column layouts, and diversity of different layouts (templates) within a single document type.
We design few-shot and conventional experiment settings along with a carefully designed matching algorithm to evaluate extraction results.
- Score: 22.040372755535767
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Understanding visually-rich business documents to extract structured data and
automate business workflows has been receiving attention both in academia and
industry. Although recent multi-modal language models have achieved impressive
results, we find that existing benchmarks do not reflect the complexity of real
documents seen in industry. In this work, we identify the desiderata for a more
comprehensive benchmark and propose one we call Visually Rich Document
Understanding (VRDU). VRDU contains two datasets that represent several
challenges: rich schema including diverse data types as well as hierarchical
entities, complex templates including tables and multi-column layouts, and
diversity of different layouts (templates) within a single document type. We
design few-shot and conventional experiment settings along with a carefully
designed matching algorithm to evaluate extraction results. We report the
performance of strong baselines and offer three observations: (1) generalizing
to new document templates is still very challenging, (2) few-shot performance
has a lot of headroom, and (3) models struggle with hierarchical fields such as
line-items in an invoice. We plan to open source the benchmark and the
evaluation toolkit. We hope this helps the community make progress on these
challenging tasks in extracting structured data from visually rich documents.
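The abstract mentions a carefully designed matching algorithm for scoring extraction results. As a rough illustration of what entity-level matching looks like, here is a minimal sketch of an exact-match precision/recall/F1 scorer over (entity type, value) pairs. This is a hypothetical simplification, not VRDU's released toolkit: the actual matcher also handles hierarchical entities (such as invoice line-items) and repeated fields, and field names like `invoice_date` below are purely illustrative.

```python
from collections import Counter

def entity_f1(predicted, gold):
    """Entity-level precision/recall/F1 via exact (type, value) matching.

    Both inputs are lists of (entity_type, value) tuples. Counters are
    used so that repeated entities are matched as multisets rather than
    sets, i.e. predicting a value twice only counts once per gold copy.
    """
    pred = Counter(predicted)
    ref = Counter(gold)
    tp = sum((pred & ref).values())  # multiset intersection = true positives
    precision = tp / max(sum(pred.values()), 1)
    recall = tp / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

# Example: the date matches exactly, but the extracted total is wrong,
# so it counts as both a false positive and a false negative.
predicted = [("invoice_date", "2021-03-01"), ("total", "$120.00")]
gold = [("invoice_date", "2021-03-01"), ("total", "$125.00")]
p, r, f = entity_f1(predicted, gold)
```

Exact string matching is the strictest choice; a practical evaluator would typically normalize values (dates, currency amounts) before comparison.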
Related papers
- MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents [26.39534684408116]
This work introduces a new benchmark, named MMDocIR, encompassing two distinct tasks: page-level and layout-level retrieval.
The MMDocIR benchmark comprises a rich dataset featuring expertly annotated labels for 1,685 questions and bootstrapped labels for 173,843 questions.
arXiv Detail & Related papers (2025-01-15T14:30:13Z)
- LongDocURL: a Comprehensive Multimodal Long Document Benchmark Integrating Understanding, Reasoning, and Locating [40.44974704748952]
Large vision-language models (LVLMs) have remarkably improved document understanding capabilities.
Existing document understanding benchmarks have been limited to handling only a small number of pages.
We develop a semi-automated construction pipeline and collect 2,325 high-quality question-answering pairs, covering more than 33,000 pages of documents.
arXiv Detail & Related papers (2024-12-24T13:39:32Z)
- Document Parsing Unveiled: Techniques, Challenges, and Prospects for Structured Information Extraction [23.47150047875133]
Document parsing is essential for converting unstructured and semi-structured documents into machine-readable data.
Document parsing plays an indispensable role in both knowledge base construction and training data generation.
This paper discusses the challenges faced by modular document parsing systems and vision-language models in handling complex layouts.
arXiv Detail & Related papers (2024-10-28T16:11:35Z)
- BuDDIE: A Business Document Dataset for Multi-task Information Extraction [18.440587946049845]
BuDDIE is the first multi-task dataset of 1,665 real-world business documents.
Our dataset consists of publicly available business entity documents from US state government websites.
arXiv Detail & Related papers (2024-04-05T10:26:42Z)
- On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval [59.25292920967197]
Few-shot visually-rich document entity retrieval (VDER) is an important topic in industrial NLP applications.
FewVEX is a new dataset to boost future research in the field of entity-level few-shot VDER.
We present a task-aware meta-learning based framework, with a central focus on achieving effective task personalization.
arXiv Detail & Related papers (2023-11-01T17:51:43Z)
- Doc2SoarGraph: Discrete Reasoning over Visually-Rich Table-Text Documents via Semantic-Oriented Hierarchical Graphs [79.0426838808629]
We address TAT-DQA, i.e., answering questions over visually-rich table-text documents.
Specifically, we propose a novel Doc2SoarGraph framework with enhanced discrete reasoning capability.
We conduct extensive experiments on TAT-DQA dataset, and the results show that our proposed framework outperforms the best baseline model by 17.73% and 16.91% in terms of Exact Match (EM) and F1 score respectively on the test set.
arXiv Detail & Related papers (2023-05-03T07:30:32Z)
- Modeling Entities as Semantic Points for Visual Information Extraction in the Wild [55.91783742370978]
We propose an alternative approach to precisely and robustly extract key information from document images.
We explicitly model entities as semantic points, i.e., center points of entities are enriched with semantic information describing the attributes and relationships of different entities.
The proposed method can achieve significantly enhanced performance on entity labeling and linking, compared with previous state-of-the-art models.
arXiv Detail & Related papers (2023-03-23T08:21:16Z)
- ReSel: N-ary Relation Extraction from Scientific Text and Tables by Learning to Retrieve and Select [53.071352033539526]
We study the problem of extracting N-ary relations from scientific articles.
Our proposed method ReSel decomposes this task into a two-stage procedure.
Our experiments on three scientific information extraction datasets show that ReSel outperforms state-of-the-art baselines significantly.
arXiv Detail & Related papers (2022-10-26T02:28:02Z)
- TRIE++: Towards End-to-End Information Extraction from Visually Rich Documents [51.744527199305445]
This paper proposes a unified end-to-end information extraction framework from visually rich documents.
Text reading and information extraction can reinforce each other via a well-designed multi-modal context block.
The framework can be trained in an end-to-end manner, achieving global optimization.
arXiv Detail & Related papers (2022-07-14T08:52:07Z)
- Multi-View Document Representation Learning for Open-Domain Dense Retrieval [87.11836738011007]
This paper proposes a multi-view document representation learning framework.
It aims to produce multi-view embeddings to represent documents and enforce them to align with different queries.
Experiments show our method outperforms recent works and achieves state-of-the-art results.
arXiv Detail & Related papers (2022-03-16T03:36:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.