Information Redundancy and Biases in Public Document Information
Extraction Benchmarks
- URL: http://arxiv.org/abs/2304.14936v1
- Date: Fri, 28 Apr 2023 15:48:26 GMT
- Title: Information Redundancy and Biases in Public Document Information
Extraction Benchmarks
- Authors: Seif Laatiri, Pirashanth Ratnamogan, Joel Tang, Laurent Lam, William
Vanhuffel, Fabien Caspani
- Abstract summary: Despite the good performance of KIE models when fine-tuned on public benchmarks, they still struggle to generalize to complex real-life use cases lacking sufficient document annotations.
Our research highlighted that standard KIE benchmarks such as SROIE and FUNSD contain significant similarity between training and testing documents and can be adjusted to better evaluate the generalization of models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advances in the Visually-rich Document Understanding (VrDU) field, and
particularly the Key-Information Extraction (KIE) task, are marked by the
emergence of efficient Transformer-based approaches such as the LayoutLM
models. Despite the good performance of KIE models when fine-tuned on public
benchmarks, they still struggle to generalize to complex real-life use cases
lacking sufficient document annotations. Our research highlighted that standard
KIE benchmarks such as SROIE and FUNSD contain significant similarity
between training and testing documents and can be adjusted to better evaluate
the generalization of models. In this work, we designed experiments to quantify
the information redundancy in public benchmarks, revealing 75% template
replication in the official SROIE test set and 16% in FUNSD. We also proposed
resampling strategies to provide benchmarks more representative of the
generalization ability of models. We showed that models not suited for document
analysis struggle on the adjusted splits, dropping on average 10.5% F1 score on
SROIE and 3.5% on FUNSD, while multi-modal models drop only 7.5% F1 on
SROIE and 0.5% F1 on FUNSD.
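The paper quantifies train/test template replication, but its exact procedure is not reproduced here. The snippet below is a minimal, hedged sketch of one way such redundancy could be estimated with a character n-gram TF-IDF similarity heuristic; the `train_texts`/`test_texts` inputs, the 0.8 threshold, and the n-gram settings are illustrative assumptions, not the authors' method.

```python
# Hedged sketch (not the paper's released code): estimate train/test template
# overlap with a character n-gram TF-IDF similarity heuristic. `train_texts`
# and `test_texts` are hypothetical lists of OCR'd document strings (e.g. SROIE).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def template_replication_rate(train_texts, test_texts, threshold=0.8):
    """Fraction of test documents whose nearest training document exceeds a
    similarity threshold, taken here as a proxy for template replication."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    train_vecs = vectorizer.fit_transform(train_texts)
    test_vecs = vectorizer.transform(test_texts)
    sims = cosine_similarity(test_vecs, train_vecs)   # shape: (n_test, n_train)
    nearest = sims.max(axis=1)                        # best training match per test doc
    return float((nearest >= threshold).mean()), nearest


# Usage: a high rate suggests the test split shares templates with training;
# test documents with low `nearest` scores are natural candidates for a
# resampled, harder evaluation split.
# rate, nearest = template_replication_rate(train_texts, test_texts)
```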
Related papers
- LiveXiv -- A Multi-Modal Live Benchmark Based on Arxiv Papers Content [62.816876067499415]
We propose LiveXiv: a scalable evolving live benchmark based on scientific ArXiv papers.
LiveXiv accesses domain-specific manuscripts at any given timestamp and proposes to automatically generate visual question-answer pairs.
We benchmark multiple open and proprietary Large Multi-modal Models (LMMs) on the first version of our benchmark, showing its challenging nature and exposing the models' true abilities.
arXiv Detail & Related papers (2024-10-14T17:51:23Z)
- Revisiting BPR: A Replicability Study of a Common Recommender System Baseline [78.00363373925758]
We study the features of the BPR model and their impact on its performance, and investigate open-source BPR implementations.
Our analysis reveals inconsistencies between these implementations and the original BPR paper, leading to a significant decrease in performance of up to 50% for specific implementations.
We show that the BPR model can achieve performance levels close to state-of-the-art methods on the top-n recommendation tasks and even outperform them on specific datasets.
arXiv Detail & Related papers (2024-09-21T18:39:53Z)
- Data Efficient Evaluation of Large Language Models and Text-to-Image Models via Adaptive Sampling [3.7467864495337624]
SubLIME is a data-efficient evaluation framework for text-to-image models.
Our approach ensures statistically aligned model rankings compared to full datasets.
We leverage the HEIM leaderboard to cover 25 text-to-image models on 17 different benchmarks.
arXiv Detail & Related papers (2024-06-21T07:38:55Z)
- Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models [15.50128790503447]
We propose a novel and theoretically motivated methodology for pre-training data detection, named Min-K%++.
Specifically, we present a key insight that, under maximum likelihood training, training samples tend to be local maxima of the modeled distribution along each input dimension (a rough scoring sketch appears after this list).
arXiv Detail & Related papers (2024-04-03T04:25:01Z)
- RoDLA: Benchmarking the Robustness of Document Layout Analysis Models [32.52120363558076]
We introduce a robustness benchmark for Document Layout Analysis (DLA) models, which includes 450K document images from three datasets.
To cover realistic corruptions, we propose a perturbation taxonomy with 36 common document perturbations inspired by real-world document processing.
To better understand document perturbation impacts, we propose two metrics, Mean Perturbation Effect (mPE) for perturbation assessment and Mean Robustness Degradation (mRD) for robustness evaluation.
arXiv Detail & Related papers (2024-03-21T14:47:12Z)
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
Elaborating further on the robustness metric, a model is judged to be robust if its performance is consistently accurate across whole cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- Domain Adaptation of Transformer-Based Models using Unlabeled Data for Relevance and Polarity Classification of German Customer Feedback [1.2999413717930817]
This work explores how efficient transformer-based models are when working with a German customer feedback dataset.
The experimental results show that transformer-based models can reach significant improvements compared to a fastText baseline.
arXiv Detail & Related papers (2022-12-12T08:32:28Z)
- Text Embeddings by Weakly-Supervised Contrastive Pre-training [98.31785569325402]
E5 is a family of state-of-the-art text embeddings that transfer well to a wide range of tasks.
E5 can be readily used as a general-purpose embedding model for any task requiring a single-vector representation of texts.
arXiv Detail & Related papers (2022-12-07T09:25:54Z)
- News Summarization and Evaluation in the Era of GPT-3 [73.48220043216087]
We study how GPT-3 compares against fine-tuned models trained on large summarization datasets.
We show that not only do humans overwhelmingly prefer GPT-3 summaries, prompted using only a task description, but these also do not suffer from common dataset-specific issues such as poor factuality.
arXiv Detail & Related papers (2022-09-26T01:04:52Z)
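Regarding the Min-K%++ entry above: the sketch below illustrates only the simpler Min-K% membership score it builds on (the average of the k% lowest token log-probabilities under a causal language model); as we read the paper, Min-K%++ further normalizes each token's log-probability by the statistics of the model's distribution at that position. The model choice, k value, and helper name here are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of a Min-K%-style membership score for a single text,
# using a causal LM from Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def min_k_percent_score(text, model, tokenizer, k=0.2):
    """Average of the k% lowest next-token log-probabilities; higher scores are
    taken as weak evidence that the text was seen during pre-training."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits                  # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1) # predictions for positions 1..seq_len-1
    targets = input_ids[0, 1:]                            # actual next tokens
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    n = max(1, int(k * token_lp.numel()))
    lowest = torch.topk(token_lp, n, largest=False).values
    return lowest.mean().item()


# Usage (hypothetical model choice):
# tok = AutoTokenizer.from_pretrained("gpt2")
# lm = AutoModelForCausalLM.from_pretrained("gpt2")
# print(min_k_percent_score("Example passage to test.", lm, tok))
```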
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.