Entry Separation using a Mixed Visual and Textual Language Model:
Application to 19th century French Trade Directories
- URL: http://arxiv.org/abs/2302.08948v1
- Date: Fri, 17 Feb 2023 15:30:44 GMT
- Title: Entry Separation using a Mixed Visual and Textual Language Model:
Application to 19th century French Trade Directories
- Authors: Bertrand Duménieu (1), Edwin Carlinet (2), Nathalie Abadie (3),
Joseph Chazalon (2) ((1) LaDéHiS, CRH, EHESS, France, (2) EPITA Research
Laboratory (LRE), France, (3) Univ. Gustave Eiffel, IGN-ENSG, LaSTIG, France)
- Abstract summary: A key challenge is to correctly segment what constitutes the basic text regions for the target database.
We propose a new pragmatic approach whose efficiency is demonstrated on 19th century French Trade Directories.
By injecting special visual tokens, encoding, for instance, indentation or line breaks, into the token stream of the language model used for NER, we can leverage both textual and visual knowledge simultaneously.
- Score: 18.323615434182553
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: When extracting structured data from repetitively organized documents, such
as dictionaries, directories, or even newspapers, a key challenge is to
correctly segment what constitutes the basic text regions for the target
database. Traditionally, this problem was tackled as part of layout analysis
and relied mostly on visual cues in dividing (top-down) approaches. Some
agglomerative (bottom-up) approaches started to consider textual information
to link similar contents, but they required a proper
over-segmentation of fine-grained units. In this work, we propose a new
pragmatic approach whose efficiency is demonstrated on 19th century French
Trade Directories. We propose to consider two sub-problems: coarse layout
detection (text columns and reading order), which is assumed to be effective
and not detailed here, and a fine-grained entry separation stage for which we
propose to adapt a state-of-the-art Named Entity Recognition (NER) approach. By
injecting special visual tokens, encoding, for instance, indentation or line breaks,
into the token stream of the language model used for NER, we can
leverage both textual and visual knowledge simultaneously. Code, data, results
and models are available at
https://github.com/soduco/paper-entryseg-icdar23-code,
https://huggingface.co/HueyNemud/ (icdar23-entrydetector* variants)
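The key technical idea in the abstract, injecting layout information into the language model's token stream, can be illustrated with a minimal sketch. The snippet below adds hypothetical visual markers such as [INDENT] and [BREAK] (assumed to come from the coarse layout stage) as extra special tokens of a HuggingFace token-classification model; the base checkpoint (camembert-base), the marker names, the three-label entry-tagging scheme, and the interleaving rule are illustrative assumptions rather than the authors' exact setup, which lives in the linked repository.

```python
# Minimal sketch: feed layout events to an NER-style model as special tokens.
# Every name here (checkpoint, marker names, label scheme) is an assumption
# for illustration, not the paper's exact implementation.
from transformers import AutoTokenizer, AutoModelForTokenClassification

BASE = "camembert-base"  # assumed French BERT-like encoder
VISUAL_TOKENS = ["[INDENT]", "[BREAK]", "[COLUMN]"]  # hypothetical layout markers

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.add_special_tokens({"additional_special_tokens": VISUAL_TOKENS})

# Three labels as an assumed BIO-style scheme: B-ENTRY / I-ENTRY / O.
model = AutoModelForTokenClassification.from_pretrained(BASE, num_labels=3)
model.resize_token_embeddings(len(tokenizer))  # make room for the new tokens

def interleave(ocr_lines):
    """Turn OCR lines (text, is_indented) into one string in which layout
    events appear as pseudo-words the language model can attend to."""
    parts = []
    for text, is_indented in ocr_lines:
        parts.append("[INDENT]" if is_indented else "[BREAK]")
        parts.append(text)
    return " ".join(parts)

# Toy directory column: two entries, the second wrapping onto an indented line.
column = [
    ("Dupont (A.), avocat, 12 rue de la Paix.", False),
    ("Durand (B.), notaire,", False),
    ("14 boulevard Saint-Michel.", True),
]
encoding = tokenizer(interleave(column), return_tensors="pt")
logits = model(**encoding).logits  # one score vector per token, incl. visual ones
```

With such a formulation, entry separation reduces to token tagging: the model sees indentation and line breaks as ordinary vocabulary items, so it can combine them with the surrounding text when deciding where one directory entry ends and the next begins.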
Related papers
- LESS: Label-Efficient and Single-Stage Referring 3D Segmentation [55.06002976797879]
Referring 3D segmentation is a visual-language task that segments all points of the object specified by a query sentence from a 3D point cloud.
We propose a novel referring 3D pipeline, Label-Efficient and Single-Stage, dubbed LESS, which is supervised only by efficient binary masks.
We achieve state-of-the-art performance on ScanRefer dataset by surpassing the previous methods about 3.7% mIoU using only binary labels.
arXiv Detail & Related papers (2024-10-17T07:47:41Z)
- Contextual Document Embeddings [77.22328616983417]
We propose two complementary methods for contextualized document embeddings.
First, an alternative contrastive learning objective that explicitly incorporates the document neighbors into the intra-batch contextual loss.
Second, a new contextual architecture that explicitly encodes neighbor document information into the encoded representation.
arXiv Detail & Related papers (2024-10-03T14:33:34Z)
- Leveraging Semantic Segmentation Masks with Embeddings for Fine-Grained Form Classification [0.0]
Efficient categorization of historical documents is crucial for fields such as genealogy, legal research and historical scholarship.
We propose a representation learning strategy that integrates deep learning models such as ResNet and the Document Image Transformer (DiT) with embeddings from semantic segmentation masks.
arXiv Detail & Related papers (2024-05-23T04:28:50Z)
- SelfDocSeg: A Self-Supervised vision-based Approach towards Document Segmentation [15.953725529361874]
Document layout analysis is a known problem to the documents research community.
With growing internet connectivity in personal life, an enormous number of documents have become available in the public domain.
We address this challenge using self-supervision, unlike the few existing self-supervised document segmentation approaches.
arXiv Detail & Related papers (2023-05-01T12:47:55Z)
- PARAGRAPH2GRAPH: A GNN-based framework for layout paragraph analysis [6.155943751502232]
We present a language-independent graph neural network (GNN)-based model that achieves competitive results on common document layout datasets.
Our model is suitable for industrial applications, particularly in multi-language scenarios.
arXiv Detail & Related papers (2023-04-24T03:54:48Z)
- A Simple Framework for Open-Vocabulary Segmentation and Detection [85.21641508535679]
We present OpenSeeD, a simple Open-vocabulary Segmentation and Detection framework that jointly learns from different segmentation and detection datasets.
We first introduce a pre-trained text encoder to encode all the visual concepts in two tasks and learn a common semantic space for them.
After pre-training, our model exhibits competitive or stronger zero-shot transferability for both segmentation and detection.
arXiv Detail & Related papers (2023-03-14T17:58:34Z)
- Learning Object-Language Alignments for Open-Vocabulary Object Detection [83.09560814244524]
We propose a novel open-vocabulary object detection framework directly learning from image-text pair data.
It enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way.
arXiv Detail & Related papers (2022-11-27T14:47:31Z)
- Knowing Where and What: Unified Word Block Pretraining for Document Understanding [11.46378901674016]
We propose UTel, a language model with Unified TExt and layout pre-training.
Specifically, we propose two pre-training tasks: Surrounding Word Prediction (SWP) for the layout learning, and Contrastive learning of Word Embeddings (CWE) for identifying different word blocks.
In this way, the joint training of Masked Layout-Language Modeling (MLLM) and two newly proposed tasks enables the interaction between semantic and spatial features in a unified way.
arXiv Detail & Related papers (2022-07-28T09:43:06Z)
- TRIE++: Towards End-to-End Information Extraction from Visually Rich Documents [51.744527199305445]
This paper proposes a unified end-to-end information extraction framework from visually rich documents.
Text reading and information extraction can reinforce each other via a well-designed multi-modal context block.
The framework can be trained in an end-to-end manner, achieving global optimization.
arXiv Detail & Related papers (2022-07-14T08:52:07Z) - RDU: A Region-based Approach to Form-style Document Understanding [69.29541701576858]
Key Information Extraction (KIE) is aimed at extracting structured information from form-style documents.
We develop a new KIE model named Region-based Document Understanding (RDU).
RDU takes as input the text content and corresponding coordinates of a document, and tries to predict the result by localizing a bounding-box-like region.
arXiv Detail & Related papers (2022-06-14T14:47:48Z)