TableNet: Deep Learning model for end-to-end Table detection and Tabular
data extraction from Scanned Document Images
- URL: http://arxiv.org/abs/2001.01469v1
- Date: Mon, 6 Jan 2020 10:25:32 GMT
- Title: TableNet: Deep Learning model for end-to-end Table detection and Tabular
data extraction from Scanned Document Images
- Authors: Shubham Paliwal, Vishwanath D, Rohit Rahul, Monika Sharma, Lovekesh
Vig
- Abstract summary: We propose a novel end-to-end deep learning model for both table detection and structure recognition.
TableNet exploits the interdependence between the twin tasks of table detection and table structure recognition.
The proposed model and extraction approach was evaluated on the publicly available ICDAR 2013 and Marmot Table datasets.
- Score: 18.016832803961165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the widespread use of mobile phones and scanners to photograph and
upload documents, the need for extracting the information trapped in
unstructured document images such as retail receipts, insurance claim forms and
financial invoices is becoming more acute. A major hurdle to this objective is
that these images often contain information in the form of tables and
extracting data from tabular sub-images presents a unique set of challenges.
This includes accurate detection of the tabular region within an image, and
subsequently detecting and extracting information from the rows and columns of
the detected table. While some progress has been made in table detection,
extracting the table contents is still a challenge since this involves more
fine grained table structure(rows & columns) recognition. Prior approaches have
attempted to solve the table detection and structure recognition problems
independently using two separate models. In this paper, we propose TableNet: a
novel end-to-end deep learning model for both table detection and structure
recognition. The model exploits the interdependence between the twin tasks of
table detection and table structure recognition to segment out the table and
column regions. This is followed by semantic rule-based row extraction from the
identified tabular sub-regions. The proposed model and extraction approach was
evaluated on the publicly available ICDAR 2013 and Marmot Table datasets
obtaining state of the art results. Additionally, we demonstrate that feeding
additional semantic features further improves model performance and that the
model exhibits transfer learning across datasets. Another contribution of this
paper is to provide additional table structure annotations for the Marmot data,
which currently only has annotations for table detection.
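The abstract describes a single model that segments both the table and column regions before rule-based row extraction. Below is a minimal, hypothetical PyTorch sketch of such a twin-branch segmentation network: a shared convolutional encoder feeding two decoder heads, one producing a table mask and one producing a column mask. The VGG-19 backbone, the simple 1x1-conv-plus-upsampling decoders, and all names are illustrative assumptions; the abstract itself does not specify the architecture.

```python
# Hypothetical sketch of a TableNet-style twin-branch segmentation model.
# Not the authors' code: the VGG-19 encoder and the decoder design are assumptions.
import torch
import torch.nn as nn
import torchvision


class MaskDecoder(nn.Module):
    """Turns shared encoder features into a full-resolution per-pixel mask logit."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Dropout2d(0.2),
            nn.Conv2d(256, 1, kernel_size=1),  # one logit per pixel
            # The encoder below downsamples by 32, so upsample back to input size.
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features)


class TwinBranchTableNet(nn.Module):
    """Shared encoder with two decoder branches: table mask and column mask."""

    def __init__(self):
        super().__init__()
        # Shared convolutional encoder (assumed VGG-19 features, 512 channels out).
        self.encoder = torchvision.models.vgg19(weights=None).features
        self.table_decoder = MaskDecoder(512)   # table-region segmentation
        self.column_decoder = MaskDecoder(512)  # column-region segmentation

    def forward(self, image: torch.Tensor):
        shared = self.encoder(image)
        return self.table_decoder(shared), self.column_decoder(shared)


if __name__ == "__main__":
    model = TwinBranchTableNet()
    page = torch.randn(1, 3, 1024, 1024)  # a scanned page image
    table_logits, column_logits = model(page)
    print(table_logits.shape, column_logits.shape)  # both (1, 1, 1024, 1024)
```

Training such a sketch would use per-pixel binary cross-entropy against ground-truth table and column masks; the predicted column regions could then feed the semantic rule-based row extraction described in the abstract.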
Related papers
- UniTabNet: Bridging Vision and Language Models for Enhanced Table Structure Recognition [55.153629718464565]
We introduce UniTabNet, a novel framework for table structure parsing based on an image-to-text model.
UniTabNet employs a "divide-and-conquer" strategy, utilizing an image-to-text model to decouple table cells and integrating both physical and logical decoders to reconstruct the complete table structure.
arXiv Detail & Related papers (2024-09-20T01:26:32Z)
- A large-scale dataset for end-to-end table recognition in the wild [13.717478398235055]
Table recognition (TR) is one of the research hotspots in pattern recognition.
Currently, end-to-end TR in real scenarios, accomplishing the three sub-tasks simultaneously, remains an unexplored research area.
We propose a new large-scale dataset named Table Recognition Set (TabRecSet) with diverse table forms sourced from multiple scenarios in the wild.
arXiv Detail & Related papers (2023-03-27T02:48:51Z)
- SEMv2: Table Separation Line Detection Based on Instance Segmentation [96.36188168694781]
We propose an accurate table structure recognizer, termed SEMv2 (SEM: Split, Embed and Merge).
We address the table separation line instance-level discrimination problem and introduce a table separation line detection strategy based on conditional convolution.
To comprehensively evaluate the SEMv2, we also present a more challenging dataset for table structure recognition, dubbed iFLYTAB.
arXiv Detail & Related papers (2023-03-08T05:15:01Z)
- Table Retrieval May Not Necessitate Table-specific Model Design [83.27735758203089]
We focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval?"
Based on an analysis on a table-based portion of the Natural Questions dataset (NQ-table), we find that structure plays a negligible role in more than 70% of the cases.
We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases.
None of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval.
arXiv Detail & Related papers (2022-05-19T20:35:23Z)
- TGRNet: A Table Graph Reconstruction Network for Table Structure Recognition [76.06530816349763]
We propose an end-to-end trainable table graph reconstruction network (TGRNet) for table structure recognition.
Specifically, the proposed method has two main branches, a cell detection branch and a cell logical location branch, to jointly predict the spatial location and the logical location of different cells.
arXiv Detail & Related papers (2021-06-20T01:57:05Z)
- Multi-Type-TD-TSR -- Extracting Tables from Document Images using a Multi-stage Pipeline for Table Detection and Table Structure Recognition: from OCR to Structured Table Representations [63.98463053292982]
The recognition of tables consists of two main tasks, namely table detection and table structure recognition.
Recent work shows a clear trend towards deep learning approaches coupled with the use of transfer learning for the task of table structure recognition.
We present a multistage pipeline named Multi-Type-TD-TSR, which offers an end-to-end solution for the problem of table recognition.
arXiv Detail & Related papers (2021-05-23T21:17:18Z)
- Deep Structured Feature Networks for Table Detection and Tabular Data Extraction from Scanned Financial Document Images [0.6299766708197884]
This research proposes automated table detection and tabular data extraction from financial PDF documents.
We propose a method consisting of three main processes, the first of which detects table areas with a Faster R-CNN (Region-based Convolutional Neural Network) model.
The detection model achieves excellent table detection performance on our customized dataset.
arXiv Detail & Related papers (2021-02-20T08:21:17Z)
- A Graph Representation of Semi-structured Data for Web Question Answering [96.46484690047491]
We propose a novel graph representation of Web tables and lists based on a systematic categorization of the components in semi-structured data as well as their relations.
Our method improves F1 score by 3.90 points over the state-of-the-art baselines.
arXiv Detail & Related papers (2020-10-14T04:01:54Z)
- Identifying Table Structure in Documents using Conditional Generative Adversarial Networks [0.0]
In many industries and in academic research, information is primarily transmitted in the form of unstructured documents.
We propose a top-down approach, first using a conditional generative adversarial network to map a table image into a standardised 'skeleton' table form.
We then derive the latent table structure using xy-cut projection and Genetic Algorithm optimisation.
arXiv Detail & Related papers (2020-01-13T20:42:40Z)
- Table Structure Extraction with Bi-directional Gated Recurrent Unit Networks [5.350788087718877]
This paper proposes a robust deep learning based approach to extract rows and columns from a detected table in document images with a high precision.
We have benchmarked our system on publicly available UNLV as well as ICDAR 2013 datasets on which it outperformed the state-of-the-art table structure extraction systems by a significant margin.
arXiv Detail & Related papers (2020-01-08T13:17:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.