DocXChain: A Powerful Open-Source Toolchain for Document Parsing and
Beyond
- URL: http://arxiv.org/abs/2310.12430v1
- Date: Thu, 19 Oct 2023 02:49:09 GMT
- Title: DocXChain: A Powerful Open-Source Toolchain for Document Parsing and
Beyond
- Authors: Cong Yao
- Abstract summary: DocXChain is a powerful open-source toolchain for document parsing.
It automatically converts the rich information embodied in unstructured documents, such as text, tables and charts, into structured representations.
- Score: 17.853066545805554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this report, we introduce DocXChain, a powerful open-source toolchain for
document parsing, which is designed and developed to automatically convert the
rich information embodied in unstructured documents, such as text, tables and
charts, into structured representations that are readable and manipulable by
machines. Specifically, basic capabilities, including text detection, text
recognition, table structure recognition and layout analysis, are provided.
Upon these basic capabilities, we also build a set of fully functional
pipelines for document parsing, i.e., general text reading, table parsing, and
document structurization, to drive various applications related to documents in
real-world scenarios. Moreover, DocXChain is concise, modularized and flexible,
such that it can be readily integrated with existing tools, libraries or models
(such as LangChain and ChatGPT), to construct more powerful systems that can
accomplish more complicated and challenging tasks. The code of DocXChain is
publicly available
at: https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/Applications/DocXChain
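The abstract describes DocXChain as a composition of basic capabilities (text detection, text recognition, table structure recognition, layout analysis) into end-to-end pipelines such as document structurization. The sketch below illustrates that compositional idea only; the class and function names are hypothetical stand-ins, not DocXChain's actual API, and the "detector" and "recognizer" are toy placeholders where a real system would run its models.

```python
# Hypothetical sketch of a document-structurization pipeline in the spirit of
# DocXChain: basic capabilities (layout analysis, recognition) composed into a
# pipeline that emits a machine-readable structure. All names are illustrative,
# not DocXChain's real API.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Region:
    kind: str              # "text", "table", or "chart"
    bbox: Tuple[int, int, int, int]  # (x0, y0, x1, y1) in pixels
    content: str = ""      # recognized text, filled in by the recognizer

def detect_layout(page: List[Tuple[str, tuple]]) -> List[Region]:
    # Stand-in for layout analysis; a real system would run a detector here.
    return [Region(kind=k, bbox=b) for k, b in page]

def recognize(region: Region) -> Region:
    # Stand-in for text/table recognition; a real system would run OCR here.
    region.content = f"<recognized {region.kind} at {region.bbox}>"
    return region

def structurize(page: List[Tuple[str, tuple]]) -> Dict:
    # Pipeline: layout analysis -> recognition -> structured representation.
    regions = [recognize(r) for r in detect_layout(page)]
    return {"regions": [{"kind": r.kind, "bbox": r.bbox, "content": r.content}
                        for r in regions]}

# Toy "page": (kind, bbox) pairs standing in for raw detector input.
page = [("text", (0, 0, 100, 20)), ("table", (0, 30, 100, 90))]
doc = structurize(page)
print(doc["regions"][0]["kind"])
```

The point of the design, as the abstract emphasizes, is that each stage is modular: the structured output can be handed off to downstream tools such as LangChain or ChatGPT.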
Related papers
- Document Parsing Unveiled: Techniques, Challenges, and Prospects for Structured Information Extraction [23.47150047875133]
Document parsing is essential for converting unstructured and semi-structured documents into machine-readable data.
Document parsing plays an indispensable role in both knowledge base construction and training data generation.
This paper discusses the challenges faced by modular document parsing systems and vision-language models in handling complex layouts.
arXiv Detail & Related papers (2024-10-28T16:11:35Z)
- DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Language Models [63.466265039007816]
We present DocGenome, a structured document benchmark constructed by annotating 500K scientific documents from 153 disciplines in the arXiv open-access community.
We conduct extensive experiments to demonstrate the advantages of DocGenome and objectively evaluate the performance of large models on our benchmark.
arXiv Detail & Related papers (2024-06-17T15:13:52Z)
- Docs2KG: Unified Knowledge Graph Construction from Heterogeneous Documents Assisted by Large Language Models [11.959445364035734]
80% of enterprise data reside in unstructured files, stored in data lakes that accommodate heterogeneous formats.
We introduce Docs2KG, a novel framework designed to extract multimodal information from diverse and heterogeneous documents.
Docs2KG generates a unified knowledge graph that represents the extracted key information.
arXiv Detail & Related papers (2024-06-05T05:35:59Z)
- KnowledgeHub: An end-to-end Tool for Assisted Scientific Discovery [1.6080795642111267]
This paper describes the KnowledgeHub tool, a scientific literature Information Extraction (IE) and Question Answering (QA) pipeline.
This is achieved by supporting the ingestion of PDF documents that are converted to text and structured representations.
A browser-based annotation tool enables annotating the contents of the PDF documents according to the ontology.
A knowledge graph is constructed from these entity and relation triples which can be queried to obtain insights from the data.
arXiv Detail & Related papers (2024-05-16T13:17:14Z)
- PDFTriage: Question Answering over Long, Structured Documents [60.96667912964659]
Representing structured documents as plain text is incongruous with the user's mental model of these documents with rich structure.
We propose PDFTriage that enables models to retrieve the context based on either structure or content.
Our benchmark dataset consists of 900+ human-generated questions over 80 structured documents.
arXiv Detail & Related papers (2023-09-16T04:29:05Z)
- mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding [55.4806974284156]
Document understanding refers to automatically extracting, analyzing, and comprehending information from digital documents, such as web pages.
Existing Multimodal Large Language Models (MLLMs) have demonstrated promising zero-shot capabilities in shallow, OCR-free text recognition.
arXiv Detail & Related papers (2023-07-04T11:28:07Z)
- Cross-Modal Entity Matching for Visually Rich Documents [4.8119678510491815]
Visually rich documents utilize visual cues to augment their semantics.
Existing works that enable structured querying on these documents do not take this into account.
We propose Juno -- a cross-modal entity matching framework to address this limitation.
arXiv Detail & Related papers (2023-03-01T18:26:14Z)
- Doc2Bot: Accessing Heterogeneous Documents via Conversational Bots [103.54897676954091]
Doc2Bot is a dataset for building machines that help users seek information via conversations.
Our dataset contains over 100,000 turns based on Chinese documents from five domains.
arXiv Detail & Related papers (2022-10-20T07:33:05Z)
- DOC2PPT: Automatic Presentation Slides Generation from Scientific Documents [76.19748112897177]
We present a novel task and approach for document-to-slide generation.
We propose a hierarchical sequence-to-sequence approach to tackle our task in an end-to-end manner.
Our approach exploits the inherent structures within documents and slides and incorporates paraphrasing and layout prediction modules to generate slides.
arXiv Detail & Related papers (2021-01-28T03:21:17Z)
- DocBank: A Benchmark Dataset for Document Layout Analysis [114.81155155508083]
We present DocBank, a benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis.
Experiment results show that models trained on DocBank accurately recognize the layout information for a variety of documents.
arXiv Detail & Related papers (2020-06-01T16:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.