Document Structure in Long Document Transformers
- URL: http://arxiv.org/abs/2401.17658v1
- Date: Wed, 31 Jan 2024 08:28:06 GMT
- Title: Document Structure in Long Document Transformers
- Authors: Jan Buchmann, Max Eichler, Jan-Micha Bodensohn, Ilia Kuznetsov, Iryna Gurevych
- Abstract summary: Long documents often exhibit structure with hierarchically organized elements of different functions, such as section headers and paragraphs.
Despite the omnipresence of document structure, its role in natural language processing (NLP) remains opaque.
Do long-document Transformer models acquire an internal representation of document structure during pre-training?
How can structural information be communicated to a model after pre-training, and how does it influence downstream performance?
- Score: 64.76981299465885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Long documents often exhibit structure with hierarchically organized elements
of different functions, such as section headers and paragraphs. Despite the
omnipresence of document structure, its role in natural language processing
(NLP) remains opaque. Do long-document Transformer models acquire an internal
representation of document structure during pre-training? How can structural
information be communicated to a model after pre-training, and how does it
influence downstream performance? To answer these questions, we develop a novel
suite of probing tasks to assess structure-awareness of long-document
Transformers, propose general-purpose structure infusion methods, and evaluate
the effects of structure infusion on QASPER and Evidence Inference, two
challenging long-document NLP tasks. Results on LED and LongT5 suggest that
they acquire implicit understanding of document structure during pre-training,
which can be further enhanced by structure infusion, leading to improved
end-task performance. To foster research on the role of document structure in
NLP modeling, we make our data and code publicly available.
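
To make structure infusion concrete, below is a minimal sketch of one plausible token-level variant: marking each structural element with a special node-type token before the document is flattened for the encoder. The marker strings, the StructuredNode type, and the tokenizer handling are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of token-based structure infusion (illustrative only).
# Assumptions: node-type marker tokens, a HuggingFace-style tokenizer, and a
# simple (node_type, text) representation of the document tree's elements.
from dataclasses import dataclass
from typing import List

@dataclass
class StructuredNode:
    node_type: str  # e.g. "title", "section", "paragraph"
    text: str

# Hypothetical special tokens marking the start of each structural element.
STRUCT_TOKENS = {"title": "<doc-title>", "section": "<sec>", "paragraph": "<par>"}

def infuse_structure(nodes: List[StructuredNode]) -> str:
    """Flatten a structured document, prefixing each element with its marker."""
    parts = []
    for node in nodes:
        marker = STRUCT_TOKENS.get(node.node_type, "")
        parts.append(f"{marker} {node.text}".strip())
    return "\n".join(parts)

doc = [
    StructuredNode("title", "Document Structure in Long Document Transformers"),
    StructuredNode("section", "Introduction"),
    StructuredNode("paragraph", "Long documents often exhibit structure ..."),
]
flat_input = infuse_structure(doc)
# The marker strings would be registered as special tokens so the tokenizer
# keeps them atomic, e.g.:
#   tokenizer.add_special_tokens({"additional_special_tokens": list(STRUCT_TOKENS.values())})
#   model.resize_token_embeddings(len(tokenizer))
print(flat_input)
```

Token markers are only one possible channel; structural information could equally be injected through dedicated embeddings added to the input representation.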
Related papers
- Document Parsing Unveiled: Techniques, Challenges, and Prospects for Structured Information Extraction [23.47150047875133]
Document parsing converts unstructured and semi-structured documents into machine-readable data and plays an indispensable role in both knowledge base construction and training data generation.
This paper discusses the challenges faced by modular document parsing systems and vision-language models in handling complex layouts.
arXiv Detail & Related papers (2024-10-28T16:11:35Z)
- Seg2Act: Global Context-aware Action Generation for Document Logical Structuring [45.55145491566147]
We introduce Seg2Act, an end-to-end, generation-based method for document logical structuring.
Seg2Act iteratively generates the action sequence via a global context-aware generative model, and simultaneously updates its global context and current logical structure.
Experiments on ChCatExt and HierDoc datasets demonstrate the superior performance of Seg2Act in both supervised and transfer learning settings.
arXiv Detail & Related papers (2024-10-09T11:58:40Z)
- HDT: Hierarchical Document Transformer [70.2271469410557]
HDT exploits document structure by introducing auxiliary anchor tokens and redesigning the attention mechanism into a sparse multi-level hierarchy.
We develop a novel sparse attention kernel that considers the hierarchical structure of documents.
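(A minimal sketch of such a hierarchical attention mask appears after this list.)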
arXiv Detail & Related papers (2024-07-11T09:28:04Z)
- StructGPT: A General Framework for Large Language Model to Reason over Structured Data [117.13986738340027]
We develop an Iterative Reading-then-Reasoning (IRR) approach for solving question answering tasks based on structured data.
Our approach significantly boosts the performance of ChatGPT and achieves performance comparable to full-data supervised-tuning baselines.
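(A minimal sketch of such an iterative read-then-reason loop appears after this list.)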
arXiv Detail & Related papers (2023-05-16T17:45:23Z)
- Autoregressive Structured Prediction with Language Models [73.11519625765301]
We describe an approach that models structures as sequences of actions generated autoregressively with pretrained language models (PLMs).
Our approach achieves a new state of the art on all of the structured prediction tasks examined.
arXiv Detail & Related papers (2022-10-26T13:27:26Z)
- Modeling Document-level Temporal Structures for Building Temporal Dependency Graphs [31.32005522003613]
We propose to leverage news discourse profiling to model document-level temporal structures for building temporal dependency graphs.
Our key observation is that the functional roles of sentences used for profiling news discourse signify different time frames relevant to a news story and can, therefore, help to recover the global temporal structure of a document.
arXiv Detail & Related papers (2022-10-21T07:45:17Z)
- Unified Pretraining Framework for Document Understanding [52.224359498792836]
We present UDoc, a new unified pretraining framework for document understanding.
UDoc is designed to support most document understanding tasks, extending the Transformer to take multimodal embeddings as input.
An important feature of UDoc is that it learns a generic representation by making use of three self-supervised losses.
arXiv Detail & Related papers (2022-04-22T21:47:04Z)
- Long Document Summarization with Top-down and Bottom-up Inference [113.29319668246407]
We propose a principled inference framework that improves summarization models in two respects.
Our framework assumes a hierarchical latent structure of a document, in which the top level captures long-range dependencies.
We demonstrate the effectiveness of the proposed framework on a diverse set of summarization datasets.
arXiv Detail & Related papers (2022-03-15T01:24:51Z)
- ERNIE-DOC: The Retrospective Long-Document Modeling Transformer [24.426571160930635]
We propose ERNIE-DOC, a document-level language pretraining model based on Recurrence Transformers.
Two well-designed techniques, a retrospective feed mechanism and an enhanced recurrence mechanism, give ERNIE-DOC a much longer effective context length.
Experiments are conducted on a range of both English and Chinese document-level tasks.
arXiv Detail & Related papers (2020-12-31T16:12:48Z)
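
As noted in the HDT entry above, hierarchical sparse attention can be pictured as a two-level mask: auxiliary anchor tokens attend globally, while ordinary tokens attend only within their own section and to the anchors. The dense-mask construction below is an illustrative sketch under those assumptions, not HDT's actual sparse kernel, and the function and variable names are invented for this example.

```python
# Minimal sketch of a two-level hierarchical attention mask (illustrative;
# HDT's real kernel is a custom sparse implementation, not a dense mask).
import numpy as np

def hierarchical_mask(section_ids, anchor_flags):
    """allowed[i, j] is True if token i may attend to token j.

    section_ids:  int array, section index of each token
    anchor_flags: bool array, True for auxiliary anchor tokens
    """
    section_ids = np.asarray(section_ids)
    anchor_flags = np.asarray(anchor_flags, dtype=bool)
    # Ordinary tokens: local attention within their own section.
    allowed = section_ids[:, None] == section_ids[None, :]
    # Every token may attend to the anchors, and anchors attend everywhere,
    # so information flows between sections only through the anchor level.
    allowed |= anchor_flags[None, :]
    allowed[anchor_flags, :] = True
    return allowed

# Toy document: one anchor plus two tokens per section, two sections.
mask = hierarchical_mask(
    section_ids=[0, 0, 0, 1, 1, 1],
    anchor_flags=[True, False, False, True, False, False],
)
print(mask.astype(int))
```

Implemented sparsely, this pattern lets attention cost scale with section size and the number of anchors rather than with the square of document length.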
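Likewise, for the StructGPT entry above, an iterative reading-then-reasoning loop can be sketched as alternating "read" calls that fetch a bounded slice of the structured source with "reason" calls in which an LLM either answers or requests more evidence. The interface, prompt wording, and ANSWER:/READ: convention are assumptions for illustration, not StructGPT's actual design.

```python
# Minimal sketch of an iterative reading-then-reasoning (IRR) loop
# (illustrative; interface names and the ANSWER:/READ: convention are assumed).
from typing import Callable

def irr_answer(question: str,
               read: Callable[[str], str],
               llm: Callable[[str], str],
               max_steps: int = 5) -> str:
    """Alternate between reading structured data and reasoning with an LLM."""
    evidence = []
    request = question  # the first read is driven by the question itself
    for _ in range(max_steps):
        # Read: fetch a bounded, linearized slice of the structured source.
        evidence.append(read(request))
        prompt = (
            f"Question: {question}\n"
            "Evidence so far:\n" + "\n".join(evidence) + "\n"
            "Reply with 'ANSWER: <answer>' if the evidence suffices, "
            "or 'READ: <what to fetch next>' otherwise."
        )
        # Reason: the LLM either answers or asks for more evidence.
        reply = llm(prompt)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("READ:"):
            request = reply[len("READ:"):].strip()
        else:
            request = reply.strip()  # fall back: treat the reply as a request
    return "no answer within step budget"
```

A toy `read` over an in-memory table and any chat-completion wrapper for `llm` is enough to exercise the loop end to end.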