Modular Self-Supervision for Document-Level Relation Extraction
- URL: http://arxiv.org/abs/2109.05362v1
- Date: Sat, 11 Sep 2021 20:09:18 GMT
- Title: Modular Self-Supervision for Document-Level Relation Extraction
- Authors: Sheng Zhang, Cliff Wong, Naoto Usuyama, Sarthak Jain, Tristan Naumann,
Hoifung Poon
- Abstract summary: We propose decomposing document-level relation extraction into relation detection and argument resolution.
We conduct a thorough evaluation in biomedical machine reading for precision oncology, where cross-paragraph relation mentions are prevalent.
Our method outperforms prior state of the art, such as multi-scale learning and graph neural networks, by over 20 absolute F1 points.
- Score: 17.039775384229355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting relations across large text spans has been relatively
underexplored in NLP, but it is particularly important for high-value domains
such as biomedicine, where obtaining high recall of the latest findings is
crucial for practical applications. Compared to conventional information
extraction confined to short text spans, document-level relation extraction
faces additional challenges in both inference and learning. Given longer text
spans, state-of-the-art neural architectures are less effective and
task-specific self-supervision such as distant supervision becomes very noisy.
In this paper, we propose decomposing document-level relation extraction into
relation detection and argument resolution, taking inspiration from Davidsonian
semantics. This enables us to incorporate explicit discourse modeling and
leverage modular self-supervision for each sub-problem, which is less
noise-prone and can be further refined end-to-end via variational EM. We
conduct a thorough evaluation in biomedical machine reading for precision
oncology, where cross-paragraph relation mentions are prevalent. Our method
outperforms prior state of the art, such as multi-scale learning and graph
neural networks, by over 20 absolute F1 points. The gain is particularly
pronounced among the most challenging relation instances whose arguments never
co-occur in a paragraph.
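The proposed decomposition is concrete enough to sketch. The toy Python below is a hedged illustration, not the authors' implementation: the trigger-word detector, the candidate-mention list, and the distance-based resolution scores are invented stand-ins for the paper's neural relation detector, its cross-paragraph argument resolver, and the variational-EM refinement that couples them.

```python
# Toy sketch of the decomposition (illustrative; not the authors' code).
# Module 1 detects a relation and its locally available arguments inside a
# short span; Module 2 resolves the remaining argument roles against
# mentions elsewhere in the document.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RelationCandidate:
    paragraph: int               # index of the paragraph with the trigger
    predicate: str               # hypothetical relation type
    local_args: Dict[str, str]   # roles filled within the same paragraph
    missing_roles: List[str]     # roles whose fillers sit in other paragraphs

def detect_relations(paragraphs: List[str]) -> List[RelationCandidate]:
    """Relation detection: a trivial trigger matcher standing in for a
    self-supervised neural detector trained on short spans."""
    candidates = []
    for i, para in enumerate(paragraphs):
        if "sensitizes" in para:
            candidates.append(RelationCandidate(
                paragraph=i,
                predicate="drug-gene-mutation response",
                local_args={"drug": "gefitinib"},
                missing_roles=["mutation"]))
    return candidates

def resolve_argument(cand: RelationCandidate, paragraphs: List[str],
                     mentions: List[str]) -> Dict[str, float]:
    """Argument resolution: score candidate mentions for a missing role.
    The normalized distance-based score mimics an E-step posterior over
    fillers; retraining both modules on these posteriors (the M-step)
    is omitted from this sketch."""
    scores: Dict[str, float] = {}
    for i, para in enumerate(paragraphs):
        for m in mentions:
            if m in para:
                scores[m] = scores.get(m, 0.0) + 1.0 / (1 + abs(i - cand.paragraph))
    total = sum(scores.values()) or 1.0
    return {m: s / total for m, s in scores.items()}

paragraphs = [
    "The EGFR L858R mutation was observed in most responders.",
    "In these patients, gefitinib sensitizes tumors carrying the mutation.",
]
for cand in detect_relations(paragraphs):
    posterior = resolve_argument(cand, paragraphs, mentions=["L858R", "T790M"])
    print(cand.predicate, cand.local_args, "| mutation ->", posterior)
```

The appeal of the split is that each module can be self-supervised where its signal is reliable (short-span supervision for detection, coreference-style signals for resolution), instead of applying one noisy document-level signal to the whole task.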
Related papers
- Hierarchical Attention Graph for Scientific Document Summarization in Global and Local Level [3.7651378994837104]
Long input hinders simultaneous modeling of both global high-order relations between sentences and local intra-sentence relations.
We propose HAESum, a novel approach utilizing graph neural networks to model documents based on their hierarchical discourse structure.
We validate our approach on two benchmark datasets, and the experimental results demonstrate the effectiveness of HAESum.
arXiv Detail & Related papers (2024-05-16T15:46:30Z)
- PromptRE: Weakly-Supervised Document-Level Relation Extraction via Prompting-Based Data Programming [30.597623178206874]
We propose PromptRE, a novel weakly-supervised document-level relation extraction method.
PromptRE incorporates the label distribution and entity types as prior knowledge to improve the performance.
Experimental results on ReDocRED, a benchmark dataset for document-level relation extraction, demonstrate the superiority of PromptRE over baseline approaches.
arXiv Detail & Related papers (2023-10-13T17:23:17Z)
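As a rough illustration of PromptRE's "prior knowledge" idea (the relation names, prior values, and type constraints below are invented; this is not PromptRE's actual procedure), a weak labeler can rescore prompt-model outputs by a label distribution and filter them by entity-type compatibility:

```python
# Invented toy example of data programming with priors; not PromptRE's code.
LABEL_PRIOR = {"founder_of": 0.05, "ceo_of": 0.15, "no_relation": 0.80}
TYPE_CONSTRAINTS = {("PER", "ORG"): {"founder_of", "ceo_of", "no_relation"}}

def weak_label(prompt_scores, head_type, tail_type):
    """Keep relations compatible with the entity-type pair, then rescore
    the prompt model's outputs by the label distribution prior."""
    allowed = TYPE_CONSTRAINTS.get((head_type, tail_type), {"no_relation"})
    rescored = {r: s * LABEL_PRIOR.get(r, 0.0)
                for r, s in prompt_scores.items() if r in allowed}
    return max(rescored, key=rescored.get) if rescored else "no_relation"

print(weak_label({"founder_of": 0.1, "ceo_of": 0.8, "no_relation": 0.1},
                 "PER", "ORG"))  # prints "ceo_of"
```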
- A Comprehensive Survey of Document-level Relation Extraction (2016-2023) [3.0204640945657326]
Document-level relation extraction (DocRE) is an active area of research in natural language processing (NLP).
This paper aims to provide a comprehensive overview of recent advances in this field, highlighting its different applications in comparison to sentence-level relation extraction.
arXiv Detail & Related papers (2023-09-28T12:43:32Z)
- Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis [89.04041100520881]
This research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image.
We develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning between the same and different modalities.
arXiv Detail & Related papers (2023-05-25T15:26:13Z)
- Improving Long Tailed Document-Level Relation Extraction via Easy Relation Augmentation and Contrastive Learning [66.83982926437547]
We argue that mitigating the long-tailed distribution problem is crucial for DocRE in the real-world scenario.
Motivated by this, we propose an Easy Relation Augmentation (ERA) method for improving DocRE.
arXiv Detail & Related papers (2022-05-21T06:15:11Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
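At its simplest, the "supervise intermediate steps" idea in SAIS reduces to a multi-task objective. A hedged sketch (the task names and weights are assumptions, not SAIS's actual loss):

```python
# Illustrative multi-task objective; tasks and weights are assumptions.
AUX_WEIGHTS = {"relation": 1.0, "evidence": 0.5, "entity_type": 0.5}

def multi_task_loss(task_losses):
    """Sum the relation-extraction loss with weighted auxiliary losses that
    explicitly supervise intermediate steps such as evidence retrieval."""
    return sum(AUX_WEIGHTS[t] * v for t, v in task_losses.items())

print(multi_task_loss({"relation": 0.9, "evidence": 0.4, "entity_type": 0.2}))
# 0.9 + 0.5 * 0.4 + 0.5 * 0.2 = 1.2
```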
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
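One way to picture a prototype acting as an "implicit factor" between entities (a guess at the shape of the idea, not the paper's model; the vectors below are fixed toy values rather than learned from unlabeled text):

```python
# Toy prototype matching; real prototypes would be learned from unlabeled text.
PROTOTYPES = {"born_in": [1.0, 0.0], "employee_of": [0.0, 1.0]}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def nearest_prototype(pair_embedding):
    """Treat the best-matching prototype as the latent relation factor
    connecting the two entities."""
    return max(PROTOTYPES, key=lambda r: dot(pair_embedding, PROTOTYPES[r]))

print(nearest_prototype([0.9, 0.2]))  # prints "born_in"
```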
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on the canonical task of text classification.
Despite this success, their performance can be largely jeopardized in practice because they are unable to capture high-order interactions between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) -- which can obtain more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
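The hypergraph view is easy to sketch: words are nodes and each sentence is one hyperedge connecting all of its words, so a single edge carries a high-order (many-word) interaction. The uniform aggregation below is a stand-in for HyperGAT's learned node-to-edge and edge-to-node attention:

```python
from collections import defaultdict

# A toy document as sentences of words; features are just sentence lengths.
doc = [["cheap", "flights", "to", "paris"], ["book", "cheap", "hotels"]]

# Map each word (node) to the sentences (hyperedges) containing it.
membership = defaultdict(list)
for e, sentence in enumerate(doc):
    for w in sentence:
        membership[w].append(e)

# Pass 1: node -> hyperedge aggregation (here, sentence length as a feature).
edge_repr = [len(sentence) for sentence in doc]
# Pass 2: hyperedge -> node aggregation (uniform weights, not attention).
node_repr = {w: sum(edge_repr[e] for e in es) / len(es)
             for w, es in membership.items()}
print(node_repr["cheap"])  # "cheap" pools information from both sentences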
- Enhancing Extractive Text Summarization with Topic-Aware Graph Neural Networks [21.379555672973975]
This paper proposes a graph neural network (GNN)-based extractive summarization model.
Our model integrates a joint neural topic model (NTM) to discover latent topics, which can provide document-level features for sentence selection.
The experimental results demonstrate that our model achieves state-of-the-art results on the CNN/DM and NYT datasets.
arXiv Detail & Related papers (2020-10-13T09:30:04Z)
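The role of the topic model can be pictured as mixing a sentence's local salience with its alignment to document-level topics (all numbers and the mixing weight below are invented; this is not the paper's scoring function):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def select_sentences(salience, sent_topics, doc_topics, k=1, alpha=0.5):
    """Rank sentences by a mix of local salience and alignment with
    document-level topic features, then pick the top k."""
    scores = [alpha * s + (1 - alpha) * dot(t, doc_topics)
              for s, t in zip(salience, sent_topics)]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

print(select_sentences(salience=[0.2, 0.6],
                       sent_topics=[[0.9, 0.1], [0.2, 0.8]],
                       doc_topics=[0.7, 0.3]))  # picks sentence 1
```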
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantees about the quality of the information presented and is not responsible for any consequences of its use.