SelfORE: Self-supervised Relational Feature Learning for Open Relation
Extraction
- URL: http://arxiv.org/abs/2004.02438v2
- Date: Tue, 6 Oct 2020 12:32:20 GMT
- Title: SelfORE: Self-supervised Relational Feature Learning for Open Relation
Extraction
- Authors: Xuming Hu, Chenwei Zhang, Yusong Xu, Lijie Wen, Philip S. Yu
- Abstract summary: Open-domain relation extraction is the task of extracting open-domain relation facts from natural language sentences.
We propose a self-supervised framework named SelfORE, which exploits weak, self-supervised signals.
Experimental results on three datasets show the effectiveness and robustness of SelfORE.
- Score: 60.08464995629325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open relation extraction is the task of extracting open-domain relation facts
from natural language sentences. Existing works either utilize heuristics or
distant-supervised annotations to train a supervised classifier over
pre-defined relations, or adopt unsupervised methods with additional
assumptions that have less discriminative power. In this work, we propose a
self-supervised framework named SelfORE, which exploits weak, self-supervised
signals by leveraging a large pretrained language model for adaptive clustering
on contextualized relational features, and bootstraps the self-supervised
signals by improving contextualized features in relation classification.
Experimental results on three datasets show the effectiveness and robustness of
SelfORE on open-domain relation extraction when compared with competitive
baselines.
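The abstract describes a loop of adaptive clustering over contextualized relational features followed by pseudo-label relation classification. The minimal sketch below is an illustration only, not the authors' code: it assumes a BERT encoder, [CLS] pooling in place of the paper's entity-pair representation, and K-means as a stand-in for the adaptive clustering step; the model name, cluster count, and toy sentences are all assumptions.

```python
# Sketch of a SelfORE-style bootstrap loop (illustrative only, not the
# authors' code): 1) encode sentences with a pretrained LM, 2) cluster the
# contextualized features into pseudo relation labels, 3) train a relation
# classifier on those pseudo labels, then repeat with the improved encoder.
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any pretrained encoder could be used
NUM_PSEUDO_RELATIONS = 2          # assumption: tiny value for this toy example

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def relational_features(sentences):
    """Contextualized features per sentence ([CLS] pooling stands in for the
    paper's entity-pair representation)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, dim)
    return hidden[:, 0, :]                           # (batch, dim)

def pseudo_labels(features):
    """Clustering step: cluster assignments act as weak supervision."""
    return KMeans(n_clusters=NUM_PSEUDO_RELATIONS, n_init=10).fit_predict(features.numpy())

sentences = [
    "Bill Gates founded Microsoft in 1975.",
    "Steve Jobs co-founded Apple in 1976.",
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
]
labels = pseudo_labels(relational_features(sentences))
# A classification head on the encoder would now be fine-tuned on `labels`,
# and the encode-cluster-classify loop repeated.
print(labels)
```

In the paper itself the clustering is adaptive and the classifier updates feed back into the encoder, which is what lets the bootstrapping improve the relational features; the sketch only shows the overall data flow.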
Related papers
- Siamese Representation Learning for Unsupervised Relation Extraction [5.776369192706107]
Unsupervised relation extraction (URE) aims at discovering underlying relations between named entity pairs from open-domain plain text.
Existing URE models that use contrastive learning, attracting positive samples and repulsing negative samples to promote better separation, have achieved decent results.
We propose Siamese Representation Learning for Unsupervised Relation Extraction -- a novel framework that leverages only positive pairs for representation learning (a generic sketch of this positive-pair-only idea appears after this list).
arXiv Detail & Related papers (2023-10-01T02:57:43Z)
- Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery [52.95935278819512]
We conduct the first study on spurious correlations for open-domain response generation models, based on CGDIALOG, a corpus curated in our work.
Inspired by causal discovery algorithms, we propose a novel model-agnostic method for training and inference of response generation models.
arXiv Detail & Related papers (2023-03-02T06:33:48Z)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which derives hierarchical signals from the relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
arXiv Detail & Related papers (2022-05-04T17:56:48Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Element Intervention for Open Relation Extraction [27.408443348900057]
OpenRE aims to cluster relation instances referring to the same underlying relation.
Current OpenRE models are commonly trained on the datasets generated from distant supervision.
In this paper, we revisit the procedure of OpenRE from a causal view.
arXiv Detail & Related papers (2021-06-17T14:37:13Z)
- Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task whose tag set combines trigger tags and entity tags.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
- Clustering-based Unsupervised Generative Relation Extraction [3.342376225738321]
We propose a Clustering-based Unsupervised generative Relation Extraction framework (CURE).
We use an "Encoder-Decoder" architecture to perform self-supervised learning so that the encoder can extract relation information.
Our model performs better than state-of-the-art models on both New York Times (NYT) and United Nations Parallel Corpus (UNPC) standard datasets.
arXiv Detail & Related papers (2020-09-26T20:36:40Z)
- Self-Supervised Relational Reasoning for Representation Learning [5.076419064097733]
In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on unlabeled data.
We propose a novel self-supervised formulation of relational reasoning that allows a learner to bootstrap a signal from information implicit in unlabeled data.
We evaluate the proposed method following a rigorous experimental procedure, using standard datasets, protocols, and backbones.
arXiv Detail & Related papers (2020-06-10T14:24:25Z)
- Relabel the Noise: Joint Extraction of Entities and Relations via Cooperative Multiagents [52.55119217982361]
We propose a joint extraction approach to handle noisy instances with a group of cooperative multiagents.
To handle noisy instances in a fine-grained manner, each agent in the cooperative group evaluates the instance by calculating a continuous confidence score from its own perspective.
A confidence consensus module is designed to gather the wisdom of all agents and re-distribute the noisy training set with confidence-scored labels.
arXiv Detail & Related papers (2020-04-21T12:03:04Z)
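Several of the papers above (the Siamese URE and HiURE entries in particular, as noted in the first entry) learn relation representations from positive pairs without negative sampling. The sketch below is a generic SimSiam-style stand-in for that family of objectives, not the exact method of any cited paper; the feature dimension, projector/predictor shapes, and the random tensors in the toy usage are assumptions.

```python
# Generic positive-pair-only (Siamese) representation learning sketch,
# offered as an illustration of the idea, not as any cited paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamesePairLearner(nn.Module):
    """Pulls two views of the same relation instance together using a
    predictor head and a stop-gradient target; no negatives are used."""
    def __init__(self, dim=768, proj_dim=256):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        self.predictor = nn.Sequential(
            nn.Linear(proj_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))

    def forward(self, view_a, view_b):
        # view_a / view_b: two encodings of the same sentence, e.g. produced
        # with different dropout masks or augmentations (assumption).
        z_a, z_b = self.projector(view_a), self.projector(view_b)
        p_a, p_b = self.predictor(z_a), self.predictor(z_b)
        # Symmetric negative cosine similarity; .detach() is the stop-gradient
        # that prevents representational collapse without negative samples.
        loss = -(F.cosine_similarity(p_a, z_b.detach(), dim=-1).mean() +
                 F.cosine_similarity(p_b, z_a.detach(), dim=-1).mean()) / 2
        return loss

# Toy usage: random tensors stand in for encoder outputs of the two views.
learner = SiamesePairLearner()
view_a, view_b = torch.randn(4, 768), torch.randn(4, 768)
loss = learner(view_a, view_b)
loss.backward()
print(loss.item())
```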