MORE: A Metric Learning Based Framework for Open-domain Relation
Extraction
- URL: http://arxiv.org/abs/2206.00289v1
- Date: Wed, 1 Jun 2022 07:51:20 GMT
- Title: MORE: A Metric Learning Based Framework for Open-domain Relation
Extraction
- Authors: Yutong Wang, Renze Lou, Kai Zhang, MaoYan Chen, Yujiu Yang
- Abstract summary: Open relation extraction (OpenRE) is the task of extracting relation schemes from open-domain corpora.
We propose a novel learning framework named MORE (Metric learning-based Open Relation Extraction).
- Score: 25.149590577718996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Open relation extraction (OpenRE) is the task of extracting relation schemes
from open-domain corpora. Most existing OpenRE methods either do not fully
benefit from high-quality labeled corpora or cannot learn semantic
representations directly, which hurts downstream clustering efficiency. To address
these problems, in this work, we propose a novel learning framework named MORE
(Metric learning-based Open Relation Extraction). The framework utilizes deep
metric learning to obtain rich supervision signals from labeled data and drive
the neural model to learn semantic relational representation directly.
Experimental results on two real-world datasets show that our method outperforms
other state-of-the-art baselines. Our source code is available on GitHub.
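As a rough illustration of the abstract's core idea, the sketch below shows how deep metric learning (here, a simple triplet margin loss in PyTorch) can push a sentence encoder toward relation embeddings that cluster well. The encoder architecture, margin, and batch layout are illustrative assumptions, not the authors' actual design.

```python
# Hedged sketch: triplet-margin metric learning over relation embeddings,
# loosely in the spirit of the abstract above. The encoder, margin, and
# data layout are illustrative assumptions, not the authors' actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationEncoder(nn.Module):
    """Toy encoder: maps pre-extracted sentence features to a relation embedding."""
    def __init__(self, in_dim: int = 768, emb_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so distances reflect angular similarity.
        return F.normalize(self.proj(x), dim=-1)

def triplet_step(encoder, anchor, positive, negative, margin: float = 0.5):
    """Pull same-relation pairs together, push different-relation pairs apart."""
    za, zp, zn = encoder(anchor), encoder(positive), encoder(negative)
    return F.triplet_margin_loss(za, zp, zn, margin=margin)

# Usage with random stand-in features; labeled data would supply real triplets.
enc = RelationEncoder()
a, p, n = (torch.randn(32, 768) for _ in range(3))
loss = triplet_step(enc, a, p, n)
loss.backward()
```

Clustering the resulting embeddings (e.g., with K-means) would then recover relation groups, which is the downstream step the abstract alludes to.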
Related papers
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models [61.14336781917986]
We introduce OpenR, an open-source framework for enhancing the reasoning capabilities of large language models (LLMs).
OpenR unifies data acquisition, reinforcement learning training, and non-autoregressive decoding into a cohesive software platform.
Our work is the first to provide an open-source framework that explores the core techniques of OpenAI's o1 model with reinforcement learning.
arXiv Detail & Related papers (2024-10-12T23:42:16Z)
- ExaRanker-Open: Synthetic Explanation for IR using Open-Source LLMs [60.81649785463651]
We introduce ExaRanker-Open, where we adapt and explore the use of open-source language models to generate explanations.
Our findings reveal that incorporating explanations consistently enhances neural rankers, with benefits escalating as the LLM size increases.
arXiv Detail & Related papers (2024-02-09T11:23:14Z)
- Relation-aware Ensemble Learning for Knowledge Graph Embedding [68.94900786314666]
We propose to learn an ensemble by leveraging existing methods in a relation-aware manner.
However, exploring these semantics with a relation-aware ensemble leads to a much larger search space than general ensemble methods.
We propose a divide-search-combine algorithm RelEns-DSC that searches the relation-wise ensemble weights independently.
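A hedged, toy sketch of the relation-wise weighting idea described in this entry: for each relation, convex weights over the base models' candidate scores are searched independently against a validation metric. The scoring layout, MRR metric, and random search are illustrative assumptions, not the actual RelEns-DSC procedure.

```python
# Hedged sketch: per-relation ensemble weighting over base KG-embedding scorers.
# The scoring layout, MRR metric, and random search are assumptions made for
# illustration; the actual RelEns-DSC search strategy may differ.
import numpy as np

def mrr(scores: np.ndarray, true_idx: np.ndarray) -> float:
    """Mean reciprocal rank of the true candidate under the given scores."""
    better = scores > scores[np.arange(len(true_idx)), true_idx, None]
    return float((1.0 / (better.sum(axis=1) + 1)).mean())

def search_relation_weights(per_model_scores, true_idx, trials: int = 200, seed: int = 0):
    """per_model_scores: list of K arrays, each (n_queries, n_candidates),
    all belonging to queries of a single relation. Returns the best weights."""
    rng = np.random.default_rng(seed)
    stacked = np.stack(per_model_scores)                # (K, n, c)
    best_w, best_m = None, -1.0
    for _ in range(trials):
        w = rng.dirichlet(np.ones(stacked.shape[0]))    # random convex weights
        m = mrr(np.tensordot(w, stacked, axes=1), true_idx)
        if m > best_m:
            best_w, best_m = w, m
    return best_w

# Each relation r gets its own independent search:
# weights[r] = search_relation_weights(scores_by_relation[r], gold_by_relation[r])
```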
arXiv Detail & Related papers (2023-10-13T07:40:12Z)
- Syntactic Multi-view Learning for Open Information Extraction [26.1066324477346]
Open Information Extraction (OpenIE) aims to extract relational tuples from open-domain sentences.
In this paper, we model both constituency and dependency trees into word-level graphs.
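To make the word-level-graph idea concrete, here is a minimal sketch that converts a dependency parse into a graph over words; spaCy and networkx are assumed tooling, and the paper's actual construction (including its constituency-tree view) may differ.

```python
# Hedged sketch: building a word-level graph from a dependency parse.
# spaCy and networkx are assumed tooling; the paper's actual graph
# construction (and its constituency-tree view) may differ.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def dependency_word_graph(sentence: str) -> nx.Graph:
    doc = nlp(sentence)
    g = nx.Graph()
    for tok in doc:
        g.add_node(tok.i, text=tok.text)
        if tok.head.i != tok.i:          # the root points to itself; skip that edge
            g.add_edge(tok.i, tok.head.i, dep=tok.dep_)
    return g

g = dependency_word_graph("The company acquired the startup in 2020.")
print(g.edges(data=True))
```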
arXiv Detail & Related papers (2022-12-05T07:15:41Z)
- FV-UPatches: Enhancing Universality in Finger Vein Recognition [0.6299766708197883]
We propose a universal learning-based framework, which achieves generalization while training with limited data.
The proposed framework shows application potential in other vein-based biometric recognition as well.
arXiv Detail & Related papers (2022-06-02T14:20:22Z)
- HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression [53.90578309960526]
Large pre-trained language models (PLMs) have shown overwhelming performance compared with traditional neural network methods.
We propose a hierarchical relational knowledge distillation (HRKD) method to capture both hierarchical and domain relational information.
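The entry gives no implementation detail; as a loose illustration of distillation with a relational term, the sketch below combines standard logit distillation with pairwise-similarity matching between teacher and student features. This is a generic relational-KD loss under assumed inputs, not HRKD's hierarchical, domain-aware objective.

```python
# Hedged sketch: logit distillation plus a relational term that matches
# pairwise sample similarities between teacher and student features.
# This is a generic relational-KD loss, not HRKD's hierarchical objective.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, student_feats, teacher_feats,
                 temperature: float = 2.0, alpha: float = 0.5):
    # Soft-label distillation on the classifier logits.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  F.softmax(teacher_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    # Relational term: align normalized pairwise-similarity matrices.
    def pairwise_sim(x):
        x = F.normalize(x, dim=-1)
        return x @ x.t()
    rel = F.mse_loss(pairwise_sim(student_feats), pairwise_sim(teacher_feats))
    return alpha * kd + (1.0 - alpha) * rel

# Usage with random stand-ins (batch of 16, 10 classes, 128-d features):
s_logits, t_logits = torch.randn(16, 10), torch.randn(16, 10)
s_feats, t_feats = torch.randn(16, 128), torch.randn(16, 128)
loss = distill_loss(s_logits, t_logits, s_feats, t_feats)
```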
arXiv Detail & Related papers (2021-10-16T11:23:02Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- SelfORE: Self-supervised Relational Feature Learning for Open Relation Extraction [60.08464995629325]
Open-domain relation extraction is the task of extracting open-domain relation facts from natural language sentences.
We propose a self-supervised framework named SelfORE, which exploits weak, self-supervised signals.
Experimental results on three datasets show the effectiveness and robustness of SelfORE.
arXiv Detail & Related papers (2020-04-06T07:23:17Z)
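One common way to obtain weak, self-supervised signals for open relation extraction is to cluster contextual features and reuse the cluster assignments as pseudo-labels for a classifier; the sketch below illustrates that generic pattern with assumed components (scikit-learn K-means and logistic regression), not SelfORE's exact pipeline.

```python
# Hedged sketch: weak self-supervision by clustering contextual features into
# pseudo-labels and training a classifier on them. Components and loop
# structure are illustrative assumptions, not SelfORE's exact pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def self_supervised_round(features: np.ndarray, n_clusters: int = 10, seed: int = 0):
    """One round: cluster features -> pseudo-labels -> supervised classifier."""
    pseudo = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)
    clf = LogisticRegression(max_iter=1000).fit(features, pseudo)
    return clf, pseudo

# Iterating (re-encoding with the updated model, then re-clustering) is what
# would gradually sharpen the relational feature space across rounds.
feats = np.random.default_rng(0).normal(size=(500, 64))
clf, labels = self_supervised_round(feats)
```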
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.