SARF: Aliasing Relation Assisted Self-Supervised Learning for Few-shot
Relation Reasoning
- URL: http://arxiv.org/abs/2304.10297v1
- Date: Thu, 20 Apr 2023 13:24:59 GMT
- Title: SARF: Aliasing Relation Assisted Self-Supervised Learning for Few-shot
Relation Reasoning
- Authors: Lingyuan Meng, Ke Liang, Bin Xiao, Sihang Zhou, Yue Liu, Meng Liu,
Xihong Yang, Xinwang Liu
- Abstract summary: Few-shot relation reasoning on knowledge graphs (FS-KGR) aims to infer long-tail data-poor relations.
We propose a novel Self-Supervised Learning model, termed SARF, that leverages Aliasing Relations to assist FS-KGR.
- Score: 43.59319243928048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot relation reasoning on knowledge graphs (FS-KGR) aims to infer
long-tail, data-poor relations and has drawn increasing attention in recent years
due to its practicality. Previous methods require manually constructing the
meta-relation set for pre-training, which incurs substantial labor costs.
Self-supervised learning (SSL) is regarded as a solution to this issue, but it is
still at an early stage for the FS-KGR task. Moreover, most existing methods fail
to leverage the beneficial information in aliasing relations (AR), i.e., data-rich
relations whose contextual semantics are similar to those of the target data-poor
relation. We therefore propose a novel Self-Supervised Learning model that
leverages Aliasing Relations to assist FS-KGR, termed SARF. Concretely, our model
comprises four main components: an SSL reasoning module, an AR-assisted mechanism,
a fusion module, and a scoring function. We first generate the representation of
the co-occurrence patterns in a generative manner. Meanwhile, the representations
of aliasing relations are learned to enhance reasoning in the AR-assisted
mechanism. Multiple fusion strategies, i.e., simple summation and learnable
fusion, are offered for representation fusion. Finally, the fused representation
is used for scoring. Extensive experiments on three few-shot benchmarks
demonstrate that SARF achieves state-of-the-art performance compared with other
methods in most cases.
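The two fusion strategies named in the abstract (simple summation and learnable fusion) can be illustrated with a minimal sketch. This is our own illustration, not the paper's implementation: the variable names (`h_ssl` for the SSL reasoning module's output, `h_ar` for the AR-assisted mechanism's output) and the sigmoid-gated form of the learnable fusion are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # embedding dimension (illustrative only)

# Hypothetical representations: h_ssl from the SSL reasoning module,
# h_ar from the AR-assisted mechanism (names are ours, not the paper's).
h_ssl = rng.normal(size=dim)
h_ar = rng.normal(size=dim)

def fuse_sum(h_ssl, h_ar):
    """Simple summation fusion: element-wise addition."""
    return h_ssl + h_ar

def fuse_learnable(h_ssl, h_ar, w):
    """One plausible learnable fusion: a scalar sigmoid gate
    g = sigmoid(w . [h_ssl; h_ar]) mixes the two representations."""
    g = 1.0 / (1.0 + np.exp(-np.dot(w, np.concatenate([h_ssl, h_ar]))))
    return g * h_ssl + (1.0 - g) * h_ar

# Gate parameters would be learned jointly with the model in practice.
w = rng.normal(size=2 * dim)
fused_sum = fuse_sum(h_ssl, h_ar)
fused_gate = fuse_learnable(h_ssl, h_ar, w)
```

Both strategies keep the fused vector in the same dimension as the inputs, so it can be passed directly to the scoring function.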
Related papers
- High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
We develop a few-shot segmentation (FSS) framework based on foundation models.
To be specific, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
Experiments on two widely used datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-10T08:04:11Z)
- Relation Extraction with Fine-Tuned Large Language Models in Retrieval Augmented Generation Frameworks [0.0]
Relation Extraction (RE) is crucial for converting unstructured data into structured formats like Knowledge Graphs (KGs).
Recent studies leveraging pre-trained language models (PLMs) have shown significant success in this area.
This work explores the performance of fine-tuned LLMs and their integration into a Retrieval Augmented Generation (RAG)-based RE approach.
arXiv Detail & Related papers (2024-06-20T21:27:57Z)
- Methods for Recovering Conditional Independence Graphs: A Survey [2.2721854258621064]
Conditional Independence (CI) graphs are used to gain insights about feature relationships.
We list out different methods and study the advances in techniques developed to recover CI graphs.
arXiv Detail & Related papers (2022-11-13T06:11:38Z)
- Improving Long Tailed Document-Level Relation Extraction via Easy Relation Augmentation and Contrastive Learning [66.83982926437547]
We argue that mitigating the long-tailed distribution problem is crucial for DocRE in the real-world scenario.
Motivated by the long-tailed distribution problem, we propose an Easy Relation Augmentation (ERA) method for improving DocRE.
arXiv Detail & Related papers (2022-05-21T06:15:11Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Exploring Task Difficulty for Few-Shot Relation Extraction [22.585574542329677]
Few-shot relation extraction (FSRE) focuses on recognizing novel relations by learning with merely a handful of annotated instances.
We introduce a novel approach based on contrastive learning that learns better representations by exploiting relation label information.
arXiv Detail & Related papers (2021-09-12T09:40:33Z)
- Generative Relation Linking for Question Answering over Knowledge Bases [12.778133758613773]
We propose a novel approach for relation linking, framing it as a generative problem.
We extend such sequence-to-sequence models with the idea of infusing structured data from the target knowledge base.
We train the model with the aim to generate a structured output consisting of a list of argument-relation pairs, enabling a knowledge validation step.
arXiv Detail & Related papers (2021-08-16T20:33:43Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.