A Self-supervised Method for Entity Alignment
- URL: http://arxiv.org/abs/2106.09395v1
- Date: Thu, 17 Jun 2021 11:22:20 GMT
- Title: A Self-supervised Method for Entity Alignment
- Authors: Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov,
Yuxiao Dong, Jie Tang
- Abstract summary: Entity alignment is a fundamental problem for constructing large-scale knowledge graphs (KGs).
Inspired by the recent progress of self-supervised learning, we explore the extent to which we can get rid of supervision for entity alignment.
Our analysis suggests that alignment benefits more from pushing sampled (unlabeled) negatives far apart than from pulling labeled positive pairs close; we present SelfKG, which leverages this discovery to design a contrastive learning strategy across two KGs.
- Score: 20.368788592613466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Entity alignment, aiming to identify equivalent entities across different
knowledge graphs (KGs), is a fundamental problem for constructing large-scale
KGs. Over the course of its development, supervision has been considered
necessary for accurate alignments. Inspired by the recent progress of
self-supervised learning, we explore the extent to which we can get rid of
supervision for entity alignment. Existing supervised methods for this task
focus on pulling each pair of positive (labeled) entities close to each other.
However, our analysis suggests that the learning of entity alignment can
actually benefit more from pushing sampled (unlabeled) negatives far away than
pulling positive aligned pairs close. We present SelfKG by leveraging this
discovery to design a contrastive learning strategy across two KGs. Extensive
experiments on benchmark datasets demonstrate that SelfKG without supervision
can match or achieve results comparable to state-of-the-art supervised
baselines. The performance of SelfKG demonstrates that self-supervised learning
offers great potential for entity alignment in KGs.
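The following is a minimal sketch of the kind of contrastive objective the abstract describes, in which most of the learning signal comes from the sampled negatives rather than the single positive pair. The function name, the temperature value, and the uniform negative sampling are illustrative assumptions, not the exact SelfKG implementation.

```python
# Minimal sketch (not the authors' exact code) of an InfoNCE-style contrastive
# loss for aligning entity embeddings across two KGs; the negative term supplies
# most of the gradient, which is the effect the abstract highlights.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(anchor, positive, negatives, tau=0.08):
    # anchor, positive: (B, d) embeddings of presumed-equivalent entities from the
    # two KGs (or two augmented views when no labeled pairs are available);
    # negatives: (N, d) embeddings sampled from other entities.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True) / tau  # (B, 1)
    neg_sim = anchor @ negatives.t() / tau                         # (B, N)

    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for the outputs of KG entity encoders.
loss = contrastive_alignment_loss(
    torch.randn(32, 128), torch.randn(32, 128), torch.randn(1024, 128)
)
```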
Related papers
- One-bit Supervision for Image Classification: Problem, Solution, and
Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision.
arXiv Detail & Related papers (2023-11-26T07:39:00Z) - Towards Distribution-Agnostic Generalized Category Discovery [51.52673017664908]
Data imbalance and open-ended distribution are intrinsic characteristics of the real visual world.
We propose a Self-Balanced Co-Advice contrastive framework (BaCon).
BaCon consists of a contrastive-learning branch and a pseudo-labeling branch, working collaboratively to provide interactive supervision to resolve the distribution-agnostic generalized category discovery (DA-GCD) task.
arXiv Detail & Related papers (2023-10-02T17:39:58Z) - Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z) - Deep Active Alignment of Knowledge Graph Entities and Schemata [20.100378168629195]
We propose a new KG alignment approach, called DAAKG, based on deep learning and active learning.
With deep learning, it learns the embeddings of entities, relations and classes, and jointly aligns them in a semi-supervised manner.
With active learning, it estimates how likely an entity, relation or class pair can be inferred, and selects the best batch for human labeling.
arXiv Detail & Related papers (2023-04-10T05:31:24Z) - Hyperspherical Consistency Regularization [45.00073340936437]
We explore the relationship between self-supervised learning and supervised learning, and study how self-supervised learning helps robust data-efficient deep learning.
We propose hyperspherical consistency regularization (HCR), a simple yet effective plug-and-play method, to regularize the classifier using feature-dependent information and thus avoid bias from labels.
arXiv Detail & Related papers (2022-06-02T02:41:13Z) - SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs [24.647609970140095]
We develop a self-supervised learning objective for entity alignment called SelfKG.
We show that SelfKG can match or achieve results comparable to state-of-the-art supervised baselines.
The performance of SelfKG suggests that self-supervised learning offers great potential for entity alignment in KGs.
arXiv Detail & Related papers (2022-03-02T11:40:37Z) - ICLEA: Interactive Contrastive Learning for Self-supervised Entity
Alignment [27.449414854756913]
Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.
The current SOTA self-supervised EA method draws inspiration from contrastive learning, originally designed for computer vision.
We propose an interactive contrastive learning model for self-supervised EA.
arXiv Detail & Related papers (2022-01-17T06:04:00Z) - Can Semantic Labels Assist Self-Supervised Visual Representation
Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z) - Inductive Learning on Commonsense Knowledge Graph Completion [89.72388313527296]
A commonsense knowledge graph (CKG) is a special type of knowledge graph (KG) whose entities are composed of free-form text.
We propose to study the inductive learning setting for CKG completion, where unseen entities may be present at test time.
InductivE significantly outperforms state-of-the-art baselines in both standard and inductive settings on ATOMIC and ConceptNet benchmarks.
arXiv Detail & Related papers (2020-09-19T16:10:26Z) - Cross-lingual Entity Alignment with Incidental Supervision [76.66793175159192]
We propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme.
Experiments on benchmark datasets show that JEANS leads to promising improvement on entity alignment with incidental supervision.
arXiv Detail & Related papers (2020-05-01T01:53:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.