ICLEA: Interactive Contrastive Learning for Self-supervised Entity
Alignment
- URL: http://arxiv.org/abs/2201.06225v1
- Date: Mon, 17 Jan 2022 06:04:00 GMT
- Title: ICLEA: Interactive Contrastive Learning for Self-supervised Entity
Alignment
- Authors: Kaisheng Zeng, Zhenhao Dong, Lei Hou, Yixin Cao, Minghao Hu, Jifan Yu,
Xin Lv, Juanzi Li, Ling Feng
- Abstract summary: Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.
The current SOTA self-supervised EA method draws inspiration from contrastive learning, originally designed in computer vision.
We propose an interactive contrastive learning model for self-supervised EA.
- Score: 27.449414854756913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised entity alignment (EA) aims to link equivalent entities across
different knowledge graphs (KGs) without seed alignments. The current SOTA
self-supervised EA method draws inspiration from contrastive learning,
originally designed in computer vision based on instance discrimination and
contrastive loss, and suffers from two shortcomings. Firstly, it puts
unidirectional emphasis on pushing sampled negative entities far away rather
than pulling positively aligned pairs close, as is done in the well-established
supervised EA. Secondly, KGs contain rich side information (e.g., entity
descriptions), and how to effectively leverage that information has not been
adequately investigated in self-supervised EA. In this paper, we propose an
interactive contrastive learning model for self-supervised EA. The model
encodes not only structures and semantics of entities (including entity name,
entity description, and entity neighborhood), but also conducts cross-KG
contrastive learning by building pseudo-aligned entity pairs. Experimental
results show that our approach outperforms previous best self-supervised
results by a large margin (over 9% average improvement) and performs on par
with previous SOTA supervised counterparts, demonstrating the effectiveness of
the interactive contrastive learning for self-supervised EA.
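The abstract contrasts a negatives-only objective with one that also pulls pseudo-aligned cross-KG pairs together. A minimal sketch of such a symmetric contrastive objective is below, using a generic bidirectional InfoNCE loss over in-batch negatives; the function name, temperature value, and toy embeddings are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_kg_info_nce(z1, z2, temperature=0.08):
    """Bidirectional InfoNCE loss over pseudo-aligned entity pairs.

    z1, z2: (n, d) arrays of entity embeddings from KG1 and KG2, where
    row i of z1 and row i of z2 form a pseudo-aligned pair. Diagonal
    similarities act as positives (pulled close); off-diagonal entries
    act as in-batch negatives (pushed apart).
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (n, n) scaled cosine similarities

    def ce(mat):
        # Cross-entropy with targets on the diagonal, numerically stabilized.
        mat = mat - mat.max(axis=1, keepdims=True)
        log_probs = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Symmetric: align KG1 -> KG2 and KG2 -> KG1.
    return 0.5 * (ce(logits) + ce(logits.T))

# Toy usage: noisy copies stand in for pseudo-aligned cross-KG entities.
rng = np.random.default_rng(0)
z1 = rng.standard_normal((16, 32))
z2 = z1 + 0.1 * rng.standard_normal((16, 32))
aligned_loss = cross_kg_info_nce(z1, z2)
misaligned_loss = cross_kg_info_nce(z1, z2[::-1])  # deliberately scrambled pairs
```

Because the loss is computed in both directions and minimized only when matched pairs are mutually nearest, scrambling the pairing raises the loss, which is the "pull positives close" behavior the abstract argues a negatives-only objective lacks.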
Related papers
- Understanding and Guiding Weakly Supervised Entity Alignment with Potential Isomorphism Propagation [31.558938631213074]
We present a propagation perspective to analyze weakly supervised EA.
We show that aggregation-based EA models seek propagation operators for pairwise entity similarities.
We develop a general EA framework, PipEA, incorporating this operator to improve the accuracy of every type of aggregation-based model.
arXiv Detail & Related papers (2024-02-05T14:06:15Z)
- Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
arXiv Detail & Related papers (2023-07-07T04:03:48Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Uncovering the Inner Workings of STEGO for Safe Unsupervised Semantic Segmentation [68.8204255655161]
Self-supervised pre-training strategies have recently shown impressive results for training general-purpose feature extraction backbones in computer vision.
The DINO self-distillation technique has interesting emerging properties, such as unsupervised clustering in the latent space and semantic correspondences of the produced features without using explicit human-annotated labels.
The STEGO method for unsupervised semantic segmentation contrastively distills feature correspondences of a DINO-pre-trained Vision Transformer and recently set a new state of the art.
arXiv Detail & Related papers (2023-04-14T15:30:26Z)
- Dependency-aware Self-training for Entity Alignment [28.158354625969668]
Entity Alignment (EA) aims to detect entity mappings in different Knowledge Graphs (KGs).
EA methods dominate current EA research but still suffer from their reliance on labelled mappings.
We propose exploiting the dependencies between entities, a particularity of EA, to suppress the noise without hurting the recall of True Positive mappings.
arXiv Detail & Related papers (2022-11-29T11:24:14Z)
- Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning [64.78972193105443]
This paper presents a novel adversarial example (AE) detection framework that performs detection by distinguishing an AE's abnormal relations with its augmented versions.
An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the label.
arXiv Detail & Related papers (2022-08-31T08:18:44Z)
- Adversarial Dual-Student with Differentiable Spatial Warping for Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves performance and achieves state-of-the-art results on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z)
- SelfKG: Self-Supervised Entity Alignment in Knowledge Graphs [24.647609970140095]
We develop a self-supervised learning objective for entity alignment called SelfKG.
We show that SelfKG can match or achieve comparable results with state-of-the-art supervised baselines.
The performance of SelfKG suggests that self-supervised learning offers great potential for entity alignment in KGs.
arXiv Detail & Related papers (2022-03-02T11:40:37Z)
- ActiveEA: Active Learning for Neural Entity Alignment [31.212894129845093]
Entity alignment (EA) aims to match equivalent entities across different Knowledge Graphs (KGs).
Current mainstream methods -- neural EA models -- rely on training with seed alignment, i.e., a set of pre-aligned entity pairs.
We devise a novel Active Learning (AL) framework for neural EA, aiming to create highly informative seed alignment.
arXiv Detail & Related papers (2021-10-13T03:38:04Z)
- A Self-supervised Method for Entity Alignment [20.368788592613466]
Entity alignment is a fundamental problem for constructing large-scale knowledge graphs (KGs).
Inspired by the recent progress of self-supervised learning, we explore the extent to which we can get rid of supervision for entity alignment.
We present SelfKG by leveraging this discovery to design a contrastive learning strategy across two KGs.
arXiv Detail & Related papers (2021-06-17T11:22:20Z)
- Self-Attention Attribution: Interpreting Information Interactions Inside Transformer [89.21584915290319]
We propose a self-attention attribution method to interpret the information interactions inside Transformer.
We show that the attribution results can be used as adversarial patterns to implement non-targeted attacks towards BERT.
arXiv Detail & Related papers (2020-04-23T14:58:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.