Understanding and Guiding Weakly Supervised Entity Alignment with Potential Isomorphism Propagation
- URL: http://arxiv.org/abs/2402.03025v2
- Date: Sat, 12 Oct 2024 16:28:32 GMT
- Title: Understanding and Guiding Weakly Supervised Entity Alignment with Potential Isomorphism Propagation
- Authors: Yuanyi Wang, Wei Tang, Haifeng Sun, Zirui Zhuang, Xiaoyuan Fu, Jingyu Wang, Qi Qi, Jianxin Liao
- Abstract summary: We present a propagation perspective to analyze weakly supervised EA.
We show that aggregation-based EA models seek propagation operators for pairwise entity similarities.
We develop a general EA framework, PipEA, incorporating this operator to improve the accuracy of every type of aggregation-based model.
- Abstract: Weakly Supervised Entity Alignment (EA) is the task of identifying equivalent entities across diverse knowledge graphs (KGs) using only a limited number of seed alignments. Despite substantial advances in aggregation-based weakly supervised EA, the underlying mechanisms in this setting remain unexplored. In this paper, we present a propagation perspective to analyze weakly supervised EA and explain the existing aggregation-based EA models. Our theoretical analysis reveals that these models essentially seek propagation operators for pairwise entity similarities. We further prove that, despite the structural heterogeneity of different KGs, the potentially aligned entities within aggregation-based EA models have isomorphic subgraphs, which is the core premise of EA but has not been investigated. Leveraging this insight, we introduce a potential isomorphism propagation operator to enhance the propagation of neighborhood information across KGs. We develop a general EA framework, PipEA, incorporating this operator to improve the accuracy of every type of aggregation-based model without altering the learning process. Extensive experiments substantiate our theoretical findings and demonstrate PipEA's significant performance gains over state-of-the-art weakly supervised EA methods. Our work not only advances the field but also enhances our comprehension of aggregation-based weakly supervised EA.
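The propagation view in the abstract can be sketched as a simple iteration: pairwise entity similarities are repeatedly mixed with their neighborhood aggregation across the two KGs, so potentially aligned entities reinforce each other through aligned neighbors. The NumPy sketch below is a minimal illustration of this idea; the adjacency matrices, damping factor, and toy graphs are assumptions for exposition, not PipEA's actual operator:

```python
import numpy as np

def propagate_similarity(A1, A2, S, alpha=0.5, iters=10):
    """Mix pairwise similarities S with their cross-KG neighborhood
    aggregation P1 @ S @ P2.T, so candidate alignments reinforce each
    other through (potentially isomorphic) neighbor pairs."""
    # Row-normalize adjacency so each step is a weighted average.
    P1 = A1 / np.maximum(A1.sum(axis=1, keepdims=True), 1)
    P2 = A2 / np.maximum(A2.sum(axis=1, keepdims=True), 1)
    for _ in range(iters):
        S = alpha * S + (1 - alpha) * P1 @ S @ P2.T
    return S

# Two toy 3-entity KGs with identical structure (a path 0-1-2).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
S = propagate_similarity(A, A, np.eye(3))  # seed alignments on the diagonal
print(np.argmax(S, axis=1))  # -> [0 1 2]: each entity keeps its match
```

Because the two toy graphs are isomorphic, propagation preserves the seed matching; with heterogeneous graphs, the mixing weight `alpha` controls how much the seeds are trusted over structure.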
Related papers
- Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer [59.43462055143123]
The Mixture of Experts (MoE) has emerged as a highly successful technique in deep learning.
In this study, we shed light on the homogeneous representation problem, wherein experts in the MoE fail to specialize and lack diversity.
We propose an alternating training strategy that encourages each expert to update in a direction orthogonal to the subspace spanned by the other experts.
arXiv Detail & Related papers (2023-10-15T07:20:28Z) - What Makes Entities Similar? A Similarity Flooding Perspective for Multi-sourced Knowledge Graph Embeddings [20.100378168629195]
We provide a similarity flooding perspective to explain existing translation-based and aggregation-based EA models.
We prove that the embedding learning process of these models actually seeks a fixpoint of pairwise similarities between entities.
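The fixpoint claim can be illustrated with a damped similarity-flooding iteration: with row-stochastic propagation matrices and a damping factor below 1, the update is a max-norm contraction, so it converges to a unique fixpoint. A hedged sketch (the matrix names and parameters are assumptions, not the paper's exact formulation):

```python
import numpy as np

def similarity_fixpoint(P1, P2, S_seed, alpha=0.9, tol=1e-9, max_iter=500):
    """Iterate S <- (1 - alpha) * S_seed + alpha * P1 @ S @ P2.T.
    With row-stochastic P1, P2 and alpha < 1 the update is a
    max-norm contraction, so it converges to a unique fixpoint."""
    S = S_seed.copy()
    for _ in range(max_iter):
        S_next = (1 - alpha) * S_seed + alpha * (P1 @ S @ P2.T)
        if np.abs(S_next - S).max() < tol:
            break
        S = S_next
    return S_next

# Row-stochastic propagation matrix of a 3-node path graph.
P = np.array([[0.0, 1.0, 0.0], [0.5, 0.0, 0.5], [0.0, 1.0, 0.0]])
S_star = similarity_fixpoint(P, P, np.eye(3))
# S_star satisfies the update equation up to the tolerance.
```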
arXiv Detail & Related papers (2023-06-05T06:50:09Z) - Improving Knowledge Graph Entity Alignment with Graph Augmentation [11.1094009195297]
Entity alignment (EA) which links equivalent entities across different knowledge graphs (KGs) plays a crucial role in knowledge fusion.
In recent years, graph neural networks (GNNs) have been successfully applied in many embedding-based EA methods.
We propose graph augmentation to create two graph views for margin-based alignment learning and contrastive entity representation learning.
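A common way to create two graph views for contrastive learning is random edge dropping; the sketch below is a generic illustration of that idea, not necessarily the augmentation the paper uses:

```python
import numpy as np

def edge_drop_views(edges, p=0.2, seed=0):
    """Drop each edge independently with probability p, twice, to get
    two stochastic views of the same graph for contrastive learning."""
    rng = np.random.default_rng(seed)
    keep1 = rng.random(len(edges)) >= p
    keep2 = rng.random(len(edges)) >= p
    return edges[keep1], edges[keep2]

# A toy edge list: 10 edges over entity ids 0..19.
edges = np.arange(20).reshape(10, 2)
view1, view2 = edge_drop_views(edges)
```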
arXiv Detail & Related papers (2023-04-28T01:22:47Z) - Toward Practical Entity Alignment Method Design: Insights from New Highly Heterogeneous Knowledge Graph Datasets [32.68422342604253]
We study the performance of entity alignment (EA) methods in practical settings, specifically focusing on the alignment of highly heterogeneous KGs (HHKGs).
Our findings reveal that, in aligning HHKGs, valuable structure information can hardly be exploited through message-passing and aggregation mechanisms.
These findings shed light on the potential problems associated with the conventional application of GNN-based methods as a panacea for all EA datasets.
arXiv Detail & Related papers (2023-04-07T04:10:26Z) - Be Your Own Neighborhood: Detecting Adversarial Example by the Neighborhood Relations Built on Self-Supervised Learning [64.78972193105443]
This paper presents a novel AE detection framework aimed at trustworthy predictions.
It performs detection by distinguishing an AE's abnormal relations with its augmented versions.
An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the label.
arXiv Detail & Related papers (2022-08-31T08:18:44Z) - ICLEA: Interactive Contrastive Learning for Self-supervised Entity Alignment [27.449414854756913]
Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments.
The current SOTA self-supervised EA method draws inspiration from contrastive learning, originally designed in computer vision.
We propose an interactive contrastive learning model for self-supervised EA.
arXiv Detail & Related papers (2022-01-17T06:04:00Z) - Multi-task Learning of Order-Consistent Causal Graphs [59.9575145128345]
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs).
Under a multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models.
We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order.
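The group penalty at the heart of such a joint estimator can be sketched directly: take the l2 norm of each edge's coefficients across the K tasks, then sum (l1) over edges, which pushes the tasks toward a shared sparsity pattern and hence a common causal order. A minimal illustration (function and variable names are assumptions):

```python
import numpy as np

def l1_l2_penalty(Bs):
    """l1/l2 group penalty over K coefficient matrices of linear SEMs:
    l2 norm of each edge's coefficients across tasks, summed (l1)
    over edges, so tasks share a sparsity pattern."""
    stacked = np.stack(Bs)                       # shape (K, d, d)
    return np.linalg.norm(stacked, axis=0).sum()

# Two tasks with identical support: penalty equals 2 * sqrt(2).
print(l1_l2_penalty([np.eye(2), np.eye(2)]))
```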
arXiv Detail & Related papers (2021-11-03T22:10:18Z) - GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without introducing additional inductive biases.
We address VAE-based unsupervised disentanglement by leveraging the constraints derived from the Group Theory based definition as the non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z) - Generalization Properties of Optimal Transport GANs with Latent
Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
arXiv Detail & Related papers (2020-07-29T07:31:33Z) - Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
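Decomposing pre-trained weights in closed form can be sketched as an eigendecomposition: the top eigenvectors of W^T W are the latent directions along which the layer's output changes the most. A hedged NumPy sketch in the spirit of such factorization (the layer shape and k are illustrative assumptions):

```python
import numpy as np

def closed_form_directions(W, k=3):
    """Return the k latent directions along which the linear map W
    changes its output the most: the top eigenvectors of W.T @ W."""
    _, eigvecs = np.linalg.eigh(W.T @ W)  # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]        # reorder to take the top k

# Hypothetical first-layer weight: 16-dim latent -> 64-dim features.
W = np.random.default_rng(0).normal(size=(64, 16))
directions = closed_form_directions(W)   # shape (16, 3), unit-norm columns
```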
arXiv Detail & Related papers (2020-07-13T18:05:36Z) - Degree-Aware Alignment for Entities in Tail [11.153455121529236]
We propose a novel framework for entity alignment (EA).
We identify an entity's degree as important guidance to effectively fuse two different sources of information.
For post-alignment, we propose to complement original KGs with facts from their counterparts by using confident EA results as anchors.
arXiv Detail & Related papers (2020-05-25T14:15:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.