Investigating Graph Structure Information for Entity Alignment with Dangling Cases
- URL: http://arxiv.org/abs/2304.04718v1
- Date: Mon, 10 Apr 2023 17:24:43 GMT
- Title: Investigating Graph Structure Information for Entity Alignment with Dangling Cases
- Authors: Jin Xu, Yangning Li, Xiangjin Xie, Yinghui Li, Niu Hu, Haitao Zheng, Yong Jiang
- Abstract summary: Entity alignment aims to discover the equivalent entities in different knowledge graphs (KGs).
We propose a novel entity alignment framework called Weakly-Optimal Graph Contrastive Learning (WOGCL).
We show that WOGCL outperforms the current state-of-the-art methods using pure structural information in both the traditional (relaxed) and dangling settings.
- Score: 31.779386064600956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity alignment (EA) aims to discover the equivalent entities in different knowledge graphs (KGs) and plays an important role in knowledge engineering. Recently, EA with dangling entities has been proposed as a more realistic setting, which assumes that not all entities have a corresponding equivalent entity. In this paper, we focus on this setting. Some work has explored this problem by leveraging translation APIs, pre-trained word embeddings, and other off-the-shelf tools, but these approaches over-rely on side information (e.g., entity names) and fail when such information is absent. Moreover, they still insufficiently exploit the most fundamental graph structure information in KGs. To improve the exploitation of structural information, we propose a novel entity alignment framework called Weakly-Optimal Graph Contrastive Learning (WOGCL), which is refined along three dimensions: (i) Model. We propose a novel Gated Graph Attention Network to capture local and global graph structure similarity. (ii) Training. Two learning objectives, contrastive learning and optimal transport learning, are designed to obtain distinguishable entity representations via the optimal transport plan. (iii) Inference. In the inference phase, a PageRank-based method is proposed to calculate higher-order structural similarity. Extensive experiments on two dangling benchmarks demonstrate that WOGCL outperforms the current state-of-the-art methods using pure structural information in both the traditional (relaxed) and dangling (consolidated) settings. The code will be made public soon.
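To make the three ingredients above concrete, here is a minimal, self-contained PyTorch sketch of (ii) the contrastive and optimal-transport training objectives and (iii) a PageRank-style higher-order similarity for inference. This is our illustrative reading of the abstract, not the (not-yet-released) WOGCL implementation; every function name and hyperparameter below is our own assumption, and the gated graph attention encoder is omitted.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(src, tgt, tau=0.1):
    # InfoNCE over seed alignments: row i of `src` should match row i of `tgt`.
    src, tgt = F.normalize(src, dim=-1), F.normalize(tgt, dim=-1)
    logits = src @ tgt.T / tau                      # (n, n) pairwise similarities
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)

def sinkhorn_plan(cost, n_iters=50, eps=0.05):
    # Entropy-regularised optimal transport between uniform marginals.
    # (Illustrative only; a log-domain Sinkhorn is more numerically stable.)
    n, m = cost.shape
    K = torch.exp(-cost / eps)
    a, b = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    v = torch.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u.unsqueeze(1) * K * v.unsqueeze(0)      # transport plan

def ot_loss(src, tgt):
    # Encourage embeddings whose transport plan concentrates on cheap matches.
    cost = torch.cdist(src, tgt, p=2)
    plan = sinkhorn_plan(cost.detach())             # plan used as a soft target
    return (plan * cost).sum()

def ppr_similarity(adj_src, adj_tgt, sim, alpha=0.15, n_iters=20):
    # Personalised-PageRank-style smoothing of a pairwise similarity matrix
    # over both graphs, standing in for the paper's higher-order inference.
    P_s = adj_src / adj_src.sum(1, keepdim=True).clamp(min=1)
    P_t = adj_tgt / adj_tgt.sum(1, keepdim=True).clamp(min=1)
    out = sim
    for _ in range(n_iters):
        out = alpha * sim + (1 - alpha) * P_s @ out @ P_t.T
    return out
```

A training step would combine the two losses on embeddings produced by the encoder, and inference would rank candidate pairs by the smoothed similarity matrix.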
Related papers
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
arXiv Detail & Related papers (2024-09-09T12:56:02Z)
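As a rough, hedged sketch of the two steps this summary names, the snippet below sparsifies a neighborhood by edge score and then contrasts each node against the mean of its remaining neighbors, with a plain MLP encoder and no message passing. The encoder sizes, scoring, and loss form are our assumptions, not the GSSC paper's.

```python
import torch
import torch.nn.functional as F

# Stand-in MLP encoder: structure enters only through the loss, never via message passing.
mlp = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 64))

def sparsify(edge_index, edge_scores, keep_ratio=0.5):
    # Structural sparsification: keep only the highest-scoring fraction of edges.
    k = max(1, int(keep_ratio * edge_scores.numel()))
    return edge_index[:, edge_scores.topk(k).indices]

def self_contrast_loss(x, edge_index, tau=0.2):
    # Contrast each node with the mean embedding of its sparsified neighbourhood.
    z = F.normalize(mlp(x), dim=-1)
    src, dst = edge_index
    neigh = torch.zeros_like(z).index_add_(0, src, z[dst])
    deg = torch.zeros(z.size(0)).index_add_(0, src, torch.ones(src.size(0)))
    neigh = F.normalize(neigh / deg.clamp(min=1).unsqueeze(-1), dim=-1)
    logits = z @ neigh.T / tau                      # positives on the diagonal
    return F.cross_entropy(logits, torch.arange(z.size(0)))
```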
- DERA: Dense Entity Retrieval for Entity Alignment in Knowledge Graphs [3.500936203815729]
We propose a dense entity retrieval framework for Entity Alignment (EA).
We leverage language models to uniformly encode various features of entities and facilitate nearest entity search across Knowledge Graphs (KGs).
Our approach achieves state-of-the-art performance compared to existing EA methods.
arXiv Detail & Related papers (2024-08-02T10:12:42Z)
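A hedged sketch of the retrieval recipe this entry describes: serialise each entity's features into text, encode with an off-the-shelf sentence encoder, and align by nearest-neighbour search. The model name and the feature serialisation format are our choices, not necessarily DERA's.

```python
import torch
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence encoder would do

def encode_entities(descriptions):
    # `descriptions` serialise entity features, e.g. "name | relations | attributes".
    return torch.tensor(encoder.encode(descriptions, normalize_embeddings=True))

src = encode_entities(["Paris | capital_of France", "Rhine | flows_through Germany"])
tgt = encode_entities(["Paris (ville) | capitale_de France", "Rhin | traverse Allemagne"])

# With normalised embeddings, cosine similarity is a dot product; each source
# entity is aligned to its nearest target entity across the two KGs.
alignment = (src @ tgt.T).argmax(dim=1)             # ideally tensor([0, 1]) here
```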
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
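The core move, as summarised, is prompting an LLM to turn a bare triple into a context-rich passage that then supervises a smaller KGC model. Below is a hedged sketch; the prompt wording and the `llm_generate` callable are hypothetical placeholders, not the paper's actual prompts.

```python
def contextualize(triple, llm_generate):
    # Turn a compact, structural triple into a context-rich segment via an LLM.
    head, relation, tail = triple
    prompt = (
        "Rewrite the knowledge-graph triple below as a short factual paragraph "
        "that makes the relation explicit.\n"
        f"Triple: ({head}, {relation}, {tail})\nParagraph:"
    )
    return llm_generate(prompt)                     # hypothetical LLM callable

example = ("Marie Curie", "award_received", "Nobel Prize in Physics")
# passage = contextualize(example, llm_generate=some_llm)
# The passage would then serve as an auxiliary distillation target when
# training a discriminative or generative KGC model on the original triples.
```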
- MoCoSA: Momentum Contrast for Knowledge Graph Completion with Structure-Augmented Pre-trained Language Models [11.57782182864771]
We propose Momentum Contrast for knowledge graph completion with Structure-Augmented pre-trained language models (MoCoSA).
Our approach achieves state-of-the-art performance in terms of mean reciprocal rank (MRR), with improvements of 2.5% on WN18RR and 21% on OpenBG500.
arXiv Detail & Related papers (2023-08-16T08:09:10Z)
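The summary names momentum contrast, so here is a hedged MoCo-style skeleton: a query encoder trained normally, a key encoder updated as its exponential moving average, and a queue of negatives. The structure-augmented PLM internals are elided; both encoders are stand-in linear layers and all sizes are our assumptions.

```python
import torch
import torch.nn.functional as F

q_enc = torch.nn.Linear(128, 64)                   # query encoder (trained)
k_enc = torch.nn.Linear(128, 64)                   # key encoder (EMA copy)
k_enc.load_state_dict(q_enc.state_dict())
queue = F.normalize(torch.randn(4096, 64), dim=-1) # queue of negative keys

@torch.no_grad()
def momentum_update(m=0.999):
    # Key encoder drifts slowly towards the query encoder (momentum contrast).
    for pq, pk in zip(q_enc.parameters(), k_enc.parameters()):
        pk.mul_(m).add_(pq, alpha=1 - m)

def moco_loss(x_q, x_k, tau=0.07):
    q = F.normalize(q_enc(x_q), dim=-1)
    with torch.no_grad():
        k = F.normalize(k_enc(x_k), dim=-1)        # keys carry no gradient
    l_pos = (q * k).sum(-1, keepdim=True)          # (B, 1) positive logits
    l_neg = q @ queue.T                            # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)         # in practice, also enqueue k
```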
- VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion [14.537509860565706]
We propose VEM$^2$L, a plug-and-play framework over sparse Knowledge Graphs that fuses knowledge extracted from text and from structural messages into a unified representation.
Specifically, we partition the knowledge acquired by the models into two non-overlapping parts.
We also propose a new fusion strategy, derived from the Variational EM algorithm, to fuse the generalization abilities of the models.
arXiv Detail & Related papers (2022-07-04T15:50:21Z)
- Compact Graph Structure Learning via Mutual Information Compression [79.225671302689]
Graph Structure Learning (GSL) has attracted considerable attention for its capacity to optimize graph structure and learn the parameters of Graph Neural Networks (GNNs).
We propose a compact GSL architecture based on mutual information (MI) compression, named CoGSL.
We conduct extensive experiments on several datasets under clean and attacked conditions, which demonstrate the effectiveness and robustness of CoGSL.
arXiv Detail & Related papers (2022-01-14T16:22:33Z)
- RAGA: Relation-aware Graph Attention Networks for Global Entity Alignment [14.287681294725438]
We propose a novel framework based on Relation-aware Graph Attention Networks to capture the interactions between entities and relations.
Our framework adopts the self-attention mechanism to spread entity information to the relations and then aggregate relation information back to entities.
arXiv Detail & Related papers (2021-03-01T06:30:51Z)
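A hedged sketch of the entity-to-relation-and-back flow this summary describes: relation embeddings are built as gated mixes of their head and tail entities, and entities are then enriched with the relations they touch. The gating, shapes, and aggregation are our simplifications of RAGA's actual attention layers.

```python
import torch

def relations_from_entities(ent, triples, att):
    # `triples` is a (T, 3) long tensor of (head, relation, tail) indices;
    # `att` is a torch.nn.Linear(2d -> 1) scoring head/tail entity pairs.
    h, r, t = triples.T
    w = torch.sigmoid(att(torch.cat([ent[h], ent[t]], dim=-1)))   # (T, 1) gates
    n_rel, d = int(r.max()) + 1, ent.size(1)
    num = torch.zeros(n_rel, d).index_add_(0, r, w * (ent[h] + ent[t]))
    den = torch.zeros(n_rel, 1).index_add_(0, r, w)
    return num / den.clamp(min=1e-6)                # relation embeddings

def entities_from_relations(ent, rel, triples):
    # Aggregate incident-relation embeddings back onto entities, then concatenate.
    h, r, t = triples.T
    out, cnt = torch.zeros_like(ent), torch.zeros(ent.size(0), 1)
    for idx in (h, t):                              # entity appears as head or tail
        out = out.index_add(0, idx, rel[r])
        cnt = cnt.index_add(0, idx, torch.ones(idx.size(0), 1))
    return torch.cat([ent, out / cnt.clamp(min=1)], dim=-1)
```

A full model would stack such layers and train the resulting entity representations with alignment supervision.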
- Towards Entity Alignment in the Open World: An Unsupervised Approach [29.337157862514204]
Entity alignment is a pivotal step for integrating knowledge graphs (KGs) to increase knowledge coverage and quality.
State-of-the-art solutions tend to rely on labeled data for model training.
We offer an unsupervised framework that performs entity alignment in the open world.
arXiv Detail & Related papers (2021-01-26T03:10:24Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), the Graph Information Bottleneck (GIB) aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
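As a hedged illustration of the "minimal sufficient representation" objective, the loss below pairs a task term (sufficiency) with a KL compression term (minimality) under a Gaussian variational posterior. GIB itself bounds the information flowing from both graph structure and node features; this is a deliberate simplification.

```python
import torch
import torch.nn.functional as F

def sample_z(mu, logvar):
    # Reparameterisation trick keeps the compression term differentiable.
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def ib_loss(mu, logvar, logits, labels, beta=1e-3):
    # Sufficiency: predict the task well (a lower bound on I(Z; Y)).
    task = F.cross_entropy(logits, labels)
    # Minimality: stay close to a standard-normal prior (upper-bounds I(Z; input)).
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return task + beta * kl
```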
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks, but they are usually incomplete, which calls for their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to a graph triple's text and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
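To make the structured side of this contrast concrete, here is a hedged TransE sketch: a triple scores well when tail ≈ head + relation in embedding space, trained with a margin between true and corrupted triples. The sizes are toy values; textual encoders like KG-BERT would instead feed the triple's surface text through a pre-trained language model.

```python
import torch
import torch.nn.functional as F

ent = torch.nn.Embedding(1000, 64)   # entity embeddings (toy sizes)
rel = torch.nn.Embedding(100, 64)    # relation embeddings

def transe_score(h, r, t):
    # Lower is better: distance between the translated head and the tail.
    return (ent(h) + rel(r) - ent(t)).norm(p=1, dim=-1)

def margin_loss(pos, neg, margin=1.0):
    # Rank true triples above corrupted ones by at least `margin`.
    return F.relu(margin + transe_score(*pos) - transe_score(*neg)).mean()
```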
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
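A hedged sketch of an entity-level masking scheme in the spirit of this last entry: instead of masking random subwords, every token of a KG-linked mention is masked, so the model must recover whole entities from context. The span format and tokenizer-free setup are our simplifications.

```python
def mask_entity_spans(tokens, entity_spans, mask_token="[MASK]"):
    # `entity_spans` lists (start, end) token indices of linked entity mentions.
    masked, labels = list(tokens), [None] * len(tokens)
    for start, end in entity_spans:
        for i in range(start, end):
            labels[i] = masked[i]                  # predict only masked positions
            masked[i] = mask_token
    return masked, labels

tokens = ["Marie", "Curie", "won", "the", "Nobel", "Prize"]
masked, labels = mask_entity_spans(tokens, [(0, 2), (4, 6)])
# masked -> ['[MASK]', '[MASK]', 'won', 'the', '[MASK]', '[MASK]']
```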
This list is automatically generated from the titles and abstracts of the papers on this site.