RAGA: Relation-aware Graph Attention Networks for Global Entity
Alignment
- URL: http://arxiv.org/abs/2103.00791v1
- Date: Mon, 1 Mar 2021 06:30:51 GMT
- Title: RAGA: Relation-aware Graph Attention Networks for Global Entity
Alignment
- Authors: Renbo Zhu, Meng Ma, Ping Wang
- Abstract summary: We propose a novel framework based on Relation-aware Graph Attention Networks to capture the interactions between entities and relations.
Our framework adopts the self-attention mechanism to spread entity information to the relations and then aggregate relation information back to entities.
- Score: 14.287681294725438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity alignment (EA) is the task of discovering entities that refer to the same
real-world object from different knowledge graphs (KGs), which is the most
crucial step in integrating multi-source KGs. The majority of the existing
embedding-based entity alignment methods embed entities and relations into a
vector space based on relation triples of KGs for local alignment. As these
methods insufficiently consider the multiple relations between entities, the
structure information of KGs has not been fully leveraged. In this paper, we
propose a novel framework based on Relation-aware Graph Attention Networks to
capture the interactions between entities and relations. Our framework adopts
the self-attention mechanism to spread entity information to the relations and
then aggregate relation information back to entities. Furthermore, we propose a
global alignment algorithm to make one-to-one entity alignments with a
fine-grained similarity matrix. Experiments on three real-world cross-lingual
datasets show that our framework outperforms the state-of-the-art methods.
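As an illustration of the two core ideas, the sketch below shows (a) one pass of spreading entity information to relations and aggregating relation information back to entities, and (b) turning a fine-grained similarity matrix into one-to-one alignments. It is a minimal sketch under simplifying assumptions (toy random embeddings, a single hand-rolled attention pass, and a Hungarian assignment as the global matcher), not the paper's exact model or alignment algorithm.

```python
# Minimal sketch, not the paper's model: entity -> relation -> entity propagation with a
# toy attention pass, then one-to-one alignment from a fine-grained similarity matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# Toy KG: triples are (head, relation, tail) index tuples.
num_ent, num_rel, dim = 5, 3, 8
triples = [(0, 0, 1), (1, 1, 2), (3, 2, 4), (0, 2, 3)]
E = rng.normal(size=(num_ent, dim))          # initial entity embeddings
attn_vec = rng.normal(size=2 * dim)          # toy attention parameters

# Step 1: spread entity information to relations.
# A relation is represented by an attention-weighted mix of the head/tail embeddings
# of the triples it participates in.
R = np.zeros((num_rel, dim))
for r in range(num_rel):
    pairs = [np.concatenate([E[h], E[t]]) for h, rel, t in triples if rel == r]
    if not pairs:
        continue
    pairs = np.stack(pairs)                              # (k, 2*dim)
    weights = softmax(pairs @ attn_vec)                  # (k,)
    R[r] = weights @ (pairs[:, :dim] + pairs[:, dim:])   # aggregate heads + tails

# Step 2: aggregate relation information back to the entities it touches.
E_out = E.copy()
for h, r, t in triples:
    E_out[h] += 0.5 * R[r]
    E_out[t] += 0.5 * R[r]

# Step 3: global one-to-one alignment between two KGs from a similarity matrix.
# The "second KG" here is just a noisy permuted copy of the first, for demonstration.
perm = rng.permutation(num_ent)
E2 = E_out[perm] + 0.01 * rng.normal(size=E_out.shape)
sim = E_out @ E2.T                                       # fine-grained similarity matrix
rows, cols = linear_sum_assignment(-sim)                 # maximize total similarity
print("recovered one-to-one alignment:", dict(zip(rows.tolist(), cols.tolist())))
print("ground truth (KG1 -> KG2):      ", {int(p): i for i, p in enumerate(perm)})
```

The Hungarian assignment is only one way to enforce one-to-one matches from a similarity matrix; greedy and deferred-acceptance style matchers are common alternatives.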
Related papers
- OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting [49.655711022673046]
OneNet is an innovative framework that utilizes the few-shot learning capabilities of Large Language Models (LLMs) without the need for fine-tuning.
OneNet is structured around three key components prompted by LLMs: (1) an entity reduction processor that simplifies inputs by summarizing and filtering out irrelevant entities, (2) a dual-perspective entity linker that combines contextual cues and prior knowledge for precise entity linking, and (3) an entity consensus judger that employs a unique consistency algorithm to alleviate the hallucination in the entity linking reasoning.
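As a rough, hypothetical illustration of how such a prompt-driven pipeline can be chained (the prompts, helper names, and `call_llm` stub below are placeholders, not reproduced from the OneNet paper):

```python
# Hypothetical pipeline skeleton only: prompts and the call_llm stub are illustrative.
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion / instruction-following LLM call."""
    return "<llm response>"

def link_entity(mention: str, context: str, candidates: list[str]) -> str:
    # (1) Entity reduction: summarize the input and drop clearly irrelevant candidates.
    reduced = call_llm(
        f"Mention: {mention}\nContext: {context}\nCandidates: {candidates}\n"
        "Summarize the context and keep only plausible candidates."
    )
    # (2) Dual-perspective linking: one pass driven by contextual cues,
    #     one pass driven by prior knowledge about the candidates.
    contextual = call_llm(f"Using the context, pick the matching candidate.\n{reduced}")
    prior = call_llm(f"Using general knowledge only, pick the most likely candidate.\n{reduced}")
    # (3) Consensus judging: reconcile the two answers to curb hallucination.
    return call_llm(
        f"Answer A: {contextual}\nAnswer B: {prior}\n"
        "Return the answer the two perspectives agree on, or the better-supported one."
    )
```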
arXiv Detail & Related papers (2024-10-10T02:45:23Z)
- Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment [31.70064035432789]
We propose a Large Language Model-enhanced Entity Alignment framework (LLMEA)
LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across Knowledge Graphs and edit distances to a virtual equivalent entity.
Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models.
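A hedged sketch of the candidate-ranking idea, mixing embedding similarity with a normalized edit distance between entity names (the helper, weights, and score form are illustrative assumptions, not LLMEA's exact formulation):

```python
# Hedged sketch: rank candidate alignments by combining embedding similarity with a
# normalized edit distance between entity names.
import numpy as np

def levenshtein(a: str, b: str) -> int:
    # Standard one-row dynamic-programming edit distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def candidate_scores(src_vec, src_name, cand_vecs, cand_names, alpha=0.5):
    # Cosine similarity between the source entity and each candidate embedding.
    cos = cand_vecs @ src_vec / (
        np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(src_vec) + 1e-9)
    # Normalized edit distance between surface names (0 = identical strings).
    edit = np.array([levenshtein(src_name, n) / max(len(src_name), len(n), 1)
                     for n in cand_names])
    return alpha * cos + (1 - alpha) * (1 - edit)   # higher = better candidate

# Example: score three candidates for one source entity.
vecs = np.eye(3)
print(candidate_scores(vecs[0], "Berlin", vecs, ["Berlin", "Bern", "Paris"]))
```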
arXiv Detail & Related papers (2024-01-30T12:41:04Z)
- From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment [17.70562397382911]
Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings.
We transform the triples of both entities into unified textual sequences, and model the EA task as a bi-directional textual entailment task.
Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction within the original entity information.
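The core idea can be sketched as follows; the serialization template and the `nli_score` stub are placeholders for the paper's actual sequence construction and entailment model:

```python
# Rough sketch: serialize each entity's triples into a textual sequence and score
# entailment in both directions with any NLI model.
def serialize(entity: str, triples: list[tuple[str, str, str]]) -> str:
    facts = [f"{h} {r} {t}" for h, r, t in triples if h == entity or t == entity]
    return f"{entity}: " + "; ".join(facts)

def nli_score(premise: str, hypothesis: str) -> float:
    """Placeholder: return P(entailment) from any natural language inference model."""
    return 0.5

def alignment_score(e1, triples1, e2, triples2) -> float:
    s1, s2 = serialize(e1, triples1), serialize(e2, triples2)
    # Bi-directional entailment: average the two directions.
    return 0.5 * (nli_score(s1, s2) + nli_score(s2, s1))
```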
arXiv Detail & Related papers (2023-05-19T08:06:50Z)
- Exploiting Global Semantic Similarities in Knowledge Graphs by Relational Prototype Entities [55.952077365016066]
An empirical observation is that the head and tail entities connected by the same relation often share similar semantic attributes.
We propose a novel approach, which introduces a set of virtual nodes called relational prototype entities.
By encouraging the entities' embeddings to be close to their associated prototypes' embeddings, our approach effectively promotes global semantic similarity among entities.
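A minimal sketch of that idea, assuming a relation's prototype is simply the mean embedding of the entities it connects and using a plain squared-distance regularizer (the paper's virtual prototype nodes are a richer construction than this):

```python
# Minimal sketch: pull entity embeddings toward a per-relation "prototype" embedding.
import torch

def prototype_loss(ent_emb: torch.Tensor, triples: list[tuple[int, int, int]], num_rel: int):
    loss = ent_emb.new_zeros(())
    for r in range(num_rel):
        members = [h for h, rel, _ in triples if rel == r] + \
                  [t for _, rel, t in triples if rel == r]
        if not members:
            continue
        emb = ent_emb[torch.tensor(members)]
        prototype = emb.mean(dim=0)                       # "relational prototype entity"
        loss = loss + ((emb - prototype) ** 2).sum(dim=1).mean()
    return loss
```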
arXiv Detail & Related papers (2022-06-16T09:25:33Z)
- Dynamic Relation Discovery and Utilization in Multi-Entity Time Series Forecasting [92.32415130188046]
In many real-world scenarios, there may exist crucial yet implicit relations between entities.
We propose an attentional multi-graph neural network with automatic graph learning (A2GNN) in this work.
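A generic sketch of attention-based "automatic graph learning", i.e. inferring a soft adjacency matrix from entity features and propagating over it; this shows the general pattern rather than the exact A2GNN architecture:

```python
# Generic sketch: learn a soft adjacency matrix from node features, then message-pass.
import torch

def learn_graph_and_propagate(x: torch.Tensor, d_attn: int = 16) -> torch.Tensor:
    n, d = x.shape
    q_proj = torch.nn.Linear(d, d_attn, bias=False)   # learnable in a real model
    k_proj = torch.nn.Linear(d, d_attn, bias=False)
    scores = q_proj(x) @ k_proj(x).T / d_attn ** 0.5  # pairwise affinities between entities
    adj = torch.softmax(scores, dim=-1)               # learned soft adjacency matrix
    return adj @ x                                    # one step of message passing

# Example: 10 entities with 32-dimensional features.
print(learn_graph_and_propagate(torch.randn(10, 32)).shape)   # torch.Size([10, 32])
```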
arXiv Detail & Related papers (2022-02-18T11:37:04Z)
- Informed Multi-context Entity Alignment [27.679124991733907]
We propose an Informed Multi-context Entity Alignment (IMEA) model to address these issues.
In particular, we introduce Transformer to flexibly capture the relation, path, and neighborhood contexts.
Holistic reasoning is used to estimate alignment probabilities based on both embedding similarity and the relation/entity functionality.
Results on several benchmark datasets demonstrate the superiority of our IMEA model compared with existing state-of-the-art entity alignment methods.
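As a toy illustration of fusing two sources of evidence into an alignment probability (the logistic form, weights, and inputs below are assumptions, not IMEA's holistic-reasoning module):

```python
# Toy illustration: combine embedding similarity with a relation-overlap signal
# into an alignment probability via a logistic function.
import math

def alignment_probability(emb_sim: float, rel_overlap: float,
                          w_sim: float = 2.0, w_rel: float = 1.0, bias: float = -1.5) -> float:
    return 1.0 / (1.0 + math.exp(-(w_sim * emb_sim + w_rel * rel_overlap + bias)))

print(alignment_probability(emb_sim=0.9, rel_overlap=0.7))   # ~0.73
```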
arXiv Detail & Related papers (2022-01-02T06:29:30Z)
- EchoEA: Echo Information between Entities and Relations for Entity Alignment [1.1470070927586016]
We propose a novel framework, Echo Entity Alignment (EchoEA), which leverages self-attention mechanism to spread entity information to relations and echo back to entities.
Experimental results on three real-world cross-lingual datasets are stable at around 96% Hits@1 on average.
arXiv Detail & Related papers (2021-07-07T07:34:21Z)
- Neural Production Systems [90.75211413357577]
Visual environments are structured, consisting of distinct objects or entities.
To partition images into entities, deep-learning researchers have proposed structural inductive biases.
We take inspiration from cognitive science and resurrect a classic approach, which consists of a set of rule templates.
This architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information.
arXiv Detail & Related papers (2021-03-02T18:53:20Z)
- Relation-Aware Neighborhood Matching Model for Entity Alignment [8.098825914119693]
We propose a novel Relation-aware Neighborhood Matching model named RNM for entity alignment.
We show that the proposed model RNM performs better than state-of-the-art methods.
arXiv Detail & Related papers (2020-12-15T07:22:39Z)
- HittER: Hierarchical Transformers for Knowledge Graph Embeddings [85.93509934018499]
We propose HittER to learn representations of entities and relations in a complex knowledge graph.
Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets.
We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets.
arXiv Detail & Related papers (2020-08-28T18:58:15Z)
- Cross-lingual Entity Alignment with Incidental Supervision [76.66793175159192]
We propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme.
Experiments on benchmark datasets show that JEANS leads to promising improvement on entity alignment with incidental supervision.
arXiv Detail & Related papers (2020-05-01T01:53:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.