From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment
- URL: http://arxiv.org/abs/2305.11501v1
- Date: Fri, 19 May 2023 08:06:50 GMT
- Title: From Alignment to Entailment: A Unified Textual Entailment Framework for Entity Alignment
- Authors: Yu Zhao, Yike Wu, Xiangrui Cai, Ying Zhang, Haiwei Zhang, Xiaojie Yuan
- Abstract summary: Existing methods usually encode the triples of entities as embeddings and learn to align the embeddings.
We transform both the relational and attribute triples into unified textual sequences, and model the EA task as a bi-directional textual entailment task.
Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information.
- Score: 17.70562397382911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity Alignment (EA) aims to find the equivalent entities between two
Knowledge Graphs (KGs). Existing methods usually encode the triples of entities
as embeddings and learn to align the embeddings, which prevents the direct
interaction between the original information of the cross-KG entities.
Moreover, they encode the relational triples and attribute triples of an entity
in heterogeneous embedding spaces, which prevents them from helping each other.
In this paper, we transform both triples into unified textual sequences, and
model the EA task as a bi-directional textual entailment task between the
sequences of cross-KG entities. Specifically, we feed the sequences of two
entities simultaneously into a pre-trained language model (PLM) and propose two
kinds of PLM-based entity aligners that model the entailment probability
between sequences as the similarity between entities. Our approach captures the
unified correlation pattern of two kinds of information between entities, and
explicitly models the fine-grained interaction between original entity
information. The experiments on five cross-lingual EA datasets show that our
approach outperforms the state-of-the-art EA methods and enables the mutual
enhancement of the heterogeneous information. Code is available at
https://github.com/OreOZhao/TEA.
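The core idea above (verbalize both kinds of triples into one textual sequence per entity, then score a candidate pair by entailment in both directions) can be illustrated with a rough sketch. This is an assumption-laden stand-in, not the authors' TEA code: it uses an off-the-shelf NLI cross-encoder (roberta-large-mnli) in place of the paper's trained PLM-based aligners, and a simplified verbalization.

```python
# Minimal sketch, NOT the TEA implementation: bi-directional entailment
# probability from a generic NLI model used as an entity-similarity score.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed stand-in for the paper's PLM aligners
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()
# Label order differs across NLI checkpoints, so look it up rather than hard-code it.
ENTAIL_IDX = model.config.label2id.get("ENTAILMENT", 2)

def verbalize(name, relational_triples, attribute_triples):
    """Flatten an entity's relational and attribute triples into one textual sequence."""
    rel = "; ".join(f"{r} {t}" for r, t in relational_triples)
    attr = "; ".join(f"{a} is {v}" for a, v in attribute_triples)
    return f"{name}. {rel}. {attr}."

def entailment_prob(premise, hypothesis):
    """Probability that the premise sequence entails the hypothesis sequence."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAIL_IDX].item()

def entity_similarity(seq_a, seq_b):
    """Bi-directional entailment: average both directions as the alignment score."""
    return 0.5 * (entailment_prob(seq_a, seq_b) + entailment_prob(seq_b, seq_a))

seq_en = verbalize("Paris", [("capital of", "France")], [("population", "2,102,650")])
seq_fr = verbalize("Paris", [("capitale de", "France")], [("population", "2 102 650")])
print(entity_similarity(seq_en, seq_fr))
```

In the paper itself the aligners are trained for EA rather than taken off the shelf, but the scoring pattern (sequence pair in, entailment probability out, averaged over both directions) is the part this sketch tries to convey.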
Related papers
- DERA: Dense Entity Retrieval for Entity Alignment in Knowledge Graphs [3.500936203815729]
We propose a dense entity retrieval framework for Entity Alignment (EA).
We leverage language models to uniformly encode various features of entities and facilitate nearest entity search across Knowledge Graphs (KGs).
Our approach achieves state-of-the-art performance compared to existing EA methods.
arXiv Detail & Related papers (2024-08-02T10:12:42Z)
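As a rough illustration of the dense-retrieval recipe in the DERA entry above (encode entity features with a language model, then search for the nearest entity in the other KG), the following sketch uses an assumed off-the-shelf sentence encoder and cosine similarity; it is not DERA's code.

```python
# Hedged sketch of LM-based entity encoding plus nearest-neighbour alignment.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in encoder

kg1 = ["Paris | capital of France | population 2.1M"]
kg2 = ["Lyon | city in France",
       "Paris | capitale de la France | 2,1 millions d'habitants"]

emb1 = encoder.encode(kg1, normalize_embeddings=True)  # shape (n1, d), unit-normalized
emb2 = encoder.encode(kg2, normalize_embeddings=True)  # shape (n2, d)

# With unit-normalized vectors, cosine similarity is a plain dot product.
scores = emb1 @ emb2.T
for i, j in enumerate(scores.argmax(axis=1)):
    print(kg1[i], "->", kg2[j])
```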
- EAGER: Two-Stream Generative Recommender with Behavior-Semantic Collaboration [63.112790050749695]
We introduce EAGER, a novel generative recommendation framework that seamlessly integrates both behavioral and semantic information.
We validate the effectiveness of EAGER on four public benchmarks, demonstrating its superior performance compared to existing methods.
arXiv Detail & Related papers (2024-06-20T06:21:56Z)
- DRIN: Dynamic Relation Interactive Network for Multimodal Entity Linking [31.15972952813689]
We propose a novel framework called Dynamic Relation Interactive Network (DRIN) for MEL tasks.
DRIN explicitly models four different types of alignment between a mention and entity and builds a dynamic Graph Convolutional Network (GCN) to dynamically select the corresponding alignment relations for different input samples.
Experiments on two datasets show that DRIN outperforms state-of-the-art methods by a large margin, demonstrating the effectiveness of our approach.
arXiv Detail & Related papers (2023-10-09T10:21:42Z)
- Prototype-based Embedding Network for Scene Graph Generation [105.97836135784794]
Current Scene Graph Generation (SGG) methods explore contextual information to predict relationships among entity pairs.
Due to the diverse visual appearance of numerous possible subject-object combinations, there is a large intra-class variation within each predicate category.
Prototype-based Embedding Network (PE-Net) models entities/predicates with prototype-aligned compact and distinctive representations.
Prototype-guided Learning (PL) is introduced to help PE-Net efficiently learn such entity-predicate matching, and Prototype Regularization (PR) is devised to relieve the ambiguous entity-predicate matching.
arXiv Detail & Related papers (2023-03-13T13:30:59Z)
- Informed Multi-context Entity Alignment [27.679124991733907]
We propose an Informed Multi-context Entity Alignment (IMEA) model to address these issues.
In particular, we introduce Transformer to flexibly capture the relation, path, and neighborhood contexts.
Holistic reasoning is used to estimate alignment probabilities based on both embedding similarity and relation/entity functionality.
Results on several benchmark datasets demonstrate the superiority of our IMEA model compared with existing state-of-the-art entity alignment methods.
arXiv Detail & Related papers (2022-01-02T06:29:30Z)
- UniRE: A Unified Label Space for Entity Relation Extraction [67.53850477281058]
Joint entity relation extraction models set up two separate label spaces for the two sub-tasks.
We argue that this setting may hinder the information interaction between entities and relations.
In this work, we propose to eliminate the different treatment of the two sub-tasks' label spaces.
arXiv Detail & Related papers (2021-07-09T08:09:37Z)
- Cross-lingual Entity Alignment with Adversarial Kernel Embedding and Adversarial Knowledge Translation [35.77482102674059]
Cross-lingual entity alignment often suffers from challenges ranging from feature inconsistency to sequence context unawareness.
This paper presents a dual adversarial learning framework for cross-lingual entity alignment, DAEA, with two original contributions.
arXiv Detail & Related papers (2021-04-16T00:57:28Z)
- RAGA: Relation-aware Graph Attention Networks for Global Entity Alignment [14.287681294725438]
We propose a novel framework based on Relation-aware Graph Attention Networks to capture the interactions between entities and relations.
Our framework adopts the self-attention mechanism to spread entity information to the relations and then aggregate relation information back to entities.
arXiv Detail & Related papers (2021-03-01T06:30:51Z)
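The entity-relation interaction described in the RAGA entry above (attention spreads entity information to relations and then aggregates relation information back to entities) can be pictured with a generic sketch. This is a toy illustration under assumptions, not RAGA's architecture:

```python
# Toy entity<->relation message passing with attention (illustrative only).
import torch
import torch.nn as nn

class EntityRelationInteraction(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.ent_to_rel = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.rel_to_ent = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, ent, rel):
        # Relations attend over entities to absorb entity information...
        rel_upd, _ = self.ent_to_rel(rel, ent, ent)
        # ...then entities attend over the updated relations to aggregate it back.
        ent_upd, _ = self.rel_to_ent(ent, rel_upd, rel_upd)
        return ent + ent_upd, rel + rel_upd

ent = torch.randn(1, 5, 64)  # 5 entity embeddings of dimension 64
rel = torch.randn(1, 3, 64)  # 3 relation embeddings of dimension 64
ent_out, rel_out = EntityRelationInteraction(64)(ent, rel)
print(ent_out.shape, rel_out.shape)
```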
- Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task whose tag set is composed of trigger tags and entity tags.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
- Exploring and Evaluating Attributes, Values, and Structures for Entity Alignment [100.19568734815732]
Entity alignment (EA) aims at building a unified Knowledge Graph (KG) of rich content by linking the equivalent entities from various KGs.
Attribute triples can also provide a crucial alignment signal but have not been well explored yet.
We propose to utilize an attributed value encoder and partition the KG into subgraphs to model the various types of attribute triples efficiently.
arXiv Detail & Related papers (2020-10-07T08:03:58Z)
- HittER: Hierarchical Transformers for Knowledge Graph Embeddings [85.93509934018499]
We propose HittER to learn representations of entities and relations in a complex knowledge graph.
Experimental results show that HittER achieves new state-of-the-art results on multiple link prediction datasets.
We additionally propose a simple approach to integrate HittER into BERT and demonstrate its effectiveness on two Freebase factoid question answering datasets.
arXiv Detail & Related papers (2020-08-28T18:58:15Z)