Two Heads Are Better Than One: Integrating Knowledge from Knowledge
Graphs and Large Language Models for Entity Alignment
- URL: http://arxiv.org/abs/2401.16960v1
- Date: Tue, 30 Jan 2024 12:41:04 GMT
- Authors: Linyao Yang and Hongyang Chen and Xiao Wang and Jing Yang and Fei-Yue
Wang and Han Liu
- Abstract summary: We propose a Large Language Model-enhanced Entity Alignment framework (LLMEA).
LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across Knowledge Graphs and edit distances to a virtual equivalent entity.
Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity alignment, which is a prerequisite for creating a more comprehensive
Knowledge Graph (KG), involves pinpointing equivalent entities across disparate
KGs. Contemporary methods for entity alignment have predominantly utilized
knowledge embedding models to procure entity embeddings that encapsulate
various similarities: structural, relational, and attributive. These embeddings
are then integrated through attention-based information fusion mechanisms.
Despite this progress, effectively harnessing multifaceted information remains
challenging due to inherent heterogeneity. Moreover, while Large Language
Models (LLMs) have exhibited exceptional performance across diverse downstream
tasks by implicitly capturing entity semantics, this implicit knowledge has yet
to be exploited for entity alignment. In this study, we propose a Large
Language Model-enhanced Entity Alignment framework (LLMEA), integrating
structural knowledge from KGs with semantic knowledge from LLMs to enhance
entity alignment. Specifically, LLMEA identifies candidate alignments for a
given entity by considering both embedding similarities between entities across
KGs and edit distances to a virtual equivalent entity. It then engages an LLM
iteratively, posing multiple multi-choice questions to draw upon the LLM's
inference capability. The final prediction of the equivalent entity is derived
from the LLM's output. Experiments conducted on three public datasets reveal
that LLMEA surpasses leading baseline models. Additional ablation studies
underscore the efficacy of our proposed framework.
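As a rough illustration of the candidate-selection step described in the abstract, the sketch below ranks target-KG entities by combining embedding cosine similarity with a normalized edit-distance similarity to the name of a virtual equivalent entity, then wraps the top candidates in a multi-choice question for an LLM. This is a minimal sketch, not the paper's implementation: `candidate_alignments`, `multichoice_prompt`, and the additive scoring are illustrative assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors (plain lists).
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    if nu == 0.0 or nv == 0.0:
        return 0.0
    return dot / (nu * nv)

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def candidate_alignments(query_emb, virtual_name, targets, k=3):
    # targets: list of (entity_name, embedding) pairs from the other KG.
    # Score = embedding similarity + normalized edit-distance similarity
    # to the virtual equivalent entity's name (an assumed combination).
    scored = []
    for name, emb in targets:
        ed = levenshtein(virtual_name, name)
        ed_sim = 1.0 - ed / max(len(virtual_name), len(name), 1)
        scored.append((cosine(query_emb, emb) + ed_sim, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:k]]

def multichoice_prompt(entity, candidates):
    # Frame the top candidates as a multi-choice question for the LLM.
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(candidates))
    return f"Which entity is equivalent to '{entity}'?\n{options}"
```

In the full framework this prompt would be posed iteratively, with the LLM's chosen option taken as the final equivalent-entity prediction.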
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting [49.655711022673046]
OneNet is an innovative framework that utilizes the few-shot learning capabilities of Large Language Models (LLMs) without the need for fine-tuning.
OneNet is structured around three key LLM-prompted components: (1) an entity reduction processor that simplifies inputs by summarizing and filtering out irrelevant entities, (2) a dual-perspective entity linker that combines contextual cues and prior knowledge for precise entity linking, and (3) an entity consensus judger that employs a consistency algorithm to mitigate hallucination in entity-linking reasoning.
arXiv Detail & Related papers (2024-10-10T02:45:23Z)
- DERA: Dense Entity Retrieval for Entity Alignment in Knowledge Graphs [3.500936203815729]
We propose a dense entity retrieval framework for Entity Alignment (EA).
We leverage language models to uniformly encode various features of entities and facilitate nearest entity search across Knowledge Graphs (KGs).
Our approach achieves state-of-the-art performance compared to existing EA methods.
arXiv Detail & Related papers (2024-08-02T10:12:42Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Attribute-Consistent Knowledge Graph Representation Learning for Multi-Modal Entity Alignment [14.658282035561792]
We propose a novel attribute-consistent knowledge graph representation learning framework for MMEA (ACK-MMEA).
Our approach achieves excellent performance compared to its competitors.
arXiv Detail & Related papers (2023-04-04T06:39:36Z)
- Multi-modal Contrastive Representation Learning for Entity Alignment [57.92705405276161]
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs.
We propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model.
In particular, MCLEA first learns multiple individual representations from multiple modalities, and then performs contrastive learning to jointly model intra-modal and inter-modal interactions.
arXiv Detail & Related papers (2022-09-02T08:59:57Z)
- Informed Multi-context Entity Alignment [27.679124991733907]
We propose an Informed Multi-context Entity Alignment (IMEA) model to address these issues.
In particular, we introduce a Transformer to flexibly capture the relation, path, and neighborhood contexts.
Holistic reasoning is used to estimate alignment probabilities based on both embedding similarity and relation/entity functionality.
Results on several benchmark datasets demonstrate the superiority of our IMEA model compared with existing state-of-the-art entity alignment methods.
arXiv Detail & Related papers (2022-01-02T06:29:30Z)
- RAGA: Relation-aware Graph Attention Networks for Global Entity Alignment [14.287681294725438]
We propose a novel framework based on Relation-aware Graph Attention Networks to capture the interactions between entities and relations.
Our framework adopts the self-attention mechanism to spread entity information to the relations and then aggregate relation information back to entities.
arXiv Detail & Related papers (2021-03-01T06:30:51Z)
- Cross-lingual Entity Alignment with Incidental Supervision [76.66793175159192]
We propose an incidentally supervised model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme.
Experiments on benchmark datasets show that JEANS leads to promising improvement on entity alignment with incidental supervision.
arXiv Detail & Related papers (2020-05-01T01:53:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.