Unlocking the Power of Large Language Models for Entity Alignment
- URL: http://arxiv.org/abs/2402.15048v2
- Date: Wed, 09 Oct 2024 03:22:46 GMT
- Title: Unlocking the Power of Large Language Models for Entity Alignment
- Authors: Xuhui Jiang, Yinghan Shen, Zhichao Shi, Chengjin Xu, Wei Li, Zixuan Li, Jian Guo, Huawei Shen, Yuanzhuo Wang
- Abstract summary: ChatEA is an innovative framework that incorporates large language models (LLMs) to improve EA.
To address the constraints of limited input KG data, ChatEA introduces a KG-code translation module.
To overcome the over-reliance on entity embedding comparisons, ChatEA implements a two-stage EA strategy.
- Score: 29.628079581217374
- Abstract: Entity Alignment (EA) is vital for integrating diverse knowledge graph (KG) data, playing a crucial role in data-driven AI applications. Traditional EA methods primarily rely on comparing entity embeddings, but their effectiveness is constrained by the limited input KG data and the capabilities of the representation learning techniques. Against this backdrop, we introduce ChatEA, an innovative framework that incorporates large language models (LLMs) to improve EA. To address the constraints of limited input KG data, ChatEA introduces a KG-code translation module that translates KG structures into a format understandable by LLMs, thereby allowing LLMs to utilize their extensive background knowledge to improve EA accuracy. To overcome the over-reliance on entity embedding comparisons, ChatEA implements a two-stage EA strategy that capitalizes on LLMs' capability for multi-step reasoning in a dialogue format, thereby enhancing accuracy while preserving efficiency. Our experimental results verify ChatEA's superior performance, highlighting LLMs' potential in facilitating EA tasks.
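To make the KG-code translation idea concrete, here is a minimal Python sketch of rendering an entity's one-hop neighborhood as a code-like snippet that can be dropped into an LLM prompt. The entity, attributes, and relations are hypothetical, and the actual ChatEA module may format its output differently.

```python
# Minimal sketch of a KG-code translation step: render an entity's local
# subgraph as a code-like snippet an LLM can read. Names are illustrative.

def entity_to_code(name, attributes, neighbors):
    """Render one entity and its 1-hop neighborhood as a pseudo-class."""
    lines = [f"class {name.replace(' ', '_')}:"]
    for key, value in attributes.items():
        lines.append(f"    {key} = {value!r}")
    for relation, target in neighbors:
        lines.append(f"    # ({name}) --[{relation}]--> ({target})")
    return "\n".join(lines)

snippet = entity_to_code(
    "Berlin",
    {"population": 3_700_000, "country": "Germany"},
    [("capital_of", "Germany"), ("located_in", "Europe")],
)
print(snippet)  # paste into an LLM prompt alongside a candidate entity
```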
Related papers
- Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding [27.84669070734852]
Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training.
We introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time.
Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods.
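A toy sketch of the general draft-then-verify fusion idea (not CoSD's actual algorithm): one model proposes a token and a second model accepts or overrides it. The probability tables and threshold are illustrative stand-ins for real LLM outputs.

```python
# Toy sketch of test-time knowledge fusion via speculative decoding:
# a draft model proposes a token, a verifier model keeps it only if the
# verifier's own probability for that token clears a threshold.
# The two "models" here are stand-in probability tables, not real LLMs.

draft_probs = {"paris": 0.7, "london": 0.2, "rome": 0.1}
verify_probs = {"paris": 0.1, "london": 0.8, "rome": 0.1}

def fuse_step(draft, verifier, threshold=0.3):
    proposal = max(draft, key=draft.get)          # draft model's top token
    if verifier.get(proposal, 0.0) >= threshold:  # verifier accepts
        return proposal
    return max(verifier, key=verifier.get)        # else verifier overrides

print(fuse_step(draft_probs, verify_probs))  # -> "london"
```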
arXiv Detail & Related papers (2025-02-11T23:40:53Z)
- Federated Fine-Tuning of LLMs: Framework Comparison and Research Directions [59.5243730853157]
Federated learning (FL) provides a privacy-preserving solution for fine-tuning pre-trained large language models (LLMs) using distributed private datasets.
This article conducts a comparative analysis of three advanced federated LLM (FedLLM) frameworks that integrate knowledge distillation (KD) and split learning (SL) to mitigate the costs of federated fine-tuning.
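As a rough illustration of the federated setting these frameworks build on, below is a minimal FedAvg-style averaging step; the client weights are hypothetical, and real FedLLM systems layer KD or SL on top of this.

```python
# Minimal FedAvg-style sketch: clients fine-tune locally and the server
# averages their (adapter) weights; real FedLLM frameworks add KD/SL on top.

def fed_avg(client_weights):
    """Average per-parameter weights across clients (lists of floats)."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Hypothetical adapter weights from three clients after a local epoch.
clients = [[0.10, 0.20], [0.30, 0.10], [0.20, 0.30]]
global_update = fed_avg(clients)
print(global_update)  # ~[0.2, 0.2]
```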
arXiv Detail & Related papers (2025-01-08T11:37:06Z)
- LLM-Align: Utilizing Large Language Models for Entity Alignment in Knowledge Graphs [22.621781704528786]
Embedding-based entity alignment (EA) has recently gained considerable attention.
EA seeks to identify and match corresponding entities across different Knowledge Graphs (KGs).
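A minimal sketch of the embedding-comparison step that underlies embedding-based EA (not LLM-Align's specific pipeline); the entity names and vectors below are made up.

```python
import math

# Sketch of embedding-based EA: match each entity in KG1 to its most
# similar entity in KG2 by cosine similarity. Embeddings are toy vectors.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

kg1 = {"Berlin": [0.9, 0.1], "Paris": [0.1, 0.9]}
kg2 = {"Berlin_(city)": [0.85, 0.15], "Paris_(ville)": [0.2, 0.8]}

for e1, v1 in kg1.items():
    best = max(kg2, key=lambda e2: cosine(v1, kg2[e2]))
    print(e1, "->", best)
```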
arXiv Detail & Related papers (2024-12-06T01:05:37Z)
- Combining Knowledge Graphs and Large Language Models [4.991122366385628]
Large language models (LLMs) show astonishing results in language understanding and generation.
They still show some disadvantages, such as hallucinations and lack of domain-specific knowledge.
These issues can be effectively mitigated by incorporating knowledge graphs (KGs).
This work collected 28 papers outlining methods for KG-powered LLMs, LLM-based KGs, and LLM-KG hybrid approaches.
arXiv Detail & Related papers (2024-07-09T05:42:53Z)
- Entity Alignment with Noisy Annotations from Large Language Models [15.189701951003611]
We propose a unified framework, LLM4EA, to effectively leverage Large Language Models for EA.
Specifically, we design a novel active learning policy to significantly reduce the annotation space.
We iteratively optimize the policy based on the feedback from a base EA model.
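A toy rendering of such an annotate-retrain loop, under the assumption that uncertainty-based selection drives the policy; the labeler, candidate pairs, and scores are all stand-ins.

```python
import random

# Sketch of an LLM4EA-style loop: an active-learning policy picks the
# most uncertain candidate pairs for (noisy) LLM annotation, and the base
# EA model's feedback reshapes the next selection round.

random.seed(0)
candidates = [("e%d" % i, "f%d" % i) for i in range(20)]
uncertainty = {pair: random.random() for pair in candidates}

def llm_annotate(pair):              # stand-in for a noisy LLM labeler
    return random.random() > 0.2     # ~80% chance the pair is a match

labels = {}
for round_id in range(3):
    # Policy: query the k pairs the base model is least sure about.
    pool = sorted((p for p in candidates if p not in labels),
                  key=uncertainty.get, reverse=True)[:3]
    for pair in pool:
        labels[pair] = llm_annotate(pair)
        uncertainty[pair] = 0.0      # feedback: resolved pairs drop out
print(len(labels), "pairs annotated")
```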
arXiv Detail & Related papers (2024-05-27T03:52:55Z)
- LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
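One simple way to picture "synergizing" the two knowledge sources is a convex combination of an LLM-derived content embedding and a collaborative-filtering embedding; this weighting scheme is an illustrative assumption, not LEARN's actual architecture.

```python
# Toy sketch of fusing an open-world (LLM-derived) item embedding with a
# collaborative-filtering embedding. Vectors and weight are illustrative.

def fuse(llm_vec, cf_vec, alpha=0.5):
    """Convex combination of an LLM content embedding and a CF embedding."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(llm_vec, cf_vec)]

item_llm = [0.8, 0.1, 0.3]   # from the item's text description
item_cf = [0.2, 0.7, 0.4]    # from user-item interaction history
print(fuse(item_llm, item_cf))
```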
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction [62.409807640887834]
Chinese Grammatical Error Correction (CGEC) aims to correct all potential grammatical errors in the input sentences.
LLMs' performance as correctors on CGEC remains unsatisfactory due to the challenging nature of the task.
We rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored in CGEC.
arXiv Detail & Related papers (2024-02-18T01:40:34Z)
- KG-Agent: An Efficient Autonomous Agent Framework for Complex Reasoning over Knowledge Graph [134.8631016845467]
We propose an autonomous LLM-based agent framework, called KG-Agent.
In KG-Agent, we integrate the LLM, multifunctional toolbox, KG-based executor, and knowledge memory.
To guarantee effectiveness, we leverage a programming language to formulate the multi-hop reasoning process over the KG.
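A toy example of program-formulated multi-hop reasoning over a KG: a short "program" of tool calls that an executor runs hop by hop. The KG facts and tool interface are hypothetical, not KG-Agent's actual toolbox.

```python
# Sketch of program-style multi-hop reasoning over a KG: the "program" is
# a list of tool calls a KG-based executor runs step by step.

kg = {
    ("Alan_Turing", "born_in"): "London",
    ("London", "capital_of"): "United_Kingdom",
}

def get_neighbor(entity, relation):
    return kg.get((entity, relation))

tools = {"get_neighbor": get_neighbor}
program = [("get_neighbor", "Alan_Turing", "born_in"),
           ("get_neighbor", "<prev>", "capital_of")]

result = None
for op, entity, relation in program:
    entity = result if entity == "<prev>" else entity
    result = tools[op](entity, relation)  # executor runs one hop
print(result)  # -> "United_Kingdom"
```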
arXiv Detail & Related papers (2024-02-17T02:07:49Z)
- Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment [31.70064035432789]
We propose a Large Language Model-enhanced Entity Alignment framework (LLMEA).
LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across Knowledge Graphs and edit distances to a virtual equivalent entity.
Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models.
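A minimal sketch of combining the two signals LLMEA reportedly uses, with difflib's ratio standing in for a normalized edit-distance similarity; the virtual entity name, candidates, and weight are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Sketch of LLMEA-style candidate ranking: combine embedding similarity
# with surface similarity to a "virtual equivalent entity" name.

def name_sim(a, b):          # proxy for (1 - normalized edit distance)
    return SequenceMatcher(None, a, b).ratio()

def score(emb_sim, cand_name, virtual_name, w=0.6):
    return w * emb_sim + (1 - w) * name_sim(cand_name, virtual_name)

virtual = "Federal Republic of Germany"   # LLM-generated equivalent name
candidates = {"Germany": 0.92, "Ghana": 0.40}  # name -> embedding similarity
ranked = sorted(candidates,
                key=lambda c: score(candidates[c], c, virtual),
                reverse=True)
print(ranked[0])  # -> "Germany"
```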
arXiv Detail & Related papers (2024-01-30T12:41:04Z)
- Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty [52.72790059506241]
The Open Information Extraction (OIE) task aims at extracting structured facts from unstructured text.
Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks.
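For orientation, a minimal few-shot OIE prompt builder of the kind such studies vary; the demonstration and instruction wording are assumptions, not the paper's actual prompts.

```python
# Sketch of few-shot OIE prompting: demonstrations of (subject; relation;
# object) extraction are prepended to the input sentence.

demos = [
    ("Marie Curie won the Nobel Prize.",
     "(Marie Curie; won; the Nobel Prize)"),
]

def build_prompt(sentence):
    parts = ["Extract (subject; relation; object) triples."]
    for text, triple in demos:
        parts.append(f"Sentence: {text}\nTriples: {triple}")
    parts.append(f"Sentence: {sentence}\nTriples:")
    return "\n\n".join(parts)

print(build_prompt("Alan Turing founded computer science."))
```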
arXiv Detail & Related papers (2023-09-07T01:35:24Z)
- A Simple but Effective Pluggable Entity Lookup Table for Pre-trained Language Models [93.39977756450354]
We propose to build a simple but effective Pluggable Entity Lookup Table (PELT) on demand.
PELT can be compatibly plugged as inputs to infuse entity supplemental knowledge into pre-trained language models.
Experiments on knowledge-related tasks demonstrate that our method, PELT, can flexibly and effectively transfer entity knowledge from related corpora into PLMs.
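A schematic of the lookup-table idea, assuming the entity embedding is a mean pool of contextual vectors at the entity's corpus occurrences; a real PELT builds these from PLM hidden states and injects them into the model's input embedding sequence.

```python
# Sketch of a PELT-style lookup: an entity's input embedding is built by
# aggregating contextual vectors from the entity's corpus occurrences,
# then stored in a table to be plugged in at inference. Toy vectors only.

occurrence_vecs = {
    "Berlin": [[0.8, 0.2], [0.6, 0.4]],   # contextual vectors per mention
    "Paris":  [[0.1, 0.9]],
}

lookup = {}
for entity, vecs in occurrence_vecs.items():
    n = len(vecs)
    lookup[entity] = [sum(col) / n for col in zip(*vecs)]  # mean-pool

print(lookup["Berlin"])  # ~[0.7, 0.3]; plug into the PLM input on demand
```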
arXiv Detail & Related papers (2022-02-27T16:30:22Z)