Attribute-Consistent Knowledge Graph Representation Learning for
Multi-Modal Entity Alignment
- URL: http://arxiv.org/abs/2304.01563v1
- Date: Tue, 4 Apr 2023 06:39:36 GMT
- Title: Attribute-Consistent Knowledge Graph Representation Learning for
Multi-Modal Entity Alignment
- Authors: Qian Li, Shu Guo, Yangyifei Luo, Cheng Ji, Lihong Wang, Jiawei Sheng,
Jianxin Li
- Abstract summary: We propose a novel attribute-consistent knowledge graph representation learning framework for MMEA (ACK-MMEA).
On two benchmark datasets, our approach outperforms its competitors.
- Score: 14.658282035561792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modal entity alignment (MMEA) aims to find all equivalent entity
pairs between multi-modal knowledge graphs (MMKGs). Rich attributes and
neighboring entities are valuable for the alignment task, but existing works
ignore the contextual gap problem: aligned entities may have different numbers
of attributes in a given modality, which distorts the learned entity
representations. In this paper, we propose a novel attribute-consistent
knowledge graph representation learning framework for MMEA (ACK-MMEA) that
compensates for these contextual gaps by incorporating consistent alignment
knowledge. Attribute-consistent KGs (ACKGs) are first constructed via
multi-modal attribute uniformization with merge and generate operators, so that
each entity has one and only one uniform feature in each modality. The ACKGs
are then fed into a relation-aware graph neural network with random dropouts to
obtain aggregated relation representations and robust entity representations.
To evaluate how well ACK-MMEA facilitates entity alignment, we design a joint
alignment loss that covers both entity and attribute evaluation. Extensive
experiments on two benchmark datasets show that our approach achieves
excellent performance compared to its competitors.
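To make the uniformization step concrete, here is a minimal Python sketch in which merge and generate operators leave every entity with exactly one feature per modality. The operator choices (mean-pooling to merge, neighbor averaging to generate) and all function names are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of attribute uniformization: every entity ends up
# with exactly one feature vector per modality.
import torch

def merge(features: list[torch.Tensor]) -> torch.Tensor:
    """Merge operator: collapse several same-modality attribute features
    into one uniform feature (here: simple mean-pooling)."""
    return torch.stack(features).mean(dim=0)

def generate(neighbor_features: list[torch.Tensor], dim: int) -> torch.Tensor:
    """Generate operator: synthesize a feature for a missing modality,
    here from neighbors' features; zeros if no neighbor has one."""
    if neighbor_features:
        return torch.stack(neighbor_features).mean(dim=0)
    return torch.zeros(dim)

def uniformize(entity_feats: dict[str, list[torch.Tensor]],
               neighbor_feats: dict[str, list[torch.Tensor]],
               modalities: list[str], dim: int) -> dict[str, torch.Tensor]:
    """Return exactly one feature per modality for a single entity."""
    uniform = {}
    for m in modalities:
        feats = entity_feats.get(m, [])
        if feats:                      # one or more features -> merge
            uniform[m] = merge(feats)
        else:                          # no feature at all -> generate
            uniform[m] = generate(neighbor_feats.get(m, []), dim)
    return uniform
```

For example, an entity holding three image-attribute features but no textual one would come out of `uniformize` with a single merged image feature and a single generated textual feature, closing the contextual gap with its aligned counterpart.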
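In the same hedged spirit, a compact sketch of the remaining two stages: a relation-aware aggregation layer with random dropout, and a joint loss combining an entity term with an attribute term. Layer shapes, the dropout site, the negative-sampling scheme, and the weight `alpha` are all assumptions rather than the paper's definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareLayer(nn.Module):
    """One message-passing layer that conditions messages on relations and
    applies random dropout for robustness (a sketch, not the paper's GNN)."""
    def __init__(self, dim: int, n_relations: int, p_drop: float = 0.2):
        super().__init__()
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.drop = nn.Dropout(p_drop)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, h, edges, rels):
        # h: (N, d) entity features; edges: (E, 2) long tensor of src->dst;
        # rels: (E,) long tensor of relation ids.
        msg = self.drop(torch.cat([h[edges[:, 0]], self.rel_emb(rels)], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, edges[:, 1], self.proj(msg))
        return F.relu(h + agg)  # residual neighborhood aggregation

def joint_alignment_loss(h1, h2, a1, a2, neg2, margin=1.0, alpha=0.5):
    """Entity term: margin ranking of aligned vs. negative entity pairs.
    Attribute term: uniformized features of aligned entities should agree."""
    pos = F.pairwise_distance(h1, h2)
    neg = F.pairwise_distance(h1, neg2)
    entity_term = F.relu(margin + pos - neg).mean()
    attr_term = F.pairwise_distance(a1, a2).mean()
    return entity_term + alpha * attr_term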
Related papers
- MCSFF: Multi-modal Consistency and Specificity Fusion Framework for Entity Alignment [7.109735168520378] (2024-10-18)
Multi-modal entity alignment (MMEA) is essential for enhancing knowledge graphs and improving question-answering systems.
Existing methods often focus on integrating modalities through their complementarity but overlook the specificity of each modality.
We propose the Multi-modal Consistency and Specificity Fusion Framework (MCSFF), which innovatively integrates both complementary and specific aspects of modalities.
- OneNet: A Fine-Tuning Free Framework for Few-Shot Entity Linking via Large Language Model Prompting [49.655711022673046] (2024-10-10)
OneNet is an innovative framework that utilizes the few-shot learning capabilities of Large Language Models (LLMs) without the need for fine-tuning.
OneNet is structured around three key components prompted by LLMs: (1) an entity reduction processor that simplifies inputs by summarizing and filtering out irrelevant entities, (2) a dual-perspective entity linker that combines contextual cues and prior knowledge for precise entity linking, and (3) an entity consensus judger that employs a unique consistency algorithm to alleviate the hallucination in the entity linking reasoning.
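A fine-tuning-free pipeline of this shape might look like the following sketch; `llm` stands in for any chat-completion call, and the prompts are paraphrased assumptions, not OneNet's actual templates.

```python
from typing import Callable

def one_net(mention: str, context: str, candidates: list[str],
            llm: Callable[[str], str]) -> str:
    """Fine-tuning-free entity linking via three prompted stages."""
    # (1) Entity reduction: summarize the context and drop irrelevant
    #     candidates before any linking decision is made.
    reduced = llm(f"Summarize this context for the mention '{mention}' and "
                  f"keep only plausible candidates from {candidates}:\n{context}")
    # (2) Dual-perspective linking: one answer grounded in the context,
    #     one grounded in the model's prior knowledge.
    ctx_answer = llm(f"Using only this summary, link '{mention}': {reduced}")
    prior_answer = llm(f"Using general knowledge only, link '{mention}' "
                       f"given candidates {candidates}.")
    # (3) Consensus judging: reconcile the two answers to curb hallucination.
    return llm(f"Two proposed links for '{mention}': '{ctx_answer}' and "
               f"'{prior_answer}'. Return the single consistent entity.")
```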
arXiv Detail & Related papers (2024-10-10T02:45:23Z) - DERA: Dense Entity Retrieval for Entity Alignment in Knowledge Graphs [3.500936203815729]
We propose a dense entity retrieval framework for Entity Alignment (EA).
We leverage language models to uniformly encode various features of entities and to facilitate nearest-entity search across Knowledge Graphs (KGs).
Our approach achieves state-of-the-art performance compared to existing EA methods.
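A minimal dense-retrieval sketch in this spirit: verbalize each entity's features into text, encode them with an off-the-shelf sentence encoder (a stand-in choice; DERA's actual encoder and feature serialization may differ), and run nearest-neighbor search across the two KGs.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

def verbalize(entity: dict) -> str:
    # Uniformly encode name, attributes, and neighbors as one string.
    return f"{entity['name']}; attrs: {entity['attrs']}; nbrs: {entity['nbrs']}"

def align(kg1: list[dict], kg2: list[dict]) -> list[tuple[str, str]]:
    e1 = model.encode([verbalize(e) for e in kg1], normalize_embeddings=True)
    e2 = model.encode([verbalize(e) for e in kg2], normalize_embeddings=True)
    sims = e1 @ e2.T               # cosine similarity matrix, (|KG1|, |KG2|)
    best = sims.argmax(axis=1)     # nearest KG2 entity for each KG1 entity
    return [(kg1[i]["name"], kg2[j]["name"]) for i, j in enumerate(best)]
```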
arXiv Detail & Related papers (2024-08-02T10:12:42Z) - NativE: Multi-modal Knowledge Graph Completion in the Wild [51.80447197290866]
We propose NativE, a comprehensive framework for multi-modal knowledge graph completion (MMKGC) in the wild.
NativE introduces a relation-guided dual adaptive fusion module that enables adaptive fusion of arbitrary modalities.
We construct a new benchmark called WildKGC with five datasets to evaluate our method.
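One plausible reading of relation-guided adaptive fusion, sketched under assumptions (this is not NativE's actual module): per-modality weights are computed from the relation embedding, so the same entity can be fused differently under different relations.

```python
import torch
import torch.nn as nn

class RelationGuidedFusion(nn.Module):
    """Fuse per-modality entity features with weights derived from the
    relation embedding (a hypothetical sketch)."""
    def __init__(self, dim: int, n_modalities: int):
        super().__init__()
        self.gate = nn.Linear(dim, n_modalities)  # relation -> modality logits

    def forward(self, modal_feats: torch.Tensor, rel_emb: torch.Tensor):
        # modal_feats: (B, M, d), one feature per modality; rel_emb: (B, d).
        weights = torch.softmax(self.gate(rel_emb), dim=-1)      # (B, M)
        return (weights.unsqueeze(-1) * modal_feats).sum(dim=1)  # (B, d)
```

The design point this illustrates: because the fusion weights are a function of the relation, a visually grounded relation can upweight the image modality while a literal one upweights text.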
- Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment [31.70064035432789] (2024-01-30)
We propose a Large Language Model-enhanced Entity Alignment framework (LLMEA).
LLMEA identifies candidate alignments for a given entity by considering both embedding similarities between entities across Knowledge Graphs and edit distances to a virtual equivalent entity.
Experiments conducted on three public datasets reveal that LLMEA surpasses leading baseline models.
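A hedged sketch of such candidate ranking: blend embedding cosine similarity with edit distance between candidate names and the name of a virtual equivalent entity (here assumed to be produced upstream, e.g. by an LLM). The 0.7/0.3 weighting is an assumption for illustration.

```python
import numpy as np

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def rank_candidates(query_emb, cand_embs, virtual_name, cand_names, w=0.7):
    # Embedding evidence: cosine similarity to each candidate.
    cos = cand_embs @ query_emb / (
        np.linalg.norm(cand_embs, axis=1) * np.linalg.norm(query_emb))
    # Surface evidence: normalized edit similarity to the virtual entity name.
    edit = np.array([levenshtein(virtual_name, n) for n in cand_names])
    edit_sim = 1.0 - edit / np.maximum(len(virtual_name),
                                       [len(n) for n in cand_names])
    scores = w * cos + (1 - w) * edit_sim
    return np.argsort(-scores)  # candidate indices, best first
```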
- Heterogeneous Entity Matching with Complex Attribute Associations using BERT and Neural Networks [0.7252027234425334] (2023-09-20)
We introduce a novel entity matching model, dubbed the Entity Matching Model for Capturing Complex Attribute Relationships (EMM-CCAR).
Specifically, this model transforms the matching task into a sequence matching problem to mitigate the impact of varying data formats.
Compared with the DER-SSM and Ditto approaches, our model achieves improvements of approximately 4% and 1% in F1 score, respectively.
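Casting entity matching as BERT sequence-pair classification could look like the sketch below; the Ditto-style [COL]/[VAL] serialization and the untrained classification head are illustrative assumptions, not EMM-CCAR's architecture.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # labels: no-match / match

def serialize(record: dict) -> str:
    """Flatten heterogeneous attributes into one sequence, so differing
    data formats reduce to a common representation."""
    return " ".join(f"[COL] {k} [VAL] {v}" for k, v in record.items())

def match_prob(rec1: dict, rec2: dict) -> float:
    """Score a record pair as a sequence-pair classification problem."""
    inputs = tok(serialize(rec1), serialize(rec2),
                 return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(match)
```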
- Named Entity Recognition via Machine Reading Comprehension: A Multi-Task Learning Approach [50.12455129619845] (2023-09-20)
Named Entity Recognition (NER) aims to extract and classify entity mentions in the text into pre-defined types.
We propose to incorporate the label dependencies among entity types into a multi-task learning framework for better MRC-based NER.
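For context, MRC-based NER poses one natural-language query per entity type and reduces extraction to span prediction, as in this minimal sketch. The queries and the off-the-shelf QA model are assumptions; the paper's contribution, modeling label dependencies across these per-type tasks, is not reproduced here.

```python
from transformers import pipeline

# Off-the-shelf extractive-QA model as a stand-in span predictor.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

# One natural-language query per entity type (queries are assumptions).
TYPE_QUERIES = {
    "PER": "Which person is mentioned in the text?",
    "ORG": "Which organization is mentioned in the text?",
}

def mrc_ner(text: str) -> dict[str, str]:
    """Reduce NER to span prediction: ask one question per entity type."""
    return {label: qa(question=query, context=text)["answer"]
            for label, query in TYPE_QUERIES.items()}
```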
- Multi-modal Contrastive Representation Learning for Entity Alignment [57.92705405276161] (2022-09-02)
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs.
We propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model.
In particular, MCLEA first learns individual representations from multiple modalities, and then performs contrastive learning to jointly model intra-modal and inter-modal interactions.
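An InfoNCE-style sketch of those two interactions, under assumptions about batch construction and temperature (the actual MCLEA losses differ in detail):

```python
import torch
import torch.nn.functional as F

def info_nce(q: torch.Tensor, k: torch.Tensor, tau: float = 0.1):
    """q, k: (B, d); matching rows are positives, other rows negatives."""
    logits = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).T / tau
    labels = torch.arange(q.size(0))
    return F.cross_entropy(logits, labels)

def mclea_loss(modal_embs_g1, modal_embs_g2, joint_g1, joint_g2):
    # Intra-modal: pull each modality's aligned pairs together across KGs.
    intra = sum(info_nce(m1, m2)
                for m1, m2 in zip(modal_embs_g1, modal_embs_g2))
    # Inter-modal: pull each joint embedding toward its single-modality views.
    inter = sum(info_nce(joint_g1, m) for m in modal_embs_g1)
    inter += sum(info_nce(joint_g2, m) for m in modal_embs_g2)
    return intra + inter
```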
- Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval [152.3504607706575] (2022-06-17)
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M dataset and define two real, practical instance-level retrieval tasks.
We then train a more effective cross-modal model that adaptively incorporates key concept information from the multi-modal data.
- MGA-VQA: Multi-Granularity Alignment for Visual Question Answering [75.55108621064726] (2022-01-25)
Learning to answer visual questions is challenging because the multi-modal inputs belong to two different feature spaces.
We propose a Multi-Granularity Alignment architecture for the Visual Question Answering task (MGA-VQA).
Our model splits alignment into different levels to learn better correlations without requiring additional data or annotations.
- Informed Multi-context Entity Alignment [27.679124991733907] (2022-01-02)
We propose an Informed Multi-context Entity Alignment (IMEA) model to address these issues.
In particular, we introduce Transformer to flexibly capture the relation, path, and neighborhood contexts.
Holistic reasoning is used to estimate alignment probabilities based on both embedding similarity and relation/entity functionality.
Results on several benchmark datasets demonstrate the superiority of our IMEA model compared with existing state-of-the-art entity alignment methods.
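A rough sketch of such holistic reasoning, with an assumed combination rule: mix embedding similarity with functionality evidence from relations the candidate pair shares. Both the functionality definition and the sigmoid mixing below are illustrative assumptions.

```python
import torch

def functionality(n_unique_heads: torch.Tensor, n_triples: torch.Tensor):
    """Functionality of each relation: fraction of its triples with a
    distinct head, i.e. how close the relation is to a function."""
    return n_unique_heads / n_triples.clamp(min=1)

def alignment_prob(emb_sim: torch.Tensor, func_evidence: torch.Tensor,
                   beta: float = 0.3) -> torch.Tensor:
    """emb_sim: (N, M) cosine similarities between candidate pairs;
    func_evidence: (N, M) average functionality of relations the pair
    shares. Higher-functionality shared relations are stronger evidence."""
    return torch.sigmoid((1.0 - beta) * emb_sim + beta * func_evidence)
```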