An Effective and Efficient Time-aware Entity Alignment Framework via
Two-aspect Three-view Label Propagation
- URL: http://arxiv.org/abs/2307.06013v1
- Date: Wed, 12 Jul 2023 08:51:20 GMT
- Title: An Effective and Efficient Time-aware Entity Alignment Framework via
Two-aspect Three-view Label Propagation
- Authors: Li Cai, Xin Mao, Youshao Xiao, Changxu Wu, Man Lan
- Abstract summary: Entity alignment (EA) aims to find equivalent entity pairs between different knowledge graphs (KGs).
We propose an effective and efficient non-neural EA framework between TKGs, namely LightTEA.
Our proposed model significantly outperforms the SOTA methods for EA between TKGs, while LightTEA takes at most a few dozen seconds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entity alignment (EA) aims to find the equivalent entity pairs between
different knowledge graphs (KGs), which is crucial to promote knowledge fusion.
With the wide use of temporal knowledge graphs (TKGs), time-aware EA (TEA)
methods have emerged to enhance EA. Existing TEA models are based on Graph
Neural Networks (GNNs) and achieve state-of-the-art (SOTA) performance, but
they are difficult to transfer to large-scale TKGs due to the scalability
issues of GNNs. In this paper, we propose an effective and efficient non-neural EA
framework between TKGs, namely LightTEA, which consists of four essential
components: (1) Two-aspect Three-view Label Propagation, (2) Sparse Similarity
with Temporal Constraints, (3) Sinkhorn Operator, and (4) Temporal Iterative
Learning. All of these modules work together to improve the performance of EA
while reducing the time consumption of the model. Extensive experiments on
public datasets indicate that our proposed model significantly outperforms the
SOTA methods for EA between TKGs, and that LightTEA takes at most a few dozen
seconds, no more than 10% of the runtime of the most efficient TEA method.
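The abstract names a Sinkhorn Operator as one of LightTEA's four components but gives no implementation detail. As background, Sinkhorn iteration is a standard way to turn an entity-similarity matrix into an approximately doubly-stochastic assignment matrix, from which alignments can be read off row-wise. The sketch below is a generic illustration of that technique, not the paper's actual code; the function name `sinkhorn`, the temperature `tau`, the iteration count, and the toy similarity matrix are all assumptions for demonstration.

```python
import numpy as np

def sinkhorn(sim: np.ndarray, n_iters: int = 20, tau: float = 0.05) -> np.ndarray:
    """Approximately project a similarity matrix onto the set of
    doubly-stochastic matrices via alternating row/column normalization."""
    # Temperature-scaled exponential keeps all entries strictly positive.
    m = np.exp(sim / tau)
    for _ in range(n_iters):
        m = m / m.sum(axis=1, keepdims=True)  # normalize rows to sum to 1
        m = m / m.sum(axis=0, keepdims=True)  # normalize columns to sum to 1
    return m

# Toy example: similarity between 3 source and 3 target entities.
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.0, 0.1, 0.7]])
assign = sinkhorn(sim)
pred = assign.argmax(axis=1)  # predicted target entity for each source entity
```

In an EA setting, the row-wise argmax of the resulting matrix gives a one-to-one-leaning alignment, which is why Sinkhorn-style operators are a common post-processing step over raw embedding similarities.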
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve 1.45 - 9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial in enhancing holistically cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes with pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z)
- Unsupervised Entity Alignment for Temporal Knowledge Graphs [24.830107011195302]
We present DualMatch, which fuses temporal and relational information for entity alignment.
It is able to perform EA on TKGs with or without supervision, due to its capability of effectively capturing temporal information.
Experiments on three real-world TKG datasets offer the insight that DualMatch outperforms the state-of-the-art methods in terms of H@1 by 2.4% - 10.7%.
arXiv Detail & Related papers (2023-02-01T23:03:22Z)
- Guiding Neural Entity Alignment with Compatibility [32.22210683891481]
We argue that different entities within one Knowledge Graph should have compatible counterparts in the other KG.
Making compatible predictions should be one of the goals of training an EA model along with fitting the labelled data.
We devise a training framework by addressing three problems: (1) how to measure the compatibility of an EA model; (2) how to inject the property of being compatible into an EA model; and (3) how to optimise parameters of the compatibility model.
arXiv Detail & Related papers (2022-11-29T00:05:08Z)
- LightEA: A Scalable, Robust, and Interpretable Entity Alignment Framework via Three-view Label Propagation [27.483109233276632]
We argue that existing GNN-based EA methods inherit the inborn defects from their neural network lineage: weak scalability and poor interpretability.
We propose a non-neural EA framework -- LightEA, consisting of three efficient components: (i) Random Orthogonal Label Generation, (ii) Three-view Label Propagation, and (iii) Sparse Sinkhorn Iteration.
According to the extensive experiments on public datasets, LightEA has impressive scalability, robustness, and interpretability.
arXiv Detail & Related papers (2022-10-19T10:07:08Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- Self-Promoted Supervision for Few-Shot Transformer [178.52948452353834]
Self-promoted sUpervisioN (SUN) is a few-shot learning framework for vision transformers (ViTs).
SUN pretrains the ViT on the few-shot learning dataset and then uses it to generate individual location-specific supervision for guiding each patch token.
Experiments show that SUN using ViTs significantly surpasses other few-shot learning frameworks with ViTs and is the first to achieve higher performance than CNN-based state-of-the-art methods.
arXiv Detail & Related papers (2022-03-14T12:53:27Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- A Simple But Powerful Graph Encoder for Temporal Knowledge Graph Completion [13.047205680129094]
We propose a simple but powerful graph encoder TARGCN for temporal knowledge graphs (TKGs).
Our model can achieve a more than 42% relative improvement on GDELT dataset compared with the state-of-the-art model.
It outperforms the strongest baseline on ICEWS05-15 dataset with around 18.5% fewer parameters.
arXiv Detail & Related papers (2021-12-14T23:30:42Z)
- Are Negative Samples Necessary in Entity Alignment? An Approach with High Performance, Scalability and Robustness [26.04006507181558]
We propose a novel EA method with three new components to enable high Performance, high Scalability, and high Robustness.
We conduct detailed experiments on several public datasets to examine the effectiveness and efficiency of our proposed method.
arXiv Detail & Related papers (2021-08-11T15:20:41Z)
- How Knowledge Graph and Attention Help? A Quantitative Analysis into Bag-level Relation Extraction [66.09605613944201]
We quantitatively evaluate the effect of attention and Knowledge Graph on bag-level relation extraction (RE).
We find that (1) higher attention accuracy may lead to worse performance as it may harm the model's ability to extract entity mention features; (2) the performance of attention is largely influenced by various noise distribution patterns; and (3) KG-enhanced attention indeed improves RE performance, while not through enhanced attention but by incorporating entity prior.
arXiv Detail & Related papers (2021-07-26T09:38:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.