i-Align: an interpretable knowledge graph alignment model
- URL: http://arxiv.org/abs/2308.13755v1
- Date: Sat, 26 Aug 2023 03:48:52 GMT
- Title: i-Align: an interpretable knowledge graph alignment model
- Authors: Bayu Distiawan Trisedya, Flora D Salim, Jeffrey Chan, Damiano Spina,
Falk Scholer, Mark Sanderson
- Abstract summary: Knowledge graphs (KGs) are becoming essential resources for many downstream applications, but their incompleteness limits their potential.
One of the strategies to address this problem is KG alignment, i.e., forming a more complete KG by merging two or more KGs.
This paper proposes i-Align, an interpretable KG alignment model.
- Score: 35.13345855672941
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graphs (KGs) are becoming essential resources for many downstream
applications. However, their incompleteness may limit their potential. Thus,
continuous curation is needed to mitigate this problem. One of the strategies
to address this problem is KG alignment, i.e., forming a more complete KG by
merging two or more KGs. This paper proposes i-Align, an interpretable KG
alignment model. Unlike the existing KG alignment models, i-Align provides an
explanation for each alignment prediction while maintaining high alignment
performance. Experts can use the explanation to check the correctness of the
alignment prediction. Thus, the high quality of a KG can be maintained during
the curation process (e.g., the merging process of two KGs). To this end, a
novel Transformer-based Graph Encoder (Trans-GE) is proposed as a key component
of i-Align for aggregating information from entities' neighbors (structures).
Trans-GE uses Edge-gated Attention that combines the adjacency matrix and the
self-attention matrix to learn a gating mechanism to control the information
aggregation from the neighboring entities. It also uses historical embeddings,
allowing Trans-GE to be trained over mini-batches, or smaller sub-graphs, to
address the scalability issue when encoding a large KG. Another component of
i-Align is a Transformer encoder for aggregating entities' attributes. This
way, i-Align can generate explanations in the form of a set of the most
influential attributes/neighbors based on attention weights. Extensive
experiments are conducted to show the power of i-Align. The experiments include
several aspects, such as the model's effectiveness for aligning KGs, the
quality of the generated explanations, and its practicality for aligning large
KGs. The results show the effectiveness of i-Align in these aspects.
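The abstract's description of Edge-gated Attention, combining the adjacency matrix with the self-attention matrix to gate neighbor aggregation, can be illustrated with a minimal NumPy sketch. The exact gating formula below (a sigmoid over the sum of attention logits and adjacency) is an assumption for illustration only; the abstract does not specify how the two matrices are combined:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def edge_gated_attention(X, A, Wq, Wk, Wv):
    """One attention head where the KG adjacency matrix gates the
    self-attention scores, so information flows mainly between
    neighboring entities.

    X : (n, d) entity embeddings
    A : (n, n) adjacency matrix (1.0 where an edge exists)
    Wq, Wk, Wv : (d, d) projection matrices
    """
    d = Wq.shape[1]
    # Standard scaled dot-product self-attention logits.
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
    # Hypothetical gate: sigmoid over attention logits plus adjacency,
    # biasing the gate open for connected entity pairs.
    gate = 1.0 / (1.0 + np.exp(-(scores + A)))
    attn = softmax(scores, axis=-1) * gate
    attn = attn / attn.sum(axis=-1, keepdims=True)  # renormalize rows
    # Return aggregated messages and the attention weights, which
    # i-Align-style models can surface as per-neighbor explanations.
    return attn @ (X @ Wv), attn
```

Returning the attention weights alongside the aggregated messages mirrors how the paper derives explanations: the most influential neighbors are those with the largest weights in each row.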
Related papers
- GLTW: Joint Improved Graph Transformer and LLM via Three-Word Language for Knowledge Graph Completion [52.026016846945424]
We propose a new method called GLTW, which encodes the structural information of KGs and merges it with Large Language Models.
Specifically, we introduce an improved Graph Transformer (iGT) that effectively encodes subgraphs with both local and global structural information.
Also, we develop a subgraph-based multi-classification training objective, using all entities within KG as classification objects, to boost learning efficiency.
arXiv Detail & Related papers (2025-02-17T06:02:59Z)
- Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It incorporates high-order reasoning into a graph convolutional network, named HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering [65.62309538202771]
Knowledge Graphs (KGs) are symbolically structured storages of facts.
KG embedding contains concise data used in NLP tasks requiring implicit information about the real world.
We propose a memory-efficient KG embedding model, which yields SOTA-comparable performance on link prediction tasks and KG-based Question Answering.
arXiv Detail & Related papers (2022-04-22T10:47:03Z)
- Multilingual Knowledge Graph Completion with Self-Supervised Adaptive Graph Alignment [69.41986652911143]
We propose a novel self-supervised adaptive graph alignment (SS-AGA) method to predict missing facts in a knowledge graph (KG).
SS-AGA fuses all KGs as a whole graph by regarding alignment as a new edge type.
Experiments on the public multilingual DBPedia KG and newly-created industrial multilingual E-commerce KG empirically demonstrate the effectiveness of SS-AGA.
arXiv Detail & Related papers (2022-03-28T18:00:51Z)
- Sequence-to-Sequence Knowledge Graph Completion and Question Answering [8.207403859762044]
We show that an off-the-shelf encoder-decoder Transformer model can serve as a scalable and versatile KGE model.
We achieve this by posing KG link prediction as a sequence-to-sequence task and exchange the triple scoring approach taken by prior KGE methods with autoregressive decoding.
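The reformulation above, posing link prediction as a sequence-to-sequence task, amounts to verbalizing an incomplete triple as a source string and decoding the missing entity as text. The prompt format below is a hypothetical illustration, not the one used in the paper:

```python
def verbalize_query(head: str, relation: str) -> str:
    """Turn a tail-prediction query (head, relation, ?) into a source
    sequence; an encoder-decoder model autoregressively decodes the
    missing entity as plain text instead of scoring candidate triples."""
    return f"predict tail: {head} | {relation}"

# A known triple (h, r, t) becomes a (source, target) text pair
# for training the seq2seq model.
src = verbalize_query("Barack Obama", "born in")
tgt = "Honolulu"
```

Replacing triple scoring with decoding means the model never enumerates all entities, which is what makes the approach scalable to large KGs.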
arXiv Detail & Related papers (2022-03-19T13:01:49Z)
- Link-Intensive Alignment for Incomplete Knowledge Graphs [28.213397255810936]
In this work, we address the problem of aligning incomplete KGs with representation learning.
Our framework exploits two feature channels: transitivity-based and proximity-based.
The two feature channels are jointly learned to exchange important features between the input KGs.
Also, we develop a missing links detector that discovers and recovers the missing links during the training process.
arXiv Detail & Related papers (2021-12-17T00:41:28Z)
- Multi-modal Entity Alignment in Hyperbolic Space [13.789898717291251]
We propose a novel multi-modal entity alignment approach, Hyperbolic Multi-modal Entity Alignment (HMEA).
We first adopt the Hyperbolic Graph Convolutional Networks (HGCNs) to learn structural representations of entities.
We then combine the structure and visual representations in the hyperbolic space and use the aggregated embeddings to predict potential alignment results.
arXiv Detail & Related papers (2021-06-07T13:45:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.