DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention
Network
- URL: http://arxiv.org/abs/2108.09628v1
- Date: Sun, 22 Aug 2021 04:10:35 GMT
- Title: DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention
Network
- Authors: Junkang Wu, Wentao Shi, Xuezhi Cao, Jiawei Chen, Wenqiang Lei, Fuzheng
Zhang, Wei Wu and Xiangnan He
- Abstract summary: We propose a novel Disentangled Knowledge Graph Attention Network (DisenKGAT) for knowledge graph completion.
DisenKGAT uses both micro-disentanglement and macro-disentanglement to exploit the representations behind knowledge graphs.
The model is robust and flexible enough to adapt to various score functions.
- Score: 48.38954651216983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph completion (KGC) has become a focus of attention across the
deep learning community owing to its excellent contribution to numerous downstream
tasks. Although recent years have witnessed a surge of work on KGC, existing methods
are still insufficient to accurately capture complex relations, since they adopt
single and static representations. In this work, we propose a novel
Disentangled Knowledge Graph Attention Network (DisenKGAT) for KGC, which
leverages both micro-disentanglement and macro-disentanglement to exploit the
representations behind knowledge graphs (KGs). To achieve
micro-disentanglement, we put forward a novel relation-aware aggregation to
learn diverse component representations. For macro-disentanglement, we leverage
mutual information as a regularization to enhance independence among components.
With the assistance of disentanglement, our model is able to generate adaptive
representations with respect to the given scenario. Moreover, our model is robust
and flexible enough to adapt to various score functions. Extensive
experiments on public benchmark datasets validate the
superiority of DisenKGAT over existing methods in terms of both accuracy and
explainability.
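To make the two disentanglement ideas concrete, the sketch below shows one plausible reading in PyTorch: entity embeddings are split into K latent components, relation-aware neighbor messages are aggregated per component with attention (micro-disentanglement), and a simple independence penalty stands in where the mutual-information regularization would go (macro-disentanglement). All names, dimensions, and the exact aggregation and regularization choices are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch, assuming K latent components per entity and a simple
# segment-softmax attention; NOT the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledRelationAwareLayer(nn.Module):
    """Micro-disentanglement sketch: split each entity embedding into K
    components and aggregate relation-aware neighbor messages per component."""

    def __init__(self, dim: int, num_components: int):
        super().__init__()
        assert dim % num_components == 0
        self.K, self.d_k = num_components, dim // num_components
        # One message transform per component; the relation embedding
        # modulates each message (a guess at "relation-aware aggregation").
        self.msg = nn.ModuleList(
            [nn.Linear(2 * self.d_k, self.d_k, bias=False) for _ in range(self.K)]
        )

    def forward(self, ent, rel, triples):
        # ent: [n_ent, K*d_k], rel: [n_rel, K*d_k], triples: [n_edge, 3] = (h, r, t)
        h, r, t = triples[:, 0], triples[:, 1], triples[:, 2]
        ent_k = ent.view(-1, self.K, self.d_k)
        rel_k = rel.view(-1, self.K, self.d_k)
        comps = []
        for k in range(self.K):
            # Relation-aware message from the tail entity toward the head.
            m = self.msg[k](torch.cat([ent_k[t, k], rel_k[r, k]], dim=-1))
            # Attention: relevance of the message to component k of the head.
            score = (ent_k[h, k] * m).sum(-1) / self.d_k ** 0.5
            e = torch.exp(score - score.max())
            denom = torch.zeros(ent.size(0)).index_add_(0, h, e) + 1e-9
            alpha = (e / denom[h]).unsqueeze(-1)
            agg = torch.zeros(ent.size(0), self.d_k).index_add_(0, h, alpha * m)
            comps.append(agg)
        out = torch.stack(comps, dim=1).view(ent.size(0), -1)
        return F.relu(out + ent)  # residual update, shape [n_ent, K*d_k]


def independence_penalty(ent: torch.Tensor, num_components: int) -> torch.Tensor:
    """Macro-disentanglement stand-in: the paper regularizes with mutual
    information; here a cheap cosine-similarity penalty between components
    merely marks where such a term would enter the training loss."""
    z = F.normalize(ent.view(ent.size(0), num_components, -1), dim=-1)
    sim = torch.einsum('nkd,nld->nkl', z, z)  # pairwise component similarity
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    return off_diag.abs().mean()


# Toy usage on random data (shapes only; hyper-parameters are arbitrary).
E, R, D, K = 100, 11, 64, 4
layer = DisentangledRelationAwareLayer(D, K)
triples = torch.randint(0, E, (500, 3))
triples[:, 1] = torch.randint(0, R, (500,))
ent_out = layer(torch.randn(E, D), torch.randn(R, D), triples)
reg = independence_penalty(ent_out, K)  # add to the KGC scoring loss, weighted
```

In this reading, the component-wise attention lets each entity emphasize different neighbors under different relations, which is what allows the adaptive, scenario-dependent representations described in the abstract.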
Related papers
- Beyond Entity Alignment: Towards Complete Knowledge Graph Alignment via Entity-Relation Synergy [14.459419325027612]
Knowledge Graph alignment aims to integrate knowledge from multiple sources to address the limitations of individual Knowledge Graphs.
Existing models primarily emphasize the linkage of cross-graph entities but overlook aligning relations across KGs.
We propose a novel Expectation-Maximization-based model, EREM, which iteratively optimizes both entity alignment and relation alignment.
arXiv Detail & Related papers (2024-07-25T03:40:09Z) - Redundancy-Free Self-Supervised Relational Learning for Graph Clustering [13.176413653235311]
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R$^2$FGC)
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Our experiments are performed on widely used benchmark datasets to validate the superiority of our R$^2$FGC over state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-09T06:18:50Z) - Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph
Completion [69.55700751102376]
Few-shot knowledge graph completion (FKGC) aims to predict missing facts for unseen relations with few-shot associated facts.
Existing FKGC methods are based on metric learning or meta-learning, which often suffer from the out-of-distribution and overfitting problems.
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC)
arXiv Detail & Related papers (2023-04-17T11:42:28Z) - Efficient Relation-aware Neighborhood Aggregation in Graph Neural Networks via Tensor Decomposition [4.041834517339835]
We propose a novel knowledge graph embedding model that incorporates tensor decomposition within the aggregation function of the Relational Graph Convolutional Network (R-GCN)
Our model enhances the representation of neighboring entities by employing projection matrices of a low-rank tensor defined by relation types (a generic sketch of this low-rank projection idea appears after this list).
We adopt a training strategy inspired by contrastive learning to relieve the training limitation of the 1-k-k encoder method inherent in handling vast graphs.
arXiv Detail & Related papers (2022-12-11T19:07:34Z) - GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z) - KRACL: Contrastive Learning with Graph Context Modeling for Sparse
Knowledge Graph Completion [37.92814873958519]
Knowledge Graph Embeddings (KGE) aim to map entities and relations to low-dimensional spaces and have become the de-facto standard for knowledge graph completion.
Most existing KGE methods suffer from the sparsity challenge, where it is harder to predict entities that appear less frequently in knowledge graphs.
We propose a novel framework to alleviate the widespread sparsity in KGs with graph context and contrastive learning.
arXiv Detail & Related papers (2022-08-16T09:17:40Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization
and Completion [99.47414073164656]
A comprehensive knowledge graph (KG) contains an instance-level entity graph and an ontology-level concept graph.
The two-view KG provides a testbed for models to "simulate" human abilities in knowledge abstraction, concretization, and completion.
We propose a unified KG benchmark by improving existing benchmarks in terms of dataset scale, task coverage, and difficulty.
arXiv Detail & Related papers (2020-04-28T16:21:57Z) - Generative Adversarial Zero-Shot Relational Learning for Knowledge
Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
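As referenced in the tensor-decomposition entry above, the sketch below illustrates the generic idea of building relation-specific projection matrices from a shared low-rank factorization, so the parameter count scales with the rank rather than with num_relations × dim × dim. The CP-style factorization, names, and shapes are assumptions for illustration and do not reproduce that paper's exact decomposition.

```python
# Generic low-rank relation projection sketch; shapes and the CP-style
# factorization are assumptions, not the cited paper's implementation.
import torch
import torch.nn as nn


class LowRankRelationProjection(nn.Module):
    """Relation-specific projections W_r = sum_c coeff[r, c] * outer(U[c], V[c]),
    applied without ever materializing the full dim x dim matrices."""

    def __init__(self, num_relations: int, dim: int, rank: int):
        super().__init__()
        self.rel_coeff = nn.Parameter(torch.randn(num_relations, rank) * 0.1)
        self.U = nn.Parameter(torch.randn(rank, dim) * 0.1)
        self.V = nn.Parameter(torch.randn(rank, dim) * 0.1)

    def forward(self, neighbor: torch.Tensor, rel_id: torch.Tensor) -> torch.Tensor:
        # neighbor: [n_edge, dim], rel_id: [n_edge]
        proj = neighbor @ self.V.t()              # project onto rank directions
        weighted = proj * self.rel_coeff[rel_id]  # relation-specific reweighting
        return weighted @ self.U                  # expand back to [n_edge, dim]


# Toy usage: project 500 neighbor embeddings under 11 relation types.
proj = LowRankRelationProjection(num_relations=11, dim=64, rank=8)
messages = proj(torch.randn(500, 64), torch.randint(0, 11, (500,)))  # [500, 64]
```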
This list is automatically generated from the titles and abstracts of the papers in this site.