Improving Hyper-Relational Knowledge Graph Completion
- URL: http://arxiv.org/abs/2104.08167v1
- Date: Fri, 16 Apr 2021 15:26:41 GMT
- Title: Improving Hyper-Relational Knowledge Graph Completion
- Authors: Donghan Yu and Yiming Yang
- Abstract summary: Hyper-relational KGs (HKGs) allow triplets to be associated with additional relation-entity pairs (a.k.a. qualifiers) to convey more complex information.
How to effectively and efficiently model the triplet-qualifier relationship for prediction tasks such as HKG completion is an open challenge for research.
This paper proposes to improve the best-performing method in HKG completion, namely STARE, by introducing two novel revisions.
- Score: 35.487553537419224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Different from traditional knowledge graphs (KGs) where facts are represented
as entity-relation-entity triplets, hyper-relational KGs (HKGs) allow triplets
to be associated with additional relation-entity pairs (a.k.a. qualifiers) to
convey more complex information. How to effectively and efficiently model the
triplet-qualifier relationship for prediction tasks such as HKG completion is
an open challenge for research. This paper proposes to improve the
best-performing method in HKG completion, namely STARE, by introducing two
novel revisions: (1) Replacing the computation-heavy graph neural network
module with light-weight entity/relation embedding processing techniques for
efficiency improvement without sacrificing effectiveness; (2) Adding a
qualifier-oriented auxiliary training task for boosting the prediction power of
our approach on HKG completion. The proposed approach consistently outperforms
STARE in our experiments on three benchmark datasets, with significantly
improved computational efficiency.
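To make the hyper-relational setting concrete, the sketch below scores a fact whose qualifiers modulate the main relation embedding, using a DistMult-style product as a stand-in for STARE's scoring and simple averaging in place of its graph neural network aggregation. All names, data, and the aggregation scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy vocabularies (hypothetical example data, not from the paper).
entities  = {"Einstein": 0, "Nobel_Prize": 1, "Physics": 2, "1921": 3}
relations = {"won": 0, "for_field": 1, "in_year": 2}

E = rng.normal(size=(len(entities), DIM))   # entity embeddings
R = rng.normal(size=(len(relations), DIM))  # relation embeddings

def qualified_relation(rel, qualifiers):
    """Fold qualifier (relation, entity) pairs into the main relation
    embedding by element-wise interaction and averaging -- a lightweight
    stand-in for GNN-based qualifier aggregation."""
    vec = R[relations[rel]].copy()
    for qr, qe in qualifiers:
        vec += R[relations[qr]] * E[entities[qe]]
    return vec / (1 + len(qualifiers))

def score(head, rel, tail, qualifiers=()):
    """DistMult-style score of a hyper-relational fact, using the
    qualifier-aware relation embedding."""
    r_q = qualified_relation(rel, qualifiers)
    return float(np.sum(E[entities[head]] * r_q * E[entities[tail]]))

# A hyper-relational fact: (Einstein, won, Nobel_Prize)
# with qualifiers {for_field: Physics, in_year: 1921}.
s = score("Einstein", "won", "Nobel_Prize",
          qualifiers=[("for_field", "Physics"), ("in_year", "1921")])
print(s)
```

In a completion model the score would be computed for every candidate tail entity and trained with a ranking or cross-entropy loss; the qualifier-oriented auxiliary task the paper adds would additionally predict masked qualifier entities.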
Related papers
- TD3: Tucker Decomposition Based Dataset Distillation Method for Sequential Recommendation [50.23504065567638]
This paper introduces TD3, a novel Dataset Distillation method within a meta-learning framework.
TD3 distills a fully expressive synthetic sequence summary from the original data.
An augmentation technique allows the learner to closely fit the synthetic summary, ensuring an accurate update of it in the outer loop.
arXiv Detail & Related papers (2025-02-05T03:13:25Z)
- Efficient Relational Context Perception for Knowledge Graph Completion [25.903926643251076]
Knowledge Graphs (KGs) provide a structured representation of knowledge but often suffer from challenges of incompleteness.
Previous knowledge graph embedding models are limited in their ability to capture expressive features.
We propose Triple Receptance Perception architecture to model sequential information, enabling the learning of dynamic context.
arXiv Detail & Related papers (2024-12-31T11:25:58Z)
- TwinCL: A Twin Graph Contrastive Learning Model for Collaborative Filtering [20.26347686022996]
We introduce a twin encoder in place of random augmentations, demonstrating the redundancy of traditional augmentation techniques.
Our proposed Twin Graph Contrastive Learning model -- TwinCL -- aligns positive pairs of user and item embeddings and the representations from the twin encoder.
Our theoretical analysis and experimental results show that the proposed model contributes to better recommendation accuracy and training efficiency.
arXiv Detail & Related papers (2024-09-27T22:31:08Z)
- Towards Effective Top-N Hamming Search via Bipartite Graph Contrastive Hashing [42.6340751096123]
We investigate the problem of hashing with Graph Convolutional Network for effective Top-N search.
Our findings indicate the learning effectiveness of incorporating hashing techniques within the exploration of bipartite graph receptive fields.
We propose Bipartite Graph Contrastive Hashing (BGCH+) to enhance the model performance.
arXiv Detail & Related papers (2024-08-17T16:21:32Z)
- A Relation-Interactive Approach for Message Passing in Hyper-relational Knowledge Graphs [0.0]
We propose a message-passing-based graph encoder with global relation structure awareness ability, which we call ReSaE.
Our experiments demonstrate that ReSaE achieves state-of-the-art performance on multiple link prediction benchmarks.
arXiv Detail & Related papers (2024-02-23T06:55:04Z)
- Data Augmentation for Traffic Classification [54.92823760790628]
Data Augmentation (DA) is a technique widely adopted in Computer Vision (CV) and Natural Language Processing (NLP) tasks.
DA has struggled to gain traction in networking contexts, particularly in Traffic Classification (TC) tasks.
arXiv Detail & Related papers (2024-01-19T15:25:09Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It combines high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- How Knowledge Graph and Attention Help? A Quantitative Analysis into Bag-level Relation Extraction [66.09605613944201]
We quantitatively evaluate the effect of attention and Knowledge Graphs on bag-level relation extraction (RE).
We find that (1) higher attention accuracy may lead to worse performance as it may harm the model's ability to extract entity mention features; (2) the performance of attention is largely influenced by various noise distribution patterns; and (3) KG-enhanced attention indeed improves RE performance, while not through enhanced attention but by incorporating entity prior.
arXiv Detail & Related papers (2021-07-26T09:38:28Z)
- Highly Efficient Knowledge Graph Embedding Learning with Orthogonal Procrustes Analysis [10.154836127889487]
Knowledge Graph Embeddings (KGEs) have been intensively explored in recent years due to their promise for a wide range of applications.
This paper proposes a simple yet effective KGE framework which can reduce the training time and carbon footprint by orders of magnitude.
arXiv Detail & Related papers (2021-04-10T03:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.