Search to Pass Messages for Temporal Knowledge Graph Completion
- URL: http://arxiv.org/abs/2210.16740v1
- Date: Sun, 30 Oct 2022 04:05:06 GMT
- Title: Search to Pass Messages for Temporal Knowledge Graph Completion
- Authors: Zhen Wang, Haotong Du, Quanming Yao, Xuelong Li
- Abstract summary: We propose to use neural architecture search (NAS) to design data-specific message-passing architectures for temporal knowledge graph (TKG) completion.
In particular, we develop a generalized framework to explore topological and temporal information in TKGs.
We adopt a search algorithm that trains a supernet structure by sampling a single path, enabling efficient, low-cost search.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Completing missing facts is a fundamental task for temporal knowledge graphs
(TKGs). Recently, graph neural network (GNN) based methods, which can
simultaneously explore topological and temporal information, have become the
state-of-the-art (SOTA) to complete TKGs. However, these studies are based on
hand-designed architectures and fail to explore the diverse topological and
temporal properties of TKGs. To address this issue, we propose to use neural
architecture search (NAS) to design data-specific message passing architecture
for TKG completion. In particular, we develop a generalized framework to
explore topological and temporal information in TKGs. Based on this framework,
we design an expressive search space to fully capture various properties of
different TKGs. Meanwhile, we adopt a search algorithm that trains a supernet
structure by sampling a single path, making the search efficient and low-cost.
We further conduct extensive experiments on three benchmark datasets. The
results show that the architectures found by our method achieve SOTA
performance. Moreover, the searched models implicitly reveal diverse
properties of different TKGs. Our code is released at
https://github.com/striderdu/SPA.
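The single-path supernet strategy described in the abstract can be illustrated with a minimal sketch. The candidate operation names and the training loop below are hypothetical stand-ins for illustration only, not the paper's actual SPA search space or implementation:

```python
import random

# Candidate message-passing operations for each component of the supernet.
# These names are illustrative, not the paper's actual search space.
CANDIDATE_OPS = {
    "aggregate": ["mean", "max", "attention"],
    "temporal":  ["time_decay", "time_embedding", "identity"],
    "combine":   ["sum", "concat_mlp", "gate"],
}

def sample_path(rng: random.Random) -> dict:
    """Sample a single architecture: one operation per component."""
    return {slot: rng.choice(ops) for slot, ops in CANDIDATE_OPS.items()}

def train_supernet(num_steps: int, seed: int = 0) -> list:
    """One-path-per-step training loop: each step activates (and would
    update the shared weights of) only the sampled path, so the cost per
    step is that of a single architecture, not the whole search space."""
    rng = random.Random(seed)
    visited = []
    for _ in range(num_steps):
        path = sample_path(rng)
        # ... a forward/backward pass through only these ops would go here ...
        visited.append(path)
    return visited

for path in train_supernet(num_steps=5):
    print(path)
```

After such training, each candidate path can be evaluated with the shared weights, and the best-scoring architecture is retrained from scratch.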
Related papers
- EM-DARTS: Hierarchical Differentiable Architecture Search for Eye Movement Recognition [54.99121380536659]
Eye movement biometrics have received increasing attention thanks to their potential for highly secure identification.
Deep learning (DL) models have recently been applied successfully to eye movement recognition.
However, the DL architecture is still determined by human prior knowledge.
We propose EM-DARTS, a hierarchical differentiable architecture search algorithm to automatically design the DL architecture for eye movement recognition.
arXiv Detail & Related papers (2024-09-22T13:11:08Z)
- Exploiting Large Language Models Capabilities for Question Answer-Driven Knowledge Graph Completion Across Static and Temporal Domains [8.472388165833292]
This paper introduces a new generative completion framework called Generative Subgraph-based KGC (GS-KGC).
GS-KGC employs a question-answering format to directly generate target entities, addressing the challenge of questions having multiple possible answers.
Our method generates negative samples using known facts to facilitate the discovery of new information.
arXiv Detail & Related papers (2024-08-20T13:13:41Z)
- Adaptive Path-Memory Network for Temporal Knowledge Graph Reasoning [25.84105067110878]
Temporal knowledge graph (TKG) reasoning aims to predict the future missing facts based on historical information.
We propose a novel architecture that models the relation features of TKGs, namely the aDAptivE path-MemOry Network (DaeMon).
DaeMon adaptively models the temporal path information between query subject and each object candidate across history time.
arXiv Detail & Related papers (2023-04-25T06:33:08Z)
- Few-Shot Inductive Learning on Temporal Knowledge Graphs using Concept-Aware Information [31.10140298420744]
We propose a few-shot out-of-graph (OOG) link prediction task for temporal knowledge graphs (TKGs).
We predict the missing entities from the links concerning unseen entities by employing a meta-learning framework.
Our model achieves superior performance on all three datasets.
arXiv Detail & Related papers (2022-11-15T14:23:07Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Edge-featured Graph Neural Architecture Search [131.4361207769865]
We propose Edge-featured Graph Neural Architecture Search to find the optimal GNN architecture.
Specifically, we design rich entity and edge updating operations to learn high-order representations.
We show EGNAS can search better GNNs with higher performance than current state-of-the-art human-designed and searched-based GNNs.
arXiv Detail & Related papers (2021-09-03T07:53:18Z) - Understanding and Accelerating Neural Architecture Search with
Training-Free and Theory-Grounded Metrics [117.4281417428145]
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS).
NAS has been explosively studied to automate the discovery of top-performer neural networks, but suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations.
We present a unified framework to understand and accelerate NAS, by disentangling "TEG" characteristics of searched networks.
arXiv Detail & Related papers (2021-08-26T17:52:07Z) - Rethinking Graph Neural Network Search from Message-passing [120.62373472087651]
This paper proposes Graph Neural Architecture Search (GNAS) with a novel search space.
We design Graph Neural Architecture Paradigm (GAP) with tree-topology computation procedure and two types of fine-grained atomic operations.
Experiments show that our GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
arXiv Detail & Related papers (2021-03-26T06:10:41Z) - T-GAP: Learning to Walk across Time for Temporal Knowledge Graph
Completion [13.209193437124881]
Temporal knowledge graphs (TKGs) inherently reflect the transient nature of real-world knowledge, as opposed to static knowledge graphs.
We propose T-GAP, a novel model for TKG completion that maximally utilizes both temporal information and graph structure in its encoder and decoder.
Our experiments demonstrate that T-GAP achieves superior performance against state-of-the-art baselines, and competently generalizes to queries with unseen timestamps.
arXiv Detail & Related papers (2020-12-19T04:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.