Tucker decomposition-based Temporal Knowledge Graph Completion
- URL: http://arxiv.org/abs/2011.07751v1
- Date: Mon, 16 Nov 2020 07:05:52 GMT
- Title: Tucker decomposition-based Temporal Knowledge Graph Completion
- Authors: Pengpeng Shao, Guohua Yang, Dawei Zhang, Jianhua Tao, Feihu Che, Tong Liu
- Abstract summary: We build a new tensor decomposition model for temporal knowledge graphs completion inspired by the Tucker decomposition of order 4 tensor.
We demonstrate that the proposed model is fully expressive and report state-of-the-art results for several public benchmarks.
- Score: 35.56360622521721
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graphs have been demonstrated to be an effective tool for numerous
intelligent applications. However, a large amount of valuable knowledge still
exists only implicitly in knowledge graphs. To enrich existing knowledge
graphs, many algorithms for link prediction and knowledge graph embedding have
been designed in recent years to infer new facts. But most of these studies
focus on static knowledge graphs and ignore the temporal information that
reflects the validity of knowledge. Developing models for temporal knowledge
graph completion is therefore an increasingly important task. In this paper, we
build a new tensor decomposition model for temporal knowledge graph completion
inspired by the Tucker decomposition of an order-4 tensor. We demonstrate that
the proposed model is fully expressive and report state-of-the-art results on
several public benchmarks. Additionally, we present several regularization
schemes and study their impact on the proposed model. Experimental studies on
three temporal datasets (i.e., ICEWS2014, ICEWS2005-15, GDELT) justify our
design and demonstrate that our model outperforms baselines by a clear margin
on the link prediction task.
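The core idea of an order-4 Tucker model can be sketched as follows: a quadruple (subject, relation, object, timestamp) is scored by contracting a shared core tensor with one embedding vector per mode. The names and dimensions below are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
import numpy as np

# Hypothetical embedding sizes (illustrative only, not the paper's settings).
d_e, d_r, d_t = 4, 3, 2  # entity, relation, and timestamp embedding dimensions

rng = np.random.default_rng(0)

# Order-4 core tensor W: one mode per element of the quadruple (s, r, o, t).
W = rng.normal(size=(d_e, d_r, d_e, d_t))

# Example embeddings for one (subject, relation, object, timestamp) quadruple.
e_s = rng.normal(size=d_e)   # subject entity embedding
w_r = rng.normal(size=d_r)   # relation embedding
e_o = rng.normal(size=d_e)   # object entity embedding
e_t = rng.normal(size=d_t)   # timestamp embedding

def tucker4_score(W, e_s, w_r, e_o, e_t):
    """Score a quadruple by contracting the core tensor with one vector
    per mode, i.e. W x_1 e_s x_2 w_r x_3 e_o x_4 e_t (a scalar)."""
    return np.einsum("ijkl,i,j,k,l->", W, e_s, w_r, e_o, e_t)

score = tucker4_score(W, e_s, w_r, e_o, e_t)
print(float(score))
```

In training, such a score would typically be passed through a loss (e.g. cross-entropy over candidate objects), with the core tensor and embeddings learned jointly; that machinery is omitted here for brevity.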
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- A Survey on Temporal Knowledge Graph: Representation Learning and Applications [16.447980641446602]
Temporal knowledge graph representation learning aims to learn low-dimensional vector embeddings for entities and relations in a knowledge graph.
We conduct a comprehensive survey of temporal knowledge graph representation learning and its applications.
arXiv Detail & Related papers (2024-03-02T16:21:45Z)
- Frameless Graph Knowledge Distillation [27.831929635701886]
We show how the graph knowledge supplied by the teacher is learned and digested by the student model via both algebra and geometry.
Our proposed model can generate learning accuracy identical to or even surpass the teacher model while maintaining the high speed of inference.
arXiv Detail & Related papers (2023-07-13T08:56:50Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic, and Multimodal [57.8455911689554]
Knowledge graph reasoning (KGR) aims to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs).
It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems.
arXiv Detail & Related papers (2022-12-12T08:40:04Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Temporal Knowledge Graph Reasoning with Low-rank and Model-agnostic Representations [1.8262547855491458]
We introduce Time-LowFER, a family of parameter-efficient and time-aware extensions of the low-rank tensor factorization model LowFER.
Noting several limitations in current approaches to represent time, we propose a cycle-aware time-encoding scheme for time features.
We implement our methods in a unified temporal knowledge graph embedding framework, focusing on time-sensitive data processing.
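A cycle-aware time encoding of the kind the summary above alludes to is commonly realized by mapping a timestamp onto the unit circle, so that the ends of a cycle (e.g. day 365 and day 1) land close together. The sketch below is an illustrative assumption of that general idea, not Time-LowFER's exact scheme.

```python
import numpy as np

def cyclic_time_features(day_of_year, period=365.25):
    """Map a timestamp onto the unit circle via sin/cos, so the encoding
    respects the cycle: the end of the period wraps around to its start."""
    angle = 2.0 * np.pi * (day_of_year / period)
    return np.array([np.sin(angle), np.cos(angle)])

# Days near the cycle boundary get nearby encodings ...
a = cyclic_time_features(1)
b = cyclic_time_features(364)
# ... even though their raw day numbers are far apart.
print(np.linalg.norm(a - b))
```

A plain scalar encoding would place day 1 and day 364 at opposite ends of the feature range; the circular embedding avoids that discontinuity.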
arXiv Detail & Related papers (2022-04-10T22:24:11Z)
- Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Disentangle-based Continual Graph Representation Learning [32.081943985875554]
Graph embedding (GE) methods embed nodes (and/or edges) in graph into a low-dimensional semantic space.
Existing GE models are not practical in real-world applications since they overlook the streaming nature of incoming data.
We propose a disentangle-based continual graph representation learning framework inspired by the human ability to learn procedural knowledge.
arXiv Detail & Related papers (2020-10-06T09:20:30Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.