DREAM: Adaptive Reinforcement Learning based on Attention Mechanism for
Temporal Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2304.03984v1
- Date: Sat, 8 Apr 2023 10:57:37 GMT
- Title: DREAM: Adaptive Reinforcement Learning based on Attention Mechanism for
Temporal Knowledge Graph Reasoning
- Authors: Shangfei Zheng, Hongzhi Yin, Tong Chen, Quoc Viet Hung Nguyen, Wei
Chen, Lei Zhao
- Abstract summary: We propose an adaptive reinforcement learning model based on an attention mechanism (DREAM) to predict missing elements in the future.
Experimental results demonstrate that DREAM outperforms state-of-the-art models on public datasets.
- Score: 46.16322824448241
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal knowledge graphs (TKGs) model the temporal evolution of events and
have recently attracted increasing attention. Since TKGs are intrinsically
incomplete, it is necessary to reason out missing elements. Although existing
TKG reasoning methods have the ability to predict missing future events, they
fail to generate explicit reasoning paths and lack explainability. As
reinforcement learning (RL) for multi-hop reasoning on traditional knowledge
graphs has started to show superior explainability and performance in recent
advances, opportunities have opened up for exploring RL techniques for TKG
reasoning. However, the performance of RL-based TKG reasoning methods is
limited due to: (1) lack of ability to capture temporal evolution and semantic
dependence jointly; (2) excessive reliance on manually designed rewards. To
overcome these challenges, we propose an adaptive reinforcement learning model
based on an attention mechanism (DREAM) to predict missing elements in the future.
Specifically, the model contains two components: (1) a multi-faceted attention
representation learning method that captures semantic dependence and temporal
evolution jointly; (2) an adaptive RL framework that conducts multi-hop
reasoning by adaptively learning the reward functions. Experimental results
demonstrate that DREAM outperforms state-of-the-art models on public datasets.
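
The abstract names DREAM's two components but gives no implementation detail. The snippet below is a minimal PyTorch-style sketch of how an attention-based state encoder and a policy with a learned (rather than hand-designed) reward head could be wired together; the class names, tensor shapes, and the single-layer reward head are illustrative assumptions, not the authors' architecture.

    # Hypothetical sketch of the two components described in the abstract:
    # (1) an attention-based representation of a query's temporal neighbourhood and
    # (2) a policy with a learned reward head in place of a hand-designed reward.
    # Names, dimensions, and the scoring scheme are assumptions, not DREAM's code.
    import torch
    import torch.nn as nn

    class MultiFacetedAttention(nn.Module):
        """Fuses semantic and temporal signals around a query entity (assumed design)."""
        def __init__(self, dim: int, num_heads: int = 4):
            super().__init__()
            self.time_proj = nn.Linear(1, dim)                   # embed relative timestamps
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, query_emb, neighbor_embs, neighbor_dts):
            # query_emb: (B, dim); neighbor_embs: (B, N, dim); neighbor_dts: (B, N, 1)
            keys = neighbor_embs + self.time_proj(neighbor_dts)  # inject temporal evolution
            state, _ = self.attn(query_emb.unsqueeze(1), keys, keys)
            return state.squeeze(1)                              # (B, dim) agent state

    class AdaptivePolicy(nn.Module):
        """Scores candidate edges and predicts a reward instead of using a fixed one."""
        def __init__(self, dim: int):
            super().__init__()
            self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
            self.reward_head = nn.Linear(dim, 1)                 # adaptive reward estimate

        def forward(self, state, candidate_embs):
            # candidate_embs: (B, K, dim) for K outgoing (relation, entity, time) actions
            expanded = state.unsqueeze(1).expand_as(candidate_embs)
            logits = self.scorer(torch.cat([expanded, candidate_embs], dim=-1)).squeeze(-1)
            return torch.softmax(logits, dim=-1), self.reward_head(state).squeeze(-1)

In this reading, the attention module supplies the RL agent's state, and the reward head is trained jointly with the policy so the reward signal adapts to each query instead of being fixed by hand.
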
Related papers
- CEGRL-TKGR: A Causal Enhanced Graph Representation Learning Framework for Improving Temporal Knowledge Graph Extrapolation Reasoning [1.6795461001108096]
We propose an innovative causal enhanced graph representation learning framework for temporal knowledge graph reasoning (TKGR).
We first disentangle the evolutionary representations of entities and relations in a temporal graph sequence into two distinct components, namely causal representations and confounding representations.
arXiv Detail & Related papers (2024-08-15T03:34:53Z)
- Large Language Models-guided Dynamic Adaptation for Temporal Knowledge Graph Reasoning [87.10396098919013]
Large Language Models (LLMs) have demonstrated extensive knowledge and remarkable proficiency in temporal reasoning.
We propose a Large Language Models-guided Dynamic Adaptation (LLM-DA) method for reasoning on Temporal Knowledge Graphs.
LLM-DA harnesses the capabilities of LLMs to analyze historical data and extract temporal logical rules.
arXiv Detail & Related papers (2024-05-23T04:54:37Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- MPLR: a novel model for multi-target learning of logical rules for knowledge graph reasoning [5.499688003232003]
We study the problem of learning logic rules for reasoning on knowledge graphs for completing missing factual triplets.
We propose a model called MPLR that improves existing models to make full use of the training data, and multi-target scenarios are considered.
Experimental results empirically demonstrate that our MPLR model outperforms state-of-the-art methods on five benchmark datasets.
arXiv Detail & Related papers (2021-12-12T09:16:00Z)
- TimeTraveler: Reinforcement Learning for Temporal Knowledge Graph Forecasting [12.963769928056253]
We propose the first reinforcement learning method for temporal knowledge graph forecasting. Specifically, the agent travels over historical knowledge graph snapshots to search for the answer.
Our method defines a relative time encoding function to capture timespan information, and we design a novel time-shaped reward based on the Dirichlet distribution to guide model learning.
We evaluate our method on the link prediction task at future timestamps (a toy sketch of the time encoding and reward shaping appears after this list).
arXiv Detail & Related papers (2021-09-09T08:41:01Z)
- Neural-Symbolic Commonsense Reasoner with Relation Predictors [36.03049905851874]
Commonsense reasoning aims to incorporate sets of commonsense facts, retrieved from Commonsense Knowledge Graphs (CKGs), to draw conclusions about ordinary situations.
This also results in large-scale, sparse knowledge graphs, where such a reasoning process is needed to predict relations between new events.
We present a neural-symbolic reasoner, which is capable of reasoning over large-scale dynamic CKGs.
arXiv Detail & Related papers (2021-05-14T08:54:25Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Explanation of Reinforcement Learning Model in Dynamic Multi-Agent System [3.754171077237216]
This paper reports novel work on generating verbal explanations for the behaviors of DRL agents.
A learning model is proposed to extend the implicit logic of generating verbal explanations to general situations.
Results show that the verbal explanations generated by both models improve users' subjective satisfaction with the interpretability of DRL systems.
arXiv Detail & Related papers (2020-08-04T13:21:19Z)
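
The TimeTraveler entry above mentions two concrete mechanisms: a relative time encoding of the timespan between the query time and a candidate fact, and a time-shaped reward derived from a Dirichlet distribution over historical time gaps. The snippet below makes both ideas concrete under assumed functional forms (a sinusoidal encoding, discretised gap bins, and scaling by the Dirichlet mean); it is an expository toy, not the paper's equations.

    # Toy illustration of TimeTraveler's two mechanisms as summarised above.
    # The sinusoidal encoding and the Dirichlet-mean shaping are assumed forms,
    # chosen only to make the idea concrete; they are not the paper's equations.
    import numpy as np

    def relative_time_encoding(delta_t: np.ndarray, dim: int = 16) -> np.ndarray:
        """Map time offsets between query and candidate facts to dense vectors."""
        freqs = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
        angles = np.outer(delta_t, freqs)                    # (N, dim/2)
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

    def time_shaped_reward(base_reward: float, gap_bin: int, alpha: np.ndarray) -> float:
        """Scale a binary answer reward by the Dirichlet mean probability of the
        observed time-gap bin (one plausible reading of a 'time-shaped reward')."""
        gap_prob = alpha[gap_bin] / alpha.sum()              # mean of a Dirichlet component
        return base_reward * (1.0 + gap_prob)

    # Example: encode gaps of 1, 7 and 30 days, then shape a correct-answer reward.
    enc = relative_time_encoding(np.array([1.0, 7.0, 30.0]))
    reward = time_shaped_reward(base_reward=1.0, gap_bin=1, alpha=np.array([5.0, 3.0, 2.0, 1.0]))

The intent of the shaping term is simply to reward answers whose time gap from the query is historically plausible more strongly than answers at unlikely gaps.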