Learning to Walk with Dual Agents for Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2112.12876v1
- Date: Thu, 23 Dec 2021 23:03:24 GMT
- Title: Learning to Walk with Dual Agents for Knowledge Graph Reasoning
- Authors: Denghui Zhang, Zixuan Yuan, Hao Liu, Xiaodong Lin, Hui Xiong
- Abstract summary: Multi-hop reasoning approaches only work well on short reasoning paths and tend to miss the target entity as the path length increases.
We propose a dual-agent reinforcement learning framework that trains two agents (GIANT and DWARF) to walk over a KG jointly and search for the answer collaboratively.
Our approach tackles the reasoning challenge in long paths by assigning one agent (GIANT) to search cluster-level paths quickly and provide stage-wise hints for the other agent (DWARF).
- Score: 20.232810842082674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph walking based on reinforcement learning (RL) has shown great success in
navigating an agent to automatically complete various reasoning tasks over an
incomplete knowledge graph (KG) by exploring multi-hop relational paths.
However, existing multi-hop reasoning approaches only work well on short
reasoning paths and tend to miss the target entity with the increasing path
length. This is undesirable for many reasoning tasks in real-world scenarios,
where short paths connecting the source and target entities are not available
in incomplete KGs, and thus the reasoning performances drop drastically unless
the agent is able to seek out more clues from longer paths. To address the
above challenge, in this paper, we propose a dual-agent reinforcement learning
framework, which trains two agents (GIANT and DWARF) to walk over a KG jointly
and search for the answer collaboratively. Our approach tackles the reasoning
challenge in long paths by assigning one agent (GIANT) to search
cluster-level paths quickly and provide stage-wise hints for the other agent
(DWARF). Finally, experimental results on several KG reasoning benchmarks show
that our approach can search answers more accurately and efficiently, and
outperforms existing RL-based methods for long path queries by a large margin.
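The coarse-to-fine idea behind the two agents can be illustrated with a toy sketch. This is not the authors' implementation: the KG, the clusters, and the shortcut by which GIANT derives its hint are all invented for illustration. GIANT supplies a cluster-level hint, and DWARF walks entity-level edges while preferring neighbors inside the hinted cluster.

```python
# Toy sketch (hypothetical KG and clusters, not the paper's method):
# GIANT works at cluster level and hands DWARF a stage-wise hint;
# DWARF walks entity-level edges biased toward the hinted cluster.

KG = {  # entity -> list of (relation, neighbor) edges
    "A": [("r1", "B"), ("r2", "C")],
    "B": [("r3", "D")],
    "C": [("r3", "E")],
    "D": [("r4", "T")],
    "E": [("r4", "X")],
}
CLUSTER = {"A": 0, "B": 1, "C": 2, "D": 1, "E": 2, "T": 1, "X": 2}

def giant_hint(target):
    # GIANT searches coarse cluster-level paths; as a shortcut, this toy
    # version simply reports the target's cluster as its hint.
    return CLUSTER[target]

def dwarf_walk(source, target, max_hops=4):
    # DWARF walks entity-level edges, preferring neighbors that lie in
    # GIANT's hinted cluster; otherwise it falls back to any neighbor.
    hint, current, path = giant_hint(target), source, [source]
    for _ in range(max_hops):
        if current == target:
            return path
        neighbors = [n for _, n in KG.get(current, [])]
        if not neighbors:
            break
        preferred = [n for n in neighbors if CLUSTER[n] == hint]
        current = (preferred or neighbors)[0]
        path.append(current)
    return path if current == target else None

print(dwarf_walk("A", "T"))  # the hint steers DWARF via B and D
```

In the actual framework both agents are trained jointly with RL rather than following fixed rules; the sketch only shows how a cluster-level signal can prune the entity-level search.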
Related papers
- Multi-Agent Target Assignment and Path Finding for Intelligent Warehouse: A Cooperative Multi-Agent Deep Reinforcement Learning Perspective [6.148164795916424]
Multi-agent target assignment and path planning (TAPF) are two key problems in intelligent warehouses.
We propose a method that solves target assignment and path planning simultaneously from the perspective of cooperative multi-agent deep reinforcement learning (RL).
Experimental results show that our method performs well in various task settings.
arXiv Detail & Related papers (2024-08-25T07:32:58Z)
- Walk Wisely on Graph: Knowledge Graph Reasoning with Dual Agents via Efficient Guidance-Exploration [6.137115941053124]
We propose a multi-hop reasoning model with dual agents based on hierarchical reinforcement learning (HRL).
FULORA tackles the above reasoning challenges by eFficient GUidance-ExpLORAtion between dual agents.
Experiments conducted on three real-world knowledge graph datasets demonstrate that FULORA outperforms RL-based baselines.
arXiv Detail & Related papers (2024-08-03T23:15:57Z) - PathFinder: Guided Search over Multi-Step Reasoning Paths [80.56102301441899]
We propose PathFinder, a tree-search-based reasoning path generation approach.
It enhances diverse branching and multi-hop reasoning through the integration of dynamic decoding.
Our model generalizes well to longer, unseen reasoning chains, reflecting similar complexities to beam search with large branching factors.
arXiv Detail & Related papers (2023-12-08T17:05:47Z) - Monte-Carlo Tree Search for Multi-Agent Pathfinding: Preliminary Results [60.4817465598352]
We introduce an original variant of Monte-Carlo Tree Search (MCTS) tailored to multi-agent pathfinding.
Specifically, we use individual paths to assist the agents with the goal-reaching behavior.
We also use a dedicated decomposition technique to reduce the branching factor of the tree search procedure.
arXiv Detail & Related papers (2023-07-25T12:33:53Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question
Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for the multi-hop KGQA task, which unifies retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z) - SQUIRE: A Sequence-to-sequence Framework for Multi-hop Knowledge Graph
Reasoning [21.53970565708247]
Given a triple query, the multi-hop reasoning task aims to give an evidential path that indicates the inference process.
We present SQUIRE, the first Sequence-to-sequence based multi-hop reasoning framework.
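The core framing of a sequence-to-sequence approach can be sketched in a few lines. This is a hypothetical encoding, not SQUIRE's actual tokenization: the special tokens, example entities, and relations are invented for illustration. The query becomes a source token sequence and the evidential path becomes the target sequence a seq2seq model is trained to generate.

```python
# Hypothetical sketch of seq2seq framing for multi-hop KG reasoning
# (illustrative token scheme, not the SQUIRE paper's actual one).

def encode_query(head, relation):
    # The triple query (head, relation, ?) becomes the source sequence.
    return ["<s>", head, relation, "</s>"]

def encode_path(hops):
    # An evidential path alternates relations and entities, ending at
    # the answer entity; this becomes the target sequence.
    tokens = []
    for relation, entity in hops:
        tokens += [relation, entity]
    return tokens

src = encode_query("Paris", "country_of")
tgt = encode_path([("capital_of", "France")])
print(src, tgt)  # a seq2seq model would learn the mapping src -> tgt
```

Framing reasoning as sequence generation lets the model emit paths of any length without an explicit graph-walking policy.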
arXiv Detail & Related papers (2022-01-17T04:22:54Z) - Distributed Heuristic Multi-Agent Path Finding with Communication [7.854890646114447]
Multi-Agent Path Finding (MAPF) is essential to large-scale robotic systems.
Recent methods have applied reinforcement learning (RL) to learn decentralized policies in partially observable environments.
This paper combines communication with deep Q-learning to provide a novel learning based method for MAPF.
arXiv Detail & Related papers (2021-06-21T18:50:58Z) - Language-guided Navigation via Cross-Modal Grounding and Alternate
Adversarial Learning [66.9937776799536]
The emerging vision-and-language navigation (VLN) problem aims at learning to navigate an agent to the target location in unseen photo-realistic environments.
The main challenges of VLN arise from two aspects: first, the agent needs to attend to the meaningful paragraphs of the language instruction corresponding to the dynamically varying visual environments.
We propose a cross-modal grounding module to equip the agent with a better ability to track the correspondence between the textual and visual modalities.
arXiv Detail & Related papers (2020-11-22T09:13:46Z) - Planning to Explore via Self-Supervised World Models [120.31359262226758]
Plan2Explore is a self-supervised reinforcement learning agent.
We present a new approach to self-supervised exploration and fast adaptation to new tasks.
Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods.
arXiv Detail & Related papers (2020-05-12T17:59:45Z) - Meta Reinforcement Learning with Autonomous Inference of Subtask
Dependencies [57.27944046925876]
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph.
Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference.
Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter.
arXiv Detail & Related papers (2020-01-01T17:34:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.