Learning Two-Step Hybrid Policy for Graph-Based Interpretable
Reinforcement Learning
- URL: http://arxiv.org/abs/2201.08520v1
- Date: Fri, 21 Jan 2022 03:06:24 GMT
- Title: Learning Two-Step Hybrid Policy for Graph-Based Interpretable
Reinforcement Learning
- Authors: Tongzhou Mu, Kaixiang Lin, Feiyang Niu, Govind Thattai
- Abstract summary: We present a two-step hybrid reinforcement learning (RL) policy that is designed to generate interpretable and robust hierarchical policies on the RL problem with graph-based input.
This two-step hybrid policy presents human-friendly interpretations and achieves better performance in terms of generalization and robustness.
- Score: 7.656272344163665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a two-step hybrid reinforcement learning (RL) policy that is
designed to generate interpretable and robust hierarchical policies on the RL
problem with graph-based input. Unlike prior deep reinforcement learning
policies parameterized by an end-to-end black-box graph neural network, our
approach disentangles the decision-making process into two steps. The first
step is a simplified classification problem that maps the graph input to an
action group where all actions share a similar semantic meaning. The second
step implements a sophisticated rule-miner that conducts explicit one-hop
reasoning over the graph and identifies decisive edges in the graph input
without the necessity of heavy domain knowledge. This two-step hybrid policy
presents human-friendly interpretations and achieves better performance in
terms of generalization and robustness. Extensive experimental studies on four
levels of complex text-based games have demonstrated the superiority of the
proposed method compared to the state-of-the-art.
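The two-step decomposition described in the abstract can be sketched in a few lines. The graph encoding, the toy classifier, and the mined rules below are illustrative stand-ins, not the paper's learned components.

```python
# Hedged sketch of a two-step hybrid policy over graph input.
# The paper learns both the classifier and the rule-miner; here they
# are hypothetical hand-written stubs to show the control flow.

def step1_action_group(graph, classifier):
    """Step 1: a simplified classification that maps the graph input
    to an action group whose actions share a similar semantic meaning."""
    return classifier(graph)

def step2_rule_reasoning(graph, group, rules):
    """Step 2: explicit one-hop reasoning -- scan the graph's edges and
    fire the first mined rule for this action group, which identifies
    the decisive edge and grounds the concrete action."""
    for src, rel, dst in graph["edges"]:
        action = rules.get((group, rel))
        if action is not None:
            return action(src, dst)
    return "wait"  # fallback when no rule matches

def two_step_policy(graph, classifier, rules):
    group = step1_action_group(graph, classifier)
    return step2_rule_reasoning(graph, group, rules)

# Toy text-game-like state: the classifier picks a group from a simple
# graph statistic, and one mined rule maps an edge type to an action.
toy_graph = {"edges": [("player", "holds", "key"), ("key", "opens", "door")]}
toy_classifier = lambda g: "interact" if g["edges"] else "explore"
toy_rules = {("interact", "opens"): lambda src, dst: f"use {src} on {dst}"}

print(two_step_policy(toy_graph, toy_classifier, toy_rules))
# -> use key on door
```

The point of the split is that step 2's decision is auditable: the returned action names the decisive edge it fired on, rather than emerging from an end-to-end black-box GNN.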
Related papers
- Fast State-Augmented Learning for Wireless Resource Allocation with Dual Variable Regression [83.27791109672927]
We show how a state-augmented graph neural network (GNN) parametrization for the resource allocation policy circumvents the drawbacks of the ubiquitous dual subgradient methods. Lagrangian-maximizing state-augmented policies are learned during the offline training phase. We prove a convergence result and an exponential probability bound on the excursions of the dual function (iterate) optimality gaps.
arXiv Detail & Related papers (2025-06-23T15:20:58Z) - Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z) - Graph-to-Vision: Multi-graph Understanding and Reasoning using Vision-Language Models [10.813015912529936]
Vision-Language Models (VLMs) have demonstrated exceptional cross-modal relational reasoning capabilities and generalization capacities.
Our benchmark encompasses four graph categories: knowledge graphs, flowcharts, mind maps, and route maps, with each graph group accompanied by three progressively challenging instruction-response pairs.
This study not only addresses the underexplored evaluation gap in multi-graph reasoning for VLMs but also empirically validates their superior generalization in graph-structured learning.
arXiv Detail & Related papers (2025-03-27T12:20:37Z) - Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z) - Balanced Multi-Relational Graph Clustering [5.531383184058319]
Multi-relational graph clustering has demonstrated remarkable success in uncovering underlying patterns in complex networks.
Our empirical study finds the pervasive presence of imbalance in real-world graphs, which in principle contradicts the motivation of alignment.
We propose Balanced Multi-Relational Graph Clustering (BMGC), comprising unsupervised dominant view mining and dual signals guided representation learning.
arXiv Detail & Related papers (2024-07-23T22:11:13Z) - Domain Adaptive Graph Classification [0.0]
We introduce Dual Adversarial Graph Representation Learning (DAGRL), which explores the graph topology from dual branches and mitigates domain discrepancies via dual adversarial learning.
Our approach incorporates adaptive perturbations into the dual branches, which align the source and target distribution to address domain discrepancies.
arXiv Detail & Related papers (2023-12-21T02:37:56Z) - Learning High-level Semantic-Relational Concepts for SLAM [10.528810470934781]
We propose an algorithm for learning high-level semantic-relational concepts that can be inferred from the low-level factor graph.
We validate our method in both simulated and real datasets demonstrating improved performance over two baseline approaches.
arXiv Detail & Related papers (2023-09-30T14:54:31Z) - From Cluster Assumption to Graph Convolution: Graph-based Semi-Supervised Learning Revisited [51.24526202984846]
Graph-based semi-supervised learning (GSSL) has long been a hot research topic.
Graph convolutional networks (GCNs) have become the predominant technique owing to their promising performance.
arXiv Detail & Related papers (2023-09-24T10:10:21Z) - A Survey of Imbalanced Learning on Graphs: Problems, Techniques, and
Future Directions [64.84521350148513]
Graphs represent interconnected structures prevalent in a myriad of real-world scenarios.
Effective graph analytics, such as graph learning methods, enables users to gain profound insights from graph data.
However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce.
This necessitates the emerging field of imbalanced learning on graphs, which aims to correct these data distribution skews for more accurate and representative learning outcomes.
arXiv Detail & Related papers (2023-08-26T09:11:44Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - Counterfactual Intervention Feature Transfer for Visible-Infrared Person
Re-identification [69.45543438974963]
We find graph-based methods in the visible-infrared person re-identification task (VI-ReID) suffer from bad generalization because of two issues.
The well-trained input features weaken the learning of graph topology, so the model fails to generalize well during inference.
We propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems.
arXiv Detail & Related papers (2022-08-01T16:15:31Z) - Pairwise Half-graph Discrimination: A Simple Graph-level Self-supervised
Strategy for Pre-training Graph Neural Networks [17.976090901276905]
We propose a simple and effective self-supervised pre-training strategy named Pairwise Half-graph Discrimination (PHD), which explicitly pre-trains a graph neural network at the graph level.
arXiv Detail & Related papers (2021-10-26T10:51:13Z) - Compositional Reinforcement Learning from Logical Specifications [21.193231846438895]
Recent approaches automatically generate a reward function from a given specification and use a suitable reinforcement learning algorithm to learn a policy.
We develop a compositional learning approach, called DiRL, that interleaves high-level planning and reinforcement learning.
Our approach uses a Dijkstra-style planning algorithm to compute a high-level plan in the graph, then applies reinforcement learning to learn a neural network policy for each edge (sub-task) of that plan.
arXiv Detail & Related papers (2021-06-25T22:54:28Z)
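As a rough illustration of the DiRL-style decomposition in the last entry, a Dijkstra-style planner can compute a high-level plan over an abstract graph whose edges correspond to sub-tasks. The task graph and edge costs below are hypothetical, and the per-edge RL policies are omitted.

```python
# Dijkstra-style planning over a sub-task graph: each edge of the
# returned plan would be handed to a separately learned RL policy.
import heapq

def dijkstra_plan(graph, start, goal):
    """Return the minimum-cost sequence of edges (sub-tasks) from start
    to goal; graph maps node -> list of (neighbor, cost) pairs."""
    frontier = [(0.0, start, [])]   # (cost so far, node, edge path)
    best = {}                       # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if best.get(node, float("inf")) <= cost:
            continue                # already expanded more cheaply
        best[node] = cost
        for nxt, c in graph.get(node, []):
            heapq.heappush(frontier, (cost + c, nxt, path + [(node, nxt)]))
    return None                     # goal unreachable

# Toy specification graph: each edge is a sub-task an RL policy solves.
spec = {"init": [("room_a", 1.0), ("room_b", 3.0)],
        "room_a": [("goal", 2.0)],
        "room_b": [("goal", 1.0)]}
print(dijkstra_plan(spec, "init", "goal"))
# -> [('init', 'room_a'), ('room_a', 'goal')]
```

The planner only orders the sub-tasks; in a DiRL-like setup, edge costs would themselves be estimated from how reliably each sub-task policy succeeds.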
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.