Deep Explainable Relational Reinforcement Learning: A Neuro-Symbolic Approach
- URL: http://arxiv.org/abs/2304.08349v2
- Date: Fri, 14 Jul 2023 07:01:31 GMT
- Title: Deep Explainable Relational Reinforcement Learning: A Neuro-Symbolic Approach
- Authors: Rishi Hazra and Luc De Raedt
- Abstract summary: We propose Deep Explainable Relational Reinforcement Learning (DERRL), a framework that exploits the best of both -- neural and symbolic worlds.
DERRL combines relational representations and constraints from symbolic planning with deep learning to extract interpretable policies.
These policies are in the form of logical rules that explain how each decision (or action) is arrived at.
- Score: 18.38878415765146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite numerous successes in Deep Reinforcement Learning (DRL), the learned
policies are not interpretable. Moreover, since DRL does not exploit symbolic
relational representations, it has difficulties in coping with structural
changes in its environment (such as increasing the number of objects).
Relational Reinforcement Learning, on the other hand, inherits the relational
representations from symbolic planning to learn reusable policies. However, it
has so far been unable to scale up and exploit the power of deep neural
networks. We propose Deep Explainable Relational Reinforcement Learning
(DERRL), a framework that exploits the best of both -- neural and symbolic
worlds. By resorting to a neuro-symbolic approach, DERRL combines relational
representations and constraints from symbolic planning with deep learning to
extract interpretable policies. These policies are in the form of logical rules
that explain how each decision (or action) is arrived at. Through several
experiments, in setups like the Countdown Game, Blocks World, Gridworld, and
Traffic, we show that the policies learned by DERRL can be applied to different
configurations and contexts, hence generalizing to environmental modifications.
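For intuition, here is a minimal sketch of what a relational, rule-based policy of this kind can look like for a toy Blocks World state, where the satisfied rule body doubles as the explanation of the chosen action. The predicates, the rule, and the encoding are illustrative assumptions and are not DERRL's actual representation or output.

```python
# A hypothetical relational policy for a toy Blocks World, written as a single
# first-order rule:  move(X, Y) :- clear(X), clear(Y), X != Y, X != table.
# Predicate and rule names are illustrative, not DERRL's learned rules.
from itertools import permutations

# Relational state as a set of ground facts: block a is on b, b is on the table.
state = {("on", "a", "b"), ("on", "b", "table"),
         ("clear", "a"), ("clear", "table")}

def holds(fact, state):
    """A ground literal is true iff it appears in the state."""
    return fact in state

def applicable_actions(state, objects=("a", "b", "table")):
    """Ground the rule over all object pairs and keep the bindings whose
    body literals hold; those literals serve as the action's explanation."""
    actions = []
    for x, y in permutations(objects, 2):
        if x != "table" and holds(("clear", x), state) and holds(("clear", y), state):
            actions.append((("move", x, y), [("clear", x), ("clear", y)]))
    return actions

for action, explanation in applicable_actions(state):
    print(action, "derived from", explanation)
# -> ('move', 'a', 'table') derived from [('clear', 'a'), ('clear', 'table')]
```

In DERRL itself such rules are extracted by learning rather than hand-written; the sketch only illustrates how a satisfied rule body can explain how a decision is arrived at.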
Related papers
- BlendRL: A Framework for Merging Symbolic and Neural Policy Learning [23.854830898003726]
BlendRL is a neuro-symbolic RL framework that integrates both paradigms within RL agents, which use mixtures of logic and neural policies.
We empirically demonstrate that BlendRL agents outperform both neural and symbolic baselines in standard Atari environments.
We analyze the interaction between neural and symbolic policies, illustrating how their hybrid use helps agents overcome each other's limitations.
arXiv Detail & Related papers (2024-10-15T15:24:20Z)
- End-to-End Neuro-Symbolic Reinforcement Learning with Textual Explanations [15.530907808235945]
We present a neuro-symbolic framework for jointly learning structured states and symbolic policies.
We design a pipeline to prompt GPT-4 to generate textual explanations for the learned policies and decisions.
We verify the efficacy of our approach on nine Atari tasks and present GPT-generated explanations for policies and decisions.
arXiv Detail & Related papers (2024-03-19T05:21:20Z)
- What Planning Problems Can A Relational Neural Network Solve? [91.53684831950612]
We present a circuit complexity analysis for relational neural networks representing policies for planning problems.
We show that there are three general classes of planning problems, in terms of the growth of circuit width and depth.
We also illustrate the utility of this analysis for designing neural networks for policy learning.
arXiv Detail & Related papers (2023-12-06T18:47:28Z)
- Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
arXiv Detail & Related papers (2022-12-30T17:50:54Z)
- Symbolic Distillation for Learned TCP Congestion Control [70.27367981153299]
Deep reinforcement learning (RL) approaches have achieved tremendous success in TCP congestion control.
Black-box policies lack interpretability and reliability, and often, they need to operate outside the traditional TCP datapath.
This paper proposes a novel two-stage solution to achieve the best of both worlds: first train a deep RL agent, then distill its NN policy into white-box, lightweight rules (a minimal sketch of this distillation idea appears after this list).
arXiv Detail & Related papers (2022-10-24T00:58:16Z)
- Neuro-Symbolic Reinforcement Learning with First-Order Logic [63.003353499732434]
We propose a novel RL method for text-based games using a recent neuro-symbolic framework called the Logical Neural Network.
Our experimental results show RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods in a TextWorld benchmark.
arXiv Detail & Related papers (2021-10-21T08:21:49Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Learning Symbolic Rules for Interpretable Deep Reinforcement Learning [31.29595856800344]
We propose a Neural Symbolic Reinforcement Learning framework by introducing symbolic logic into DRL.
We show that our framework has better interpretability, along with competitive performance in comparison to state-of-the-art approaches.
arXiv Detail & Related papers (2021-03-15T09:26:00Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Neurosymbolic Reinforcement Learning with Formally Verified Exploration [21.23874800091344]
We present Revel, a framework for provably safe exploration in continuous state and action spaces.
A key challenge for provably safe deep RL is that repeatedly verifying neural networks within a learning loop is computationally infeasible.
We address this challenge using two policy classes: a general, neurosymbolic class with approximate gradients and a more restricted class of symbolic policies that allows efficient verification.
arXiv Detail & Related papers (2020-09-26T14:51:04Z)
- Symbolic Relational Deep Reinforcement Learning based on Graph Neural Networks and Autoregressive Policy Decomposition [0.0]
We focus on reinforcement learning in relational problems that are naturally defined in terms of objects, their relations, and object-centric actions.
We present a deep RL framework based on graph neural networks and auto-regressive policy decomposition that naturally works with these problems and is completely domain-independent.
arXiv Detail & Related papers (2020-09-25T22:41:04Z)
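As referenced in the Symbolic Distillation entry above, the following is a minimal, hedged sketch of the generic two-stage idea of distilling a neural policy into white-box rules: query a stand-in teacher policy for state-action pairs, then fit a shallow decision tree as the interpretable surrogate. The teacher policy, feature names, thresholds, and the choice of a decision tree are illustrative assumptions and are not that paper's actual method or code.

```python
# Hypothetical two-stage distillation: gather (state, action) pairs from a
# stand-in "teacher" policy, then fit a shallow decision tree whose branches
# read as if-then rules. Everything here is invented for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

def teacher_policy(obs):
    """Stand-in for a trained deep RL congestion-control policy: maps
    (normalized latency, loss rate) to decrease/hold/increase the window."""
    latency, loss = obs
    if loss > 0.05 or latency > 0.8:
        return 0  # decrease
    if latency < 0.3:
        return 2  # increase
    return 1      # hold

# Stage 1 (stand-in): collect observations and the teacher's actions on them.
observations = np.column_stack([rng.random(5000), rng.random(5000) * 0.1])
actions = np.array([teacher_policy(o) for o in observations])

# Stage 2: distill into a small surrogate whose structure is human-auditable.
surrogate = DecisionTreeClassifier(max_depth=3).fit(observations, actions)
print(export_text(surrogate, feature_names=["latency", "loss_rate"]))
print("agreement with teacher:", surrogate.score(observations, actions))
```

The printed tree reads as a handful of if-then rules over latency and loss rate, which is the sense in which such a distilled policy is "white-box" and lightweight enough to run in the TCP datapath.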