Learning Symbolic Rules for Interpretable Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2103.08228v2
- Date: Tue, 16 Mar 2021 05:32:42 GMT
- Title: Learning Symbolic Rules for Interpretable Deep Reinforcement Learning
- Authors: Zhihao Ma, Yuzheng Zhuang, Paul Weng, Hankz Hankui Zhuo, Dong Li,
Wulong Liu, Jianye Hao
- Abstract summary: We propose a Neural Symbolic Reinforcement Learning framework by introducing symbolic logic into DRL.
We show that our framework has better interpretability, along with competitive performance compared to state-of-the-art approaches.
- Score: 31.29595856800344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent progress in deep reinforcement learning (DRL) can be largely
attributed to the use of neural networks. However, this black-box approach
fails to explain the learned policy in a human-understandable way. To address
this challenge and improve transparency, we propose a Neural Symbolic
Reinforcement Learning framework that introduces symbolic logic into DRL. The
framework features a cross-fertilization of reasoning and learning modules,
enabling end-to-end learning with prior symbolic knowledge. Interpretability
is achieved by extracting the logical rules learned by the reasoning module in
a symbolic rule space. Experimental results show that the framework offers
better interpretability along with competitive performance compared with
state-of-the-art approaches.
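To make the two-module architecture concrete, here is a minimal sketch (not the authors' implementation): a neural perception module maps raw observations to soft truth values of predicates, a differentiable rule layer scores actions from them, and interpretable rules are read off by thresholding the learned rule weights. All predicate and action names, dimensions, and the soft-logic scoring scheme below are assumptions for illustration only.

```python
# Illustrative sketch of a neural-symbolic policy; hypothetical names throughout.
import torch
import torch.nn as nn

PREDICATES = ["near_goal", "obstacle_ahead", "has_key"]  # hypothetical predicates
ACTIONS = ["forward", "turn", "pickup"]                  # hypothetical actions

class NeuralSymbolicPolicy(nn.Module):
    def __init__(self, obs_dim: int):
        super().__init__()
        # Learning module: observation -> soft predicate truth values in [0, 1].
        self.perception = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(),
            nn.Linear(32, len(PREDICATES)), nn.Sigmoid(),
        )
        # Reasoning module: one weighted rule body per action;
        # rule_weights[a, p] > 0 means predicate p supports action a.
        self.rule_weights = nn.Parameter(0.1 * torch.randn(len(ACTIONS), len(PREDICATES)))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        truth = self.perception(obs)          # (batch, n_predicates)
        return truth @ self.rule_weights.t()  # (batch, n_actions) action scores

    def extract_rules(self, threshold: float = 0.5):
        """Read interpretable rules out of the reasoning module's weights."""
        rules = []
        for a, action in enumerate(ACTIONS):
            body = [p for j, p in enumerate(PREDICATES)
                    if self.rule_weights[a, j].item() > threshold]
            if body:
                rules.append(f"{action} :- " + ", ".join(body))
        return rules

policy = NeuralSymbolicPolicy(obs_dim=8)
scores = policy(torch.randn(1, 8))  # trainable end to end, e.g. with a policy gradient
print(policy.extract_rules())       # untrained here, so the list is typically empty
```

The point of the sketch is the division of labor: gradients flow through both modules during training, while the rule weights remain directly readable afterwards.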
Related papers
- End-to-End Neuro-Symbolic Reinforcement Learning with Textual Explanations [15.530907808235945]
We present a neuro-symbolic framework for jointly learning structured states and symbolic policies.
We design a pipeline to prompt GPT-4 to generate textual explanations for the learned policies and decisions.
We verify the efficacy of our approach on nine Atari tasks and present GPT-generated explanations for policies and decisions.
arXiv Detail & Related papers (2024-03-19T05:21:20Z)
- Weakly Supervised Reasoning by Neuro-Symbolic Approaches [28.98845133698169]
We introduce our progress on neuro-symbolic approaches to NLP.
We design a neural system with symbolic latent structures for an NLP task.
We then apply reinforcement learning, or its relaxation, to perform weakly supervised reasoning in the downstream task.
arXiv Detail & Related papers (2023-09-19T06:10:51Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning [63.148199057487226]
We propose a modular, NEuroSymbolic Textual Agent (NESTA) that combines a generic semantic generalization with a rule induction system to learn interpretable rules as policies.
Our experiments show that the proposed NESTA method outperforms deep reinforcement learning-based techniques, achieving better generalization to unseen test games while learning from fewer training interactions.
arXiv Detail & Related papers (2023-07-05T23:21:05Z)
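Following up on NESTA's rules-as-policies idea above, here is a toy sketch of the pattern: the symbolic state is a set of ground facts, and the most specific induced rule whose body is satisfied chooses the action. The game, facts, and rules below are invented for illustration; this is not the paper's system.

```python
# Toy "rules as policies" agent for a text game; everything here is hypothetical.
LEARNED_RULES = [
    # (body: facts that must all hold, action)
    (frozenset({"at(kitchen)", "carrying(knife)"}), "cut carrot"),
    (frozenset({"at(kitchen)"}), "take knife"),
    (frozenset(), "go kitchen"),  # fallback rule with an empty body
]

def act(state: frozenset) -> str:
    """Fire the most specific rule (largest body) whose body holds in state."""
    for body, action in sorted(LEARNED_RULES, key=lambda rule: -len(rule[0])):
        if body <= state:  # subset test: every fact in the body holds
            return action
    raise RuntimeError("no applicable rule")

print(act(frozenset({"at(garden)"})))                      # -> go kitchen
print(act(frozenset({"at(kitchen)"})))                     # -> take knife
print(act(frozenset({"at(kitchen)", "carrying(knife)"})))  # -> cut carrot
```

Because the policy is just an ordered rule list, every decision can be explained by pointing at the rule that fired.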
- Deep Explainable Relational Reinforcement Learning: A Neuro-Symbolic Approach [18.38878415765146]
We propose Deep Explainable Relational Reinforcement Learning (DERRL), a framework that exploits the best of both the neural and symbolic worlds.
DERRL combines relational representations and constraints from symbolic planning with deep learning to extract interpretable policies.
These policies are in the form of logical rules that explain how each decision (or action) is arrived at.
arXiv Detail & Related papers (2023-04-17T15:11:40Z)
- Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES generates symbolic policies that are simpler and more scalable than those of state-of-the-art symbolic RL methods; a toy sketch of such an expression-based policy follows.
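The sketch below loosely illustrates "a symbolic policy over object-level features": the whole policy is one readable expression whose coefficients are found by search. This is not DiffSES's differentiable expression search; the features, the environment stand-in, and the naive random search are all assumptions for illustration.

```python
# Toy symbolic-expression policy; hypothetical features and stand-in rollouts.
import random

def make_policy(w_angle, w_angvel):
    # The entire policy is one human-readable expression over object features.
    return lambda angle, angvel: 1 if w_angle * angle + w_angvel * angvel > 0 else 0

def score(policy, steps=500):
    # Stand-in for environment rollouts: agreement with a hidden target rule.
    rng = random.Random(0)  # fixed rollout set so candidates compare fairly
    hits = 0
    for _ in range(steps):
        angle, angvel = rng.uniform(-1, 1), rng.uniform(-1, 1)
        target = 1 if angle + 0.5 * angvel > 0 else 0
        hits += int(policy(angle, angvel) == target)
    return hits / steps

best, best_score = None, -1.0
search_rng = random.Random(42)
for _ in range(200):  # naive random search over expression coefficients
    w1, w2 = search_rng.uniform(-1, 1), search_rng.uniform(-1, 1)
    s = score(make_policy(w1, w2))
    if s > best_score:
        best, best_score = (w1, w2), s

w1, w2 = best
print(f"policy: act = [{w1:.2f}*angle + {w2:.2f}*angvel > 0]  (agreement {best_score:.2f})")
```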
arXiv Detail & Related papers (2022-12-30T17:50:54Z)
- LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN show clear performance improvements on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
- Neuro-Symbolic Reinforcement Learning with First-Order Logic [63.003353499732434]
We propose a novel RL method for text-based games built on a recent neuro-symbolic framework, the Logical Neural Network (LNN).
Our experimental results show RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods in a TextWorld benchmark.
arXiv Detail & Related papers (2021-10-21T08:21:49Z)
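This entry and the next both build on Logical Neural Networks. As a rough sketch of the weighted real-valued conjunction usually described for LNNs, y = clamp(beta - sum_i w_i * (1 - x_i)); treat the exact parameterization as an assumption, not the library's API.

```python
# Minimal LNN-style weighted conjunction over truth values in [0, 1].
def lnn_and(inputs, weights, beta=1.0):
    """Weighted real-valued AND.

    With all weights = 1 and beta = 1 this reduces, for two inputs, to the
    Lukasiewicz t-norm max(0, x1 + x2 - 1).
    """
    pre = beta - sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return min(1.0, max(0.0, pre))  # clamp to [0, 1]

print(lnn_and([1.0, 1.0], [1.0, 1.0]))  # 1.0: both premises true
print(lnn_and([1.0, 0.0], [1.0, 1.0]))  # 0.0: one premise false
print(lnn_and([0.9, 0.8], [1.0, 1.0]))  # 0.7: graded truth propagates
```

Making logical gates real-valued and weighted like this is what lets gradient-based RL train through them while the resulting formulas stay readable.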
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework, Logical Neural Networks (LNNs), can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)