Interpretable and Explainable Logical Policies via Neurally Guided
Symbolic Abstraction
- URL: http://arxiv.org/abs/2306.01439v2
- Date: Wed, 25 Oct 2023 16:40:27 GMT
- Title: Interpretable and Explainable Logical Policies via Neurally Guided
Symbolic Abstraction
- Authors: Quentin Delfosse, Hikaru Shindo, Devendra Dhami, Kristian Kersting
- Abstract summary: We introduce Neurally gUided Differentiable loGic policiEs (NUDGE).
NUDGE exploits trained neural network-based agents to guide the search of candidate-weighted logic rules, then uses differentiable logic to train the logic agents.
Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and showing good flexibility to environments with different initial states and problem sizes.
- Score: 23.552659248243806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The limited priors required by neural networks make them the dominant
choice to encode and learn policies using reinforcement learning (RL). However,
they are also black boxes, making it hard to understand the agent's behaviour,
especially when working on the image level. Therefore, neuro-symbolic RL aims
at creating policies that are interpretable in the first place. Unfortunately,
interpretability is not explainability. To achieve both, we introduce Neurally
gUided Differentiable loGic policiEs (NUDGE). NUDGE exploits trained neural
network-based agents to guide the search of candidate-weighted logic rules,
then uses differentiable logic to train the logic agents. Our experimental
evaluation demonstrates that NUDGE agents can induce interpretable and
explainable policies while outperforming purely neural ones and showing good
flexibility to environments with different initial states and problem sizes.
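To make the two-stage recipe above concrete, here is a minimal, hypothetical Python sketch: candidate rules are scored by how often they agree with a pretrained agent where they fire (the neural guidance), and soft weights over the surviving rules are then trained with gradients (the differentiable logic step). The rule pool, the toy `neural_policy`, and the imitation loss are illustrative assumptions, not the NUDGE implementation.

```python
import torch
import torch.nn.functional as F

N_ACTIONS = 3
RULES = [  # (proposed action, body predicate over an object-centric state)
    (0, lambda s: s["agent_x"] < s["target_x"]),             # right :- left_of(agent, target)
    (1, lambda s: s["agent_x"] > s["target_x"]),             # left  :- right_of(agent, target)
    (2, lambda s: abs(s["agent_x"] - s["target_x"]) < 1.0),  # stay  :- close(agent, target)
]

def neural_policy(s):
    # Toy stand-in for the pretrained black-box agent that provides guidance.
    return 0 if s["agent_x"] < s["target_x"] else (1 if s["agent_x"] > s["target_x"] else 2)

def guidance_score(rule, states):
    """Neural guidance: how often a rule agrees with the trained agent where it fires."""
    action, body = rule
    fired = [s for s in states if body(s)]
    return sum(neural_policy(s) == action for s in fired) / len(fired) if fired else 0.0

def rule_votes(rules, s):
    """M[i, a] = 1 iff rule i fires in state s and proposes action a."""
    M = torch.zeros(len(rules), N_ACTIONS)
    for i, (action, body) in enumerate(rules):
        if body(s):
            M[i, action] = 1.0
    return M

states = [{"agent_x": float(x), "target_x": 5.0} for x in range(11)]
candidates = sorted(RULES, key=lambda r: guidance_score(r, states), reverse=True)

# Differentiable step: learn soft weights over candidate rules by imitating
# the neural agent; each trained weight stays attached to a readable rule.
weights = torch.zeros(len(candidates), requires_grad=True)
opt = torch.optim.Adam([weights], lr=0.1)
for _ in range(200):
    loss = sum(F.cross_entropy(
                   (torch.softmax(weights, 0) @ rule_votes(candidates, s)).unsqueeze(0),
                   torch.tensor([neural_policy(s)]))
               for s in states)
    opt.zero_grad(); loss.backward(); opt.step()
print(list(zip([r[0] for r in candidates], torch.softmax(weights, 0).tolist())))
```

Because every weight is tied to a human-readable rule, the resulting policy can be inspected rule by rule, which is the property the abstract distinguishes as interpretability.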
Related papers
- EXPIL: Explanatory Predicate Invention for Learning in Games [22.235094243063823]
Reinforcement learning (RL) has proven to be a powerful tool for training agents that excel in various games.
However, these neural agents are black boxes whose decisions are hard to interpret.
Recent research has attempted to address this issue by using the guidance of pretrained neural agents to encode logic-based policies.
We propose a novel approach, Explanatory Predicate Invention for Learning in Games (EXPIL), that identifies and extracts predicates from a pretrained neural agent, which are later used in logic-based agents.
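A minimal sketch of this idea, under loud assumptions: the candidate predicates, toy states, and stand-in agent are invented here, and the scoring heuristic (how cleanly a predicate's truth value splits the agent's actions) is only one plausible choice, not EXPIL's actual measure.

```python
from collections import Counter

def predicate_score(pred, states, actions):
    """Score a candidate predicate by how well its truth value splits the
    pretrained agent's actions (accuracy of the majority action per side)."""
    true_acts = [a for s, a in zip(states, actions) if pred(s)]
    false_acts = [a for s, a in zip(states, actions) if not pred(s)]
    correct = sum(Counter(acts).most_common(1)[0][1] for acts in (true_acts, false_acts) if acts)
    return correct / len(actions)

# Hypothetical candidate predicates over object-centric states.
candidates = {
    "closeby(agent,enemy)": lambda s: abs(s["agent_x"] - s["enemy_x"]) < 2.0,
    "left_of(agent,key)":   lambda s: s["agent_x"] < s["key_x"],
}

# states/actions would come from rolling out the pretrained neural agent.
states = [{"agent_x": float(x), "enemy_x": 5.0, "key_x": 8.0} for x in range(10)]
actions = [1 if s["agent_x"] < 8.0 else 0 for s in states]  # toy stand-in for the agent

ranked = sorted(candidates, key=lambda n: predicate_score(candidates[n], states, actions),
                reverse=True)
print(ranked[0])  # the most explanatory invented predicate
```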
arXiv Detail & Related papers (2024-06-10T08:46:49Z)
- Distilling Reinforcement Learning Policies for Interpretable Robot Locomotion: Gradient Boosting Machines and Symbolic Regression [53.33734159983431]
This paper introduces a novel approach to distill neural RL policies into more interpretable forms.
We train expert neural network policies using RL and distill them into (i) gradient boosting machines (GBMs), (ii) explainable boosting machines (EBMs), and (iii) symbolic regression policies.
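A hedged sketch of one common distillation recipe, behavioural cloning of the expert into a gradient boosting machine with scikit-learn; the two-feature expert below is a toy stand-in for the trained neural policy, not the paper's locomotion setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def expert_policy(obs):
    """Toy stand-in for a trained neural RL expert (two features, two actions)."""
    return int(obs[0] + obs[1] > 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))               # observations collected from rollouts
y = np.array([expert_policy(o) for o in X])  # the expert's action labels

# Distillation = supervised imitation of the expert by an interpretable student.
student = GradientBoostingClassifier(n_estimators=50, max_depth=2).fit(X, y)
print("agreement with expert:", student.score(X, y))
```

The same rollout data could feed a symbolic regressor instead; either way the student is an additive ensemble of shallow trees or an explicit formula, far easier to inspect than the neural expert.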
arXiv Detail & Related papers (2024-03-21T11:54:45Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network, dubbed NeuroExplainer, and use it to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis [0.0]
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks.
However, they lack interpretability and transparency.
Current explainability approaches are typically local and treat GNNs as black boxes.
We propose a novel approach for producing global explanations for GNNs using neuron-level concepts.
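One plausible way to realize neuron-level concepts, sketched under assumptions: score how well a single neuron's firing pattern overlaps (IoU) with the presence of a human-understandable concept across a dataset of graphs. The synthetic activations and the IoU criterion below are illustrative, not the paper's exact procedure.

```python
import numpy as np

def neuron_concept_iou(activations, concept_mask, threshold=0.5):
    """IoU between the set of graphs on which a neuron fires and the set
    on which a human-understandable concept is present."""
    fires = activations > threshold
    inter = np.logical_and(fires, concept_mask).sum()
    union = np.logical_or(fires, concept_mask).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(1)
acts = rng.random(500)                            # one neuron's activations over 500 graphs
concept = acts + rng.normal(0, 0.2, 500) > 0.6    # a concept loosely tied to that neuron
print(f"alignment IoU: {neuron_concept_iou(acts, concept):.2f}")
```

Neurons whose alignment score is high can then be reported as global, concept-level explanations of what the network has learned.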
arXiv Detail & Related papers (2022-08-22T21:30:55Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNNs).
Compared to other neuro-symbolic approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
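A sketch of the kind of weighted, differentiable conjunction LNNs build on, assuming a clamped weighted Łukasiewicz form; the real LNN semantics (truth bounds, weight constraints) are richer, so treat this as an approximation.

```python
import torch

def weighted_and(x, w, beta):
    """Weighted, differentiable conjunction: clamp(beta - sum_i w_i * (1 - x_i)).
    With all w_i = beta = 1 this reduces exactly to the Lukasiewicz AND."""
    return torch.clamp(beta - ((1.0 - x) * w).sum(-1), 0.0, 1.0)

torch.manual_seed(0)
x = torch.rand(256, 3)                                   # truth values of three body atoms
target = torch.clamp(x[:, 0] + x[:, 1] - 1.0, 0.0, 1.0)  # head depends on atoms 0 and 1 only

w = torch.ones(3, requires_grad=True)
beta = torch.tensor(1.0, requires_grad=True)
opt = torch.optim.Adam([w, beta], lr=0.05)
for _ in range(500):
    loss = ((weighted_and(x, w, beta) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(w.detach())  # the weight on the irrelevant atom 2 should shrink toward 0
```

Because the learned weights sit directly on named atoms, the trained gate reads off as a rule whose body the weights select, which is where the claimed interpretability comes from.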
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Neuro-Symbolic Reinforcement Learning with First-Order Logic [63.003353499732434]
We propose a novel RL method for text-based games with a recent neuro-symbolic framework called the Logical Neural Network (LNN).
Our experimental results show RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods in a TextWorld benchmark.
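A toy sketch of the general neuro-symbolic pattern for text games: parse the observation into logical facts, then let first-order-style rules pick an action. The regex parser, predicates, and rules are invented for illustration and are not the paper's LNN machinery.

```python
import re

def extract_facts(obs):
    """Crude semantic parsing: 'You see a key.' -> ('at', 'key', 'room')."""
    return {("at", obj, "room") for obj in re.findall(r"You see an? (\w+)", obs)}

# First-order-style rules, written as guarded actions:
#   take(X) :- at(X, room), portable(X).
#   open(X) :- at(X, room), openable(X).
PORTABLE, OPENABLE = {"key", "apple"}, {"door", "chest"}

def logic_action(obs):
    facts = extract_facts(obs)
    objs = sorted(obj for (_, obj, _) in facts)
    for obj in objs:            # prefer picking things up
        if obj in PORTABLE:
            return f"take {obj}"
    for obj in objs:
        if obj in OPENABLE:
            return f"open {obj}"
    return "look"

print(logic_action("You see a key. You see a door."))  # -> take key
```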
arXiv Detail & Related papers (2021-10-21T08:21:49Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at achieving Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Analyzing Differentiable Fuzzy Logic Operators [3.4806267677524896]
We study how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting.
We show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
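The practical differences between such operators show up in their gradients. A small sketch, with arbitrary inputs, comparing three standard t-norms:

```python
import torch

# Three differentiable conjunctions (t-norms) from the fuzzy logic literature.
def godel(a, b):        return torch.minimum(a, b)
def product(a, b):      return a * b
def lukasiewicz(a, b):  return torch.clamp(a + b - 1.0, min=0.0)

a = torch.tensor(0.3, requires_grad=True)
b = torch.tensor(0.9, requires_grad=True)

for name, t_norm in [("Godel", godel), ("product", product), ("Lukasiewicz", lukasiewicz)]:
    a.grad, b.grad = None, None           # reset gradients between operators
    t_norm(a, b).backward()
    print(f"{name:12s} d/da={a.grad:.2f}  d/db={b.grad:.2f}")
```

Gödel routes all gradient to the smaller input, and Łukasiewicz yields zero gradient whenever a + b <= 1; behaviours like these determine which operators learn well in practice, which is what the paper studies.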
arXiv Detail & Related papers (2020-02-14T16:11:36Z)