Neural DNF-MT: A Neuro-symbolic Approach for Learning Interpretable and Editable Policies
- URL: http://arxiv.org/abs/2501.03888v2
- Date: Sun, 09 Feb 2025 12:25:10 GMT
- Title: Neural DNF-MT: A Neuro-symbolic Approach for Learning Interpretable and Editable Policies
- Authors: Kexin Gu Baugh, Luke Dickens, Alessandra Russo
- Abstract summary: We propose a neuro-symbolic approach called neural DNF-MT for end-to-end policy learning.
The differentiable nature of the neural DNF-MT model enables the use of deep actor-critic algorithms for training.
We show how the bivalent representations of deterministic policies can be edited and incorporated back into a neural model.
- Score: 51.03989561425833
- Abstract: Although deep reinforcement learning has been shown to be effective, the model's black-box nature presents barriers to direct policy interpretation. To address this problem, we propose a neuro-symbolic approach called neural DNF-MT for end-to-end policy learning. The differentiable nature of the neural DNF-MT model enables the use of deep actor-critic algorithms for training. At the same time, its architecture is designed so that trained models can be directly translated into interpretable policies expressed as standard (bivalent or probabilistic) logic programs. Moreover, additional layers can be included to extract abstract features from complex observations, acting as a form of predicate invention. The logic representations are highly interpretable, and we show how the bivalent representations of deterministic policies can be edited and incorporated back into a neural model, facilitating manual intervention and adaptation of learned policies. We evaluate our approach on a range of tasks requiring learning deterministic or stochastic behaviours from various forms of observations. Our empirical results show that our neural DNF-MT model performs at the level of competing black-box methods whilst providing interpretable policies.
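To make the described architecture more concrete, below is a minimal, hypothetical PyTorch sketch of a DNF-style policy head: a conjunctive semi-symbolic layer feeding a disjunctive one, with a softmax standing in for the model's specialised "MT" output activation. The class names, the bias construction, and the training details are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a DNF-style policy head (NOT the authors' exact
# neural DNF-MT formulation): a conjunctive layer feeding a disjunctive
# layer, with a softmax over disjunct activations as the action policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSymbolicLayer(nn.Module):
    """Weighted logic layer; the sign of each weight marks a (negated) literal.
    The bias term pushes each unit toward AND- or OR-like behaviour."""
    def __init__(self, in_features, out_features, conjunction: bool):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.uniform_(self.weight, -0.1, 0.1)
        self.conjunction = conjunction

    def forward(self, x):  # x in [-1, 1]; -1 ~ false, +1 ~ true
        abs_w = self.weight.abs()
        # A common semi-symbolic construction: shift the pre-activation so a
        # conjunction needs (roughly) all literals and a disjunction needs any.
        bias = abs_w.sum(dim=1) - abs_w.max(dim=1).values
        sign = -1.0 if self.conjunction else 1.0
        return torch.tanh(F.linear(x, self.weight) + sign * bias)

class DNFPolicyHead(nn.Module):
    def __init__(self, n_predicates, n_conjunctions, n_actions):
        super().__init__()
        self.conj = SemiSymbolicLayer(n_predicates, n_conjunctions, conjunction=True)
        self.disj = SemiSymbolicLayer(n_conjunctions, n_actions, conjunction=False)

    def forward(self, predicates):
        # Action distribution over disjunct activations (a softmax stands in
        # here for the model's mutually-exclusive output activation).
        return F.softmax(self.disj(self.conj(predicates)), dim=-1)

if __name__ == "__main__":
    head = DNFPolicyHead(n_predicates=8, n_conjunctions=6, n_actions=3)
    obs_predicates = torch.sign(torch.randn(2, 8))  # toy bivalent inputs
    print(head(obs_predicates))
```

After training, layers of this kind can be thresholded and read off as conjunctions and disjunctions over the input predicates, which is the sort of translation into bivalent or probabilistic logic programs that the abstract describes.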
Related papers
- Explainability in Neural Networks for Natural Language Processing Tasks [5.812284760539713]
Local Interpretable Model-Agnostic Explanations (LIME) have emerged as essential tools for providing insights into the behavior of complex systems.
This study leverages LIME to interpret a multi-layer perceptron (MLP) neural network trained on a text classification task (a minimal usage sketch follows this entry).
Despite its effectiveness in offering localized explanations, LIME has limitations in capturing global patterns and feature interactions.
arXiv Detail & Related papers (2024-12-23T23:09:56Z)
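The sketch below is a hypothetical illustration (not code from the paper) of applying the lime package's LimeTextExplainer to a small scikit-learn MLP text classifier; the toy data, pipeline, and parameter values are assumptions made for the example.

```python
# Hypothetical LIME usage on a text classifier; model, data and
# parameters are illustrative, not taken from the paper above.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

train_texts = ["great movie", "terrible plot", "loved it", "awful acting"]
train_labels = [1, 0, 1, 0]  # toy sentiment labels

# A small MLP text classifier standing in for the paper's model.
clf = make_pipeline(TfidfVectorizer(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500))
clf.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "the plot was great but the acting was awful",
    clf.predict_proba,   # LIME perturbs the text and queries this function
    num_features=4,      # number of top contributing tokens to report
)
print(explanation.as_list())  # [(token, weight), ...] local explanation
```

The explanation is local to the single input: LIME fits a simple surrogate around perturbed copies of that text, which is why it can miss global patterns and feature interactions, as the entry above notes.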
- Distilling Reinforcement Learning Policies for Interpretable Robot Locomotion: Gradient Boosting Machines and Symbolic Regression [53.33734159983431]
This paper introduces a novel approach to distill neural RL policies into more interpretable forms.
We train expert neural network policies using RL and distill them into (i) gradient boosting machines (GBMs), (ii) explainable boosting machines (EBMs), and (iii) symbolic policies (a minimal distillation sketch follows this entry).
arXiv Detail & Related papers (2024-03-21T11:54:45Z)
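Below is a hedged sketch of the distillation step as behavioural cloning with scikit-learn's GradientBoostingClassifier. The stand-in expert policy, the state sampling, and the hyperparameters are assumptions, since the paper's RL policies and environments are not reproduced here.

```python
# Illustrative policy distillation into a GBM via behavioural cloning.
# The "expert" below is a stand-in function; in the paper it would be a
# trained neural RL policy queried on states collected from the environment.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def expert_policy(states: np.ndarray) -> np.ndarray:
    # Stand-in expert: picks an action from a simple rule over two features.
    return (states[:, 0] + 0.5 * states[:, 1] > 0).astype(int)

# 1. Collect states (sampled randomly here; normally rolled out in the env).
states = rng.normal(size=(5000, 4))
# 2. Label each state with the expert's chosen action.
actions = expert_policy(states)

X_train, X_test, y_train, y_test = train_test_split(states, actions, random_state=0)

# 3. Fit the interpretable student model on (state, expert action) pairs.
gbm = GradientBoostingClassifier(n_estimators=100, max_depth=3)
gbm.fit(X_train, y_train)

print("agreement with expert:", gbm.score(X_test, y_test))
```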
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its ability to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models (a minimal activation-maximization sketch follows this entry).
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
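For background on the technique being attacked, here is a minimal activation-maximization sketch in PyTorch: gradient ascent on an input to maximize a chosen unit's activation. The toy network, the L2 regularizer, and the hyperparameters are illustrative assumptions; the paper studies how such visualizations can be adversarially manipulated.

```python
# Minimal activation-maximization sketch: gradient ascent on the input to
# maximize one hidden unit's activation (toy network, illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

target_layer, target_unit = model[0], 5      # which neuron to "visualize"
x = torch.randn(1, 16, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    activation = torch.relu(target_layer(x))[0, target_unit]
    # Ascend the activation; a small L2 penalty keeps the input bounded.
    loss = -activation + 1e-3 * x.pow(2).sum()
    loss.backward()
    optimizer.step()

print("final activation:", torch.relu(target_layer(x))[0, target_unit].item())
```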
- Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction [23.552659248243806]
We introduce Neurally gUided Differentiable loGic policiEs (NUDGE).
NUDGE exploits trained neural network-based agents to guide the search for candidate-weighted logic rules, then uses differentiable logic to train the logic agents (a toy differentiable-rule sketch follows this entry).
Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and showing good flexibility to environments of different initial states and problem sizes.
arXiv Detail & Related papers (2023-06-02T10:59:44Z)
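As a rough illustration of what "differentiable logic" means in this context, the toy sketch below scores an action with a learnable, weighted soft-AND (product t-norm) over predicate probabilities. The rule representation, rule search, and training loop in NUDGE itself are considerably richer; everything here is an assumption made for illustration.

```python
# Toy "differentiable logic" rule: a soft AND (product t-norm) over predicate
# probabilities, with a learnable rule weight. Purely illustrative of the
# general idea; not NUDGE's actual rule language or training procedure.
import torch
import torch.nn as nn

class SoftRule(nn.Module):
    """Scores an action as sigmoid(weight) * AND(selected predicates)."""
    def __init__(self, body: list[int]):
        super().__init__()
        self.body = body                             # predicate indices in the rule body
        self.weight = nn.Parameter(torch.zeros(1))   # learnable rule weight (logit)

    def forward(self, predicate_probs: torch.Tensor) -> torch.Tensor:
        soft_and = predicate_probs[:, self.body].prod(dim=-1)  # product t-norm
        return torch.sigmoid(self.weight) * soft_and

rule = SoftRule(body=[0, 2])                      # e.g. "act if p0 and p2"
probs = torch.tensor([[0.9, 0.1, 0.8, 0.3]])      # predicate probabilities from a perceiver
print(rule(probs))                                # differentiable action score
```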
- Interpreting Deep Learning Models in Natural Language Processing: A Review [33.80537635077772]
A long-standing criticism of neural network models is their lack of interpretability.
In this survey, we provide a comprehensive review of various interpretation methods for neural models in NLP.
arXiv Detail & Related papers (2021-10-20T10:17:04Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.