Deep Inductive Logic Programming meets Reinforcement Learning
- URL: http://arxiv.org/abs/2308.16210v1
- Date: Wed, 30 Aug 2023 09:08:46 GMT
- Title: Deep Inductive Logic Programming meets Reinforcement Learning
- Authors: Andreas Bueff (University of Edinburgh), Vaishak Belle (University of
Edinburgh)
- Abstract summary: Differentiable Neural Logic (dNL) networks are able to learn Boolean functions as their neural architecture includes symbolic reasoning.
We propose an application of dNL in the field of Relational Reinforcement Learning (RRL) to address dynamic continuous environments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One approach to explaining the hierarchical levels of understanding within a
machine learning model is the symbolic method of inductive logic programming
(ILP), which is data efficient and capable of learning first-order logic rules
that can entail data behaviour. Differentiable extensions to ILP, the so-called
differentiable Neural Logic (dNL) networks, are able to learn Boolean functions
as their neural architecture includes symbolic reasoning. We propose an
application of dNL in the field of Relational Reinforcement Learning (RRL) to
address dynamic continuous environments. This represents an extension of
previous work in applying dNL-based ILP in RRL settings, as our proposed model
updates the architecture to enable it to solve problems in continuous RL
environments. The goal of this research is to improve upon current ILP methods
for use in RRL by incorporating non-linear continuous predicates, allowing RRL
agents to reason and make decisions in dynamic and continuous environments.
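To make the mechanism concrete, here is a minimal Python sketch of a dNL-style differentiable conjunction together with a learnable-threshold continuous predicate of the kind the abstract motivates. All names and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnl_conjunction(x, w):
    """Differentiable AND over fuzzy truth values x in [0, 1].

    Membership weight m_i = sigmoid(w_i) gates whether input i takes
    part in the conjunction: m_i near 1 includes it, m_i near 0 makes
    the corresponding factor collapse to 1 (input ignored).
    """
    m = sigmoid(w)
    return np.prod(1.0 - m * (1.0 - x))

def dnl_disjunction(x, w):
    """Differentiable OR, the dual of the conjunction above."""
    m = sigmoid(w)
    return 1.0 - np.prod(1.0 - m * x)

def continuous_predicate(obs, center, slope):
    """A soft threshold turning a continuous observation into a fuzzy
    truth value, e.g. 'velocity is greater than center'; center and
    slope would be learned jointly with the rule weights."""
    return sigmoid(slope * (obs - center))

# Toy rule body: gt_velocity(s) AND on_track(s)   (names hypothetical)
velocity, on_track = 1.7, 1.0
atoms = np.array([continuous_predicate(velocity, center=1.0, slope=4.0),
                  on_track])
w = np.array([3.0, 3.0])              # both atoms strongly included
print(dnl_conjunction(atoms, w))      # ~0.95: the rule fires
```

Because every operation above is differentiable, the rule structure (which atoms are included) and the predicate thresholds can both be trained by gradient descent.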
Related papers
- Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron [3.069335774032178]
We use a dataset-process approach to derive flow equations describing learning.
We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve.
This approach points a way toward analyzing learning dynamics for more complex circuit architectures.
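For intuition, the following toy simulation (a hypothetical setup, not the paper's analytic flow equations) trains the same non-linear perceptron with a supervised delta rule and with a REINFORCE-style reward signal, so the two learning outcomes can be compared empirically.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, eta = 20, 5000, 0.05
w_teacher = rng.standard_normal(d) / np.sqrt(d)    # target rule
w_sl, w_rl = np.zeros(d), np.zeros(d)

def act(w, x):
    return np.tanh(w @ x)                          # non-linear perceptron

for _ in range(steps):
    x = rng.standard_normal(d) / np.sqrt(d)
    y = np.sign(w_teacher @ x)                     # teacher label in {-1, +1}

    # Supervised learning: delta rule on the analog output.
    out = act(w_sl, x)
    w_sl += eta * (y - out) * (1.0 - out**2) * x

    # Reinforcement learning: sample a binary action from a Bernoulli
    # policy and apply REINFORCE with a 0/1 reward.
    t = act(w_rl, x)
    p = 0.5 * (1.0 + t)                            # P(action = +1)
    a = 1.0 if rng.random() < p else -1.0
    r = 1.0 if a == y else 0.0                     # scalar reward only
    w_rl += eta * r * a * (1.0 - t**2) / (1.0 + a * t) * x

for name, w in (("SL", w_sl), ("RL", w_rl)):
    cos = w @ w_teacher / (np.linalg.norm(w) * np.linalg.norm(w_teacher) + 1e-12)
    print(name, "alignment with teacher:", round(float(cos), 3))
```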
arXiv Detail & Related papers (2024-09-05T17:58:28Z)
- Entropy-Regularized Token-Level Policy Optimization for Language Agent Reinforcement [67.1393112206885]
Large Language Models (LLMs) have shown promise as intelligent agents in interactive decision-making tasks.
We introduce Entropy-Regularized Token-level Policy Optimization (ETPO), an entropy-augmented RL method tailored for optimizing LLMs at the token level.
We assess the effectiveness of ETPO within a simulated environment that models data science code generation as a series of multi-step interactive tasks.
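A minimal sketch of the general objective, token-level policy gradient with an entropy bonus; this illustrates the idea rather than the exact ETPO update, and the function name is hypothetical:

```python
import numpy as np

def entropy_regularized_token_loss(logits, actions, advantages, beta=0.01):
    """Token-level policy-gradient loss with an entropy bonus.

    logits:     (T, V) unnormalized scores over the vocabulary per step
    actions:    (T,)   token actually sampled at each step
    advantages: (T,)   per-token credit signal
    beta:       entropy-regularization strength
    """
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    probs = np.exp(log_probs)

    chosen = log_probs[np.arange(len(actions)), actions]  # log pi(a_t | s_t)
    entropy = -(probs * log_probs).sum(axis=1)            # per-token entropy

    # Maximize advantage-weighted log-likelihood plus entropy,
    # i.e. minimize the negative of both.
    return -(advantages * chosen + beta * entropy).mean()

T, V = 4, 10
rng = np.random.default_rng(1)
loss = entropy_regularized_token_loss(rng.standard_normal((T, V)),
                                      rng.integers(0, V, size=T),
                                      rng.standard_normal(T))
print(loss)
```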
arXiv Detail & Related papers (2024-02-09T07:45:26Z)
- A Neuromorphic Architecture for Reinforcement Learning from Real-Valued Observations [0.34410212782758043]
Reinforcement Learning (RL) provides a powerful framework for decision-making in complex environments.
This paper presents a novel Spiking Neural Network (SNN) architecture for solving RL problems with real-valued observations.
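One standard way to feed real-valued observations into a spiking network is Gaussian population coding followed by stochastic spike generation; the sketch below shows this generic technique, which may differ from the paper's exact encoder:

```python
import numpy as np

def population_encode(x, n_neurons=8, lo=-1.0, hi=1.0, sigma=0.2):
    """Map a real value to firing rates of neurons with evenly
    spaced preferred values (Gaussian tuning curves)."""
    centers = np.linspace(lo, hi, n_neurons)
    return np.exp(-0.5 * ((x - centers) / sigma) ** 2)   # rates in [0, 1]

def poisson_spikes(rates, timesteps=50, rng=None):
    """Sample a (timesteps, n_neurons) binary spike train whose
    per-step firing probability equals the rate."""
    rng = rng or np.random.default_rng(0)
    return (rng.random((timesteps, len(rates))) < rates).astype(np.int8)

rates = population_encode(0.3)          # a real-valued observation
spikes = poisson_spikes(rates)
print(rates.round(2))
print(spikes.sum(axis=0))               # spike counts per neuron
```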
arXiv Detail & Related papers (2023-07-06T12:33:34Z)
- Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks [48.14324895100478]
"Neural architecture" integrates compiled neural networks (CoNNs) into a standard transformer.
CoNNs are neural modules designed to explicitly encode rules through artificially generated attention weights.
Experiments demonstrate the superiority of our approach over existing techniques in terms of length generalization, efficiency, and interpretability for symbolic operations.
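To illustrate what "rules encoded through attention weights" can mean, here is a hypothetical miniature: a single attention head whose query/key matrices are constructed by hand, rather than trained, so that position i attends to position L-1-i and the layer deterministically reverses its input:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def reverse_via_attention(tokens):
    """Reverse a sequence with one attention head whose weights are
    hand-built rather than trained: the query at position i matches
    the key at position L-1-i."""
    L = len(tokens)
    pos = np.eye(L)                      # one-hot positional encodings
    scale = 50.0                         # sharp, near-deterministic attention
    Q = pos * scale                      # query for position i is scale * e_i
    K = pos[::-1]                        # key at position j is e_{L-1-j}
    V = np.array(tokens, dtype=float).reshape(L, 1)
    attn = softmax(Q @ K.T)              # row i peaks at column L-1-i
    return (attn @ V).round().astype(int).ravel()

print(reverse_via_attention([3, 1, 4, 1, 5]))   # -> [5 1 4 1 3]
```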
arXiv Detail & Related papers (2023-04-04T09:50:07Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNNs).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
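The connective behind that connection can be sketched as the weighted, clamped real-valued conjunction used in LNNs; with unit weights it reduces exactly to classical Boolean AND on {0, 1} inputs (parameter values below are illustrative):

```python
import numpy as np

def lnn_and(x, weights, beta):
    """Weighted real-valued conjunction used in Logical Neural Networks:
    output = clamp(beta - sum_i w_i * (1 - x_i)) into [0, 1].

    With all weights = beta = 1 this is Lukasiewicz AND, so Boolean
    inputs in {0, 1} recover the classical truth table exactly.
    """
    return np.clip(beta - np.sum(weights * (1.0 - np.asarray(x))), 0.0, 1.0)

# Classical corner cases with unit weights:
for x in ([1, 1], [1, 0], [0, 1], [0, 0]):
    print(x, "->", lnn_and(x, weights=np.ones(2), beta=1.0))

# Down-weighting an input makes the rule tolerant of its falsehood:
print(lnn_and([1, 0], weights=np.array([1.0, 0.2]), beta=1.0))  # 0.8
```

Because the weights live in a bounded, logic-shaped parameterization, a learned rule can be read back as a weighted logical formula, which is what makes the learned rules highly interpretable.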
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Neuro-Symbolic Reinforcement Learning with First-Order Logic [63.003353499732434]
We propose a novel RL method for text-based games with a recent neuro-symbolic framework called Logical Neural Network.
Our experimental results show RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods in a TextWorld benchmark.
arXiv Detail & Related papers (2021-10-21T08:21:49Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
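One simple integration pattern, external logical rules vetoing actions before a model-free learner chooses, is sketched below; the environment, rule, and all names are hypothetical and stand in for the paper's LNN machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))

def knowledge_mask(state):
    """External rule, e.g. 'action 2 is forbidden in even states'.
    Returns a Boolean mask of admissible actions."""
    mask = np.ones(n_actions, dtype=bool)
    if state % 2 == 0:
        mask[2] = False
    return mask

def choose_action(state, eps=0.1):
    mask = knowledge_mask(state)
    if rng.random() < eps:
        return int(rng.choice(np.flatnonzero(mask)))
    return int(np.argmax(np.where(mask, Q[state], -np.inf)))

def env_step(s, a):                      # toy dynamics, purely illustrative
    s_next = (s + a) % n_states
    return s_next, 1.0 if s_next == n_states - 1 else 0.0

s = 0
for _ in range(2000):
    a = choose_action(s)
    s_next, r = env_step(s, a)
    Q[s, a] += 0.1 * (r + 0.9 * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q.round(2))
```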
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Incorporating Relational Background Knowledge into Reinforcement Learning via Differentiable Inductive Logic Programming [8.122270502556374]
We propose a novel deep Relational Reinforcement Learning (RRL) framework based on differentiable Inductive Logic Programming (ILP).
We show the efficacy of this novel RRL framework using environments such as BoxWorld, GridWorld as well as relational reasoning for the Sort-of-CLEVR dataset.
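The relational interface such a framework consumes can be illustrated with a hypothetical grounding step that turns a raw grid observation into first-order atoms (the predicate schema here is invented for illustration):

```python
import numpy as np

def ground_atoms(grid):
    """Ground a grid observation into relational atoms.

    grid[r, c] holds an object id (0 = empty). Emits at(obj, r, c)
    plus above(obj1, obj2) for vertically adjacent objects.
    """
    atoms = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c]:
                atoms.append(("at", int(grid[r, c]), r, c))
                if r + 1 < rows and grid[r + 1, c]:
                    atoms.append(("above", int(grid[r, c]), int(grid[r + 1, c])))
    return atoms

grid = np.array([[0, 1, 0],
                 [0, 2, 0],
                 [3, 4, 0]])
for atom in ground_atoms(grid):
    print(atom)                  # e.g. ('above', 1, 2): object 1 sits on 2
```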
arXiv Detail & Related papers (2020-03-23T16:56:11Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
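The standard construction behind such guarantees runs the learner on the product of the MDP and an automaton derived from the LTL formula, rewarding accepting transitions. Below is a minimal sketch with a hand-coded two-state automaton for "eventually reach the goal" (in practice the automaton is compiled from the formula, e.g. as a limit-deterministic Büchi automaton):

```python
import numpy as np

rng = np.random.default_rng(0)
N, GOAL = 7, 6          # line world 0..6; LTL goal: eventually reach cell 6

def automaton_step(q, at_goal):
    """Two-state automaton for 'F goal': q0 = not yet, q1 = accepting sink.
    Hand-coded here; in general it is compiled from the LTL formula."""
    return 1 if (q == 1 or at_goal) else 0

Q = np.zeros((N, 2, 2))                 # (mdp state, automaton state, action)
for _ in range(2000):
    s, q = 0, 0
    for _ in range(50):
        a = int(rng.integers(2))        # random behavior; Q-learning is off-policy
        s_next = max(0, min(N - 1, s + (1 if a else -1)))
        q_next = automaton_step(q, s_next == GOAL)
        r = 1.0 if (q == 0 and q_next == 1) else 0.0   # accepting transition
        Q[s, q, a] += 0.1 * (r + 0.95 * Q[s_next, q_next].max() - Q[s, q, a])
        s, q = s_next, q_next

# Greedy policy in the 'goal not yet reached' automaton state:
print(np.argmax(Q[:GOAL, 0, :], axis=1))   # expect [1 1 1 1 1 1]: move right
```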
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.