Reinforcement Learning with External Knowledge by using Logical Neural
Networks
- URL: http://arxiv.org/abs/2103.02363v1
- Date: Wed, 3 Mar 2021 12:34:59 GMT
- Title: Reinforcement Learning with External Knowledge by using Logical Neural
Networks
- Authors: Daiki Kimura, Subhajit Chaudhury, Akifumi Wachi, Ryosuke Kohita, Asim
Munawar, Michiaki Tatsubori, Alexander Gray
- Abstract summary: A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
- Score: 67.46162586940905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional deep reinforcement learning methods are sample-inefficient and
usually require a large number of training trials before convergence. Since
such methods operate on an unconstrained action set, they can lead to useless
actions. A recent neuro-symbolic framework called the Logical Neural Networks
(LNNs) can simultaneously provide key-properties of both neural networks and
symbolic logic. The LNNs functions as an end-to-end differentiable network that
minimizes a novel contradiction loss to learn interpretable rules. In this
paper, we utilize LNNs to define an inference graph using basic logical
operations, such as AND and NOT, for faster convergence in reinforcement
learning. Specifically, we propose an integrated method that enables
model-free reinforcement learning from external knowledge sources within an
LNN-based, logically constrained framework that supports action shielding and
action guiding. Our results empirically demonstrate that our method converges
faster than a model-free reinforcement learning method without such logical
constraints.
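The abstract above mentions real-valued logical operations such as AND and NOT, and the use of external knowledge for action shielding and guiding. The following is a minimal, hypothetical Python sketch of that idea; the Lukasiewicz relaxation, the function names, and the re-weighting scheme are illustrative assumptions, not the authors' implementation.

```python
def soft_and(a, b):
    """Lukasiewicz real-valued AND on truth values in [0, 1]."""
    return max(0.0, a + b - 1.0)

def soft_not(a):
    """Real-valued NOT."""
    return 1.0 - a

def shield_and_guide(action_probs, forbidden, suggested, guide_weight=0.5):
    """Down-weight actions a rule marks as forbidden (shielding), up-weight
    suggested ones (guiding), then renormalize the distribution."""
    adjusted = {}
    for action, p in action_probs.items():
        keep = soft_not(forbidden.get(action, 0.0))
        boost = 1.0 + guide_weight * suggested.get(action, 0.0)
        adjusted[action] = p * keep * boost
    total = sum(adjusted.values()) or 1.0
    return {a: v / total for a, v in adjusted.items()}

# Hypothetical external knowledge: "going east while carrying nothing is
# useless" and "taking the key is recommended"; truth values come from
# evaluating such rules with the soft logical operations above.
carrying_nothing = 1.0
forbidden = {"go east": soft_and(carrying_nothing, 1.0)}
suggested = {"take key": 1.0}
policy = {"go east": 0.4, "go west": 0.3, "take key": 0.3}
print(shield_and_guide(policy, forbidden, suggested))
# The forbidden action's probability drops to zero; the suggested one grows.
```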
Related papers
- Learning Interpretable Differentiable Logic Networks [3.8064485653035987]
We introduce a novel method for learning interpretable differentiable logic networks (DLNs).
We train these networks by softening and differentiating their discrete components: the binarization of inputs, the binary logic operations, and the connections between neurons.
Experimental results on twenty classification tasks indicate that differentiable logic networks can achieve accuracies comparable to or exceeding those of traditional NNs.
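As a rough illustration of the "softening" described in this entry, the sketch below replaces hard input binarization and Boolean gates with smooth surrogates so that gradients can flow. The particular relaxations (sigmoid thresholding, product AND, probabilistic OR) are assumptions made here for illustration, not necessarily the ones used in the paper.

```python
import numpy as np

def soft_input(x, threshold, temperature=20.0):
    """Softened input binarization: a steep sigmoid instead of a hard step."""
    return 1.0 / (1.0 + np.exp(-temperature * (x - threshold)))

def soft_and(a, b):
    return a * b            # product relaxation of AND on [0, 1]

def soft_or(a, b):
    return a + b - a * b    # probabilistic relaxation of OR

def soft_not(a):
    return 1.0 - a

# Example: a tiny two-input logic neuron computing (x0 > 0.5) AND NOT (x1 > 0.2).
x = np.array([0.8, 0.1])
out = soft_and(soft_input(x[0], 0.5), soft_not(soft_input(x[1], 0.2)))
print(out)  # a high truth value; gradients flow because every step is smooth
```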
arXiv Detail & Related papers (2024-07-04T21:58:26Z) - Injecting Logical Constraints into Neural Networks via Straight-Through
Estimators [5.6613898352023515]
Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI.
We find that a straight-through estimator, a method introduced to train binary neural networks, can be effectively applied to incorporate logical constraints into neural network learning.
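For readers unfamiliar with the technique, here is a generic straight-through estimator sketch (written with PyTorch as an assumption; it is not the paper's code): the forward pass uses the hard binarized value, while the backward pass treats the binarization as the identity.

```python
import torch

def binarize_ste(x):
    """Straight-through estimator: hard threshold forward, identity backward."""
    hard = (x > 0.5).float()        # discrete value used in the forward pass
    return x + (hard - x).detach()  # gradient of this expression w.r.t. x is 1

x = torch.tensor([0.3, 0.7], requires_grad=True)
b = binarize_ste(x)                 # forward value: tensor([0., 1.])
loss = ((b - torch.tensor([1.0, 1.0])) ** 2).sum()
loss.backward()
print(x.grad)                       # gradients reach x despite the hard threshold
```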
arXiv Detail & Related papers (2023-07-10T05:12:05Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
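As a heavily simplified, hypothetical illustration of injecting a prior clause into a network's outputs, the sketch below adds a clause-weighted boost to the pre-activations of a clause's literals, pushing the literal that is easiest to satisfy toward truth. It is written in the spirit of KENN's clause enhancers but is not its exact formulation; all names and numbers are illustrative.

```python
import numpy as np

def enhance_clause(logits, literal_signs, clause_weight):
    """Given pre-activations for the literals of one clause (a disjunction),
    boost the literal that is currently easiest to satisfy toward truth."""
    signed = logits * literal_signs               # negated literals flip sign
    soft = np.exp(signed) / np.exp(signed).sum()  # softmax over the literals
    return logits + clause_weight * soft * literal_signs

# Hypothetical clause: smoker(x) -> cancer(x), i.e. NOT smoker(x) OR cancer(x).
logits = np.array([2.0, -1.0])                    # [smoker, cancer] pre-activations
print(enhance_clause(logits, np.array([-1.0, 1.0]), clause_weight=1.5))
# The cancer pre-activation is raised, nudging predictions toward the clause.
```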
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
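The sketch below shows plain interval bound propagation through a single affine layer and a ReLU, the basic operation underlying interval reachability; the paper's treatment of implicit (fixed-point) layers is considerably more involved, and the function names here are illustrative.

```python
import numpy as np

def affine_interval(lower, upper, W, b):
    """Propagate an axis-aligned box [lower, upper] through x -> Wx + b."""
    center, radius = (upper + lower) / 2.0, (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def relu_interval(lower, upper):
    """ReLU is monotone, so it can be applied to both bounds element-wise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.3])
lo, hi = affine_interval(np.array([-0.1, -0.1]), np.array([0.1, 0.1]), W, b)
lo, hi = relu_interval(lo, hi)
print(lo, hi)  # sound element-wise bounds on the layer's output
```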
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to other rule-learning approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
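One commonly cited LNN-style conjunction is a clamped, weighted Lukasiewicz AND. The sketch below is a hedged illustration of why such a neuron reads as an interpretable rule (inputs with near-zero learned weight effectively drop out); it is not the paper's rule-induction procedure.

```python
import numpy as np

def lnn_and(truths, weights, beta=1.0):
    """Weighted real-valued conjunction, clamped to [0, 1]; the learned
    weights indicate which inputs actually participate in the rule."""
    x = beta - np.sum(weights * (1.0 - np.asarray(truths)))
    return float(np.clip(x, 0.0, 1.0))

# With unit weights this reduces to the Lukasiewicz AND: max(0, a + b - 1).
print(lnn_and([0.9, 0.8], np.array([1.0, 1.0])))    # 0.7
# A near-zero weight on the second input means the rule effectively ignores it.
print(lnn_and([0.9, 0.1], np.array([1.0, 0.05])))   # ~0.855
```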
arXiv Detail & Related papers (2021-12-06T19:38:30Z) - Neuro-Symbolic Reinforcement Learning with First-Order Logic [63.003353499732434]
We propose a novel RL method for text-based games with a recent neuro-symbolic framework called Logical Neural Network.
Our experimental results show RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods in a TextWorld benchmark.
arXiv Detail & Related papers (2021-10-21T08:21:49Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
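As background for the ANN-to-SNN conversion idea in the last entry above, here is a minimal, generic rate-coding sketch: an integrate-and-fire neuron whose average firing rate approximates a ReLU activation for non-negative inputs. The paper's layer-wise tandem learning framework goes well beyond this; the code is an illustrative assumption only.

```python
def if_neuron_rate(input_current, threshold=1.0, steps=100):
    """Simulate an integrate-and-fire neuron; return its average firing rate."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current      # integrate a constant input each time step
        if v >= threshold:      # fire and reset by subtracting the threshold
            spikes += 1
            v -= threshold
    return spikes / steps

for current in (0.0, 0.25, 0.5, 0.75):
    # For non-negative inputs, the rate tracks ReLU(current / threshold).
    print(current, if_neuron_rate(current))
```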