A Neuro-Symbolic Approach to Multi-Agent RL for Interpretability and
Probabilistic Decision Making
- URL: http://arxiv.org/abs/2402.13440v1
- Date: Wed, 21 Feb 2024 00:16:08 GMT
- Title: A Neuro-Symbolic Approach to Multi-Agent RL for Interpretability and
Probabilistic Decision Making
- Authors: Chitra Subramanian and Miao Liu and Naweed Khan and Jonathan Lenchner
and Aporva Amarnath and Sarathkrishna Swaminathan and Ryan Riegel and
Alexander Gray
- Abstract summary: Multi-agent reinforcement learning (MARL) is well-suited for runtime decision-making in systems where multiple agents coexist and compete for shared resources.
Applying common deep learning-based MARL solutions to real-world problems suffers from issues of interpretability, sample efficiency, partial observability, etc.
We present an event-driven formulation, where decision-making is handled by distributed co-operative MARL agents using neuro-symbolic methods.
- Score: 42.503612515214044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent reinforcement learning (MARL) is well-suited for runtime
decision-making in optimizing the performance of systems where multiple agents
coexist and compete for shared resources. However, applying common deep
learning-based MARL solutions to real-world problems suffers from issues of
interpretability, sample efficiency, partial observability, etc. To address
these challenges, we present an event-driven formulation, where decision-making
is handled by distributed co-operative MARL agents using neuro-symbolic
methods. The recently introduced neuro-symbolic Logical Neural Networks (LNN)
framework serves as a function approximator for the RL, to train a rules-based
policy that is both logical and interpretable by construction. To enable
decision-making under uncertainty and partial observability, we developed a
novel probabilistic neuro-symbolic framework, Probabilistic Logical Neural
Networks (PLNN), which combines the capabilities of logical reasoning with
probabilistic graphical models. In PLNN, the upward/downward inference
strategy, inherited from LNN, is coupled with belief bounds by setting the
activation function for the logical operator associated with each neural
network node to a probability-respecting generalization of the Fr\'echet
inequalities. These PLNN nodes form the unifying element that combines
probabilistic logic and Bayes Nets, permitting inference for variables with
unobserved states. We demonstrate our contributions by addressing key MARL
challenges for power sharing in a system-on-chip application.
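The belief-bound propagation described in the abstract can be illustrated in a few lines. The sketch below applies the Fréchet inequalities to probability intervals for conjunction and disjunction nodes; the function names and the `(lower, upper)` interval representation are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of Fréchet-inequality belief bounds for PLNN-style
# logical nodes. Each proposition carries a probability interval
# (lower, upper); AND and OR nodes propagate bounds from their inputs.

def frechet_and(a, b):
    """Bounds on P(A and B) given interval bounds on P(A) and P(B)."""
    (al, au), (bl, bu) = a, b
    lower = max(0.0, al + bl - 1.0)   # Fréchet lower bound
    upper = min(au, bu)               # Fréchet upper bound
    return (lower, upper)

def frechet_or(a, b):
    """Bounds on P(A or B) given interval bounds on P(A) and P(B)."""
    (al, au), (bl, bu) = a, b
    lower = max(al, bl)               # at least the larger marginal
    upper = min(1.0, au + bu)         # at most the sum, capped at 1
    return (lower, upper)

# Example: two partially observed propositions with uncertain beliefs.
p_a = (0.7, 0.9)
p_b = (0.5, 0.6)
print(frechet_and(p_a, p_b))
print(frechet_or(p_a, p_b))
```

Because the bounds hold for *any* joint distribution with the given marginals, such nodes can propagate beliefs upward and downward even when the dependence between propositions is unknown, which is what enables inference over variables with unobserved states.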
Related papers
- Neural Networks Decoded: Targeted and Robust Analysis of Neural Network Decisions via Causal Explanations and Reasoning [9.947555560412397]
We introduce TRACER, a novel method grounded in causal inference theory to estimate the causal dynamics underpinning DNN decisions.
Our approach systematically intervenes on input features to observe how specific changes propagate through the network, affecting internal activations and final outputs.
TRACER further enhances explainability by generating counterfactuals that reveal possible model biases and offer contrastive explanations for misclassifications.
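The intervention idea in this summary can be shown with a toy model: set one input feature to a chosen value (a do-style intervention) and measure how the output shifts. The toy scorer and the `effect_of` helper below are hypothetical illustrations of the general principle, not TRACER's actual algorithm.

```python
# Sketch: estimate a feature's causal effect on a model's output by
# intervening on that feature and observing the change in prediction.

def model(x):
    # Toy "network": weighted sum followed by a ReLU-style nonlinearity.
    w = [0.8, -0.3, 0.5]
    s = sum(wi * xi for wi, xi in zip(w, x))
    return max(0.0, s)

def effect_of(x, i, value):
    """Output change caused by setting feature i to `value`
    (a do-style intervention), holding all other features fixed."""
    x_do = list(x)
    x_do[i] = value
    return model(x_do) - model(x)

x = [1.0, 1.0, 1.0]
# Intervene on feature 0: force it to 0 and measure the effect.
print(effect_of(x, 0, 0.0))
```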
arXiv Detail & Related papers (2024-10-07T20:44:53Z)
- Sequential Recommendation with Probabilistic Logical Reasoning [24.908805534534547]
We combine the Deep Neural Network (DNN) SR models with logical reasoning.
This framework allows SR-PLR to benefit from both similarity matching and logical reasoning.
Experiments on various sequential recommendation models demonstrate the effectiveness of the SR-PLR.
arXiv Detail & Related papers (2023-04-22T12:25:40Z)
- Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP).
It can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations, and hard constraints, over a structured output space.
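The constraint-consistency idea behind this summary can be sketched in miniature: keep only the structured outputs that satisfy a symbolic constraint and select the most probable among them. The enumeration-based code below is an illustrative sketch of that general idea under an independence assumption on labels, not the SPL layer itself.

```python
# Sketch: constraint-consistent structured prediction by enumeration.
# p gives per-label probabilities of being 1 (assumed independent);
# `valid` is a predicate encoding the symbolic constraint.
from itertools import product

def constrained_argmax(p, valid):
    """Return the most probable binary label vector among those
    satisfying the constraint `valid`."""
    best, best_score = None, -1.0
    for y in product([0, 1], repeat=len(p)):
        if not valid(y):
            continue  # constraint violated: this output gets zero mass
        score = 1.0
        for pi, yi in zip(p, y):
            score *= pi if yi else (1.0 - pi)
        if score > best_score:
            best, best_score = y, score
    return best

# Toy constraint: exactly one label may be active (one-hot output).
p = [0.9, 0.8, 0.1]
print(constrained_argmax(p, lambda y: sum(y) == 1))
```

Unconstrained, the best output would activate the first two labels; restricted to one-hot outputs, the prediction is guaranteed to satisfy the constraint by construction.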
arXiv Detail & Related papers (2022-06-01T12:02:38Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- SLASH: Embracing Probabilistic Circuits into Neural Answer Set Programming [15.814914345000574]
We introduce SLASH -- a novel deep probabilistic programming language (DPPL).
At its core, SLASH consists of Neural-Probabilistic Predicates (NPPs) and logical programs which are united via answer set programming.
We evaluate SLASH on the benchmark data of MNIST addition as well as novel tasks for DPPLs such as missing data prediction and set prediction with state-of-the-art performance.
arXiv Detail & Related papers (2021-10-07T12:35:55Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework that allows jointly training the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results, in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z) - Efficient Probabilistic Logic Reasoning with Graph Neural Networks [63.099999467118245]
Markov Logic Networks (MLNs) can be used to address many knowledge graph problems.
Inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult.
We propose a graph neural network (GNN) variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model.
arXiv Detail & Related papers (2020-01-29T23:34:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.