Fusing Interpretable Knowledge of Neural Network Learning Agents For Swarm-Guidance
- URL: http://arxiv.org/abs/2204.00272v1
- Date: Fri, 1 Apr 2022 08:07:41 GMT
- Title: Fusing Interpretable Knowledge of Neural Network Learning Agents For Swarm-Guidance
- Authors: Duy Tung Nguyen, Kathryn Kasmarik, Hussein Abbass
- Abstract summary: Neural-based learning agents make decisions using internal artificial neural networks.
In certain situations, it becomes pertinent that this knowledge is re-interpreted in a form friendly to both the human and the machine.
We propose an interpretable knowledge fusion framework suited to neural-based learning agents, together with a Priority on Weak State Areas (PoWSA) retraining technique.
- Score: 0.5156484100374059
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural-based learning agents make decisions using internal artificial neural networks. In certain situations, it becomes pertinent that this knowledge is re-interpreted in a form friendly to both the human and the machine. These situations include: when agents must communicate the knowledge they learn to each other transparently in the presence of an external human observer, in human-machine teaming settings where humans and machines need to collaborate on a task, or when there is a requirement to verify the knowledge exchanged between the agents. We propose an interpretable knowledge fusion framework suited to neural-based learning agents, together with a Priority on Weak State Areas (PoWSA) retraining technique. We first test the proposed framework on a synthetic binary classification task before evaluating it on a shepherding-based multi-agent swarm guidance task. Results demonstrate that the proposed framework increases the success rate in the swarm-guidance environment by 11% and improves stability, in return for a modest 14.5% increase in computational cost to achieve interpretability. Moreover, the framework presents the knowledge learnt by an agent in a human-friendly representation, yielding a more descriptive visual representation of the agent's knowledge.
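The abstract does not spell out how PoWSA works beyond its name, so the following is a minimal, hypothetical sketch of the general idea it suggests: measure where the current policy performs poorly across the state space and oversample those weak areas during retraining. All function names, the weakness measure, and the toy task below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of "Priority on Weak State Areas" (PoWSA)-style
# retraining: states where the policy performs poorly are oversampled
# when building the next retraining batch.
import numpy as np

def weakness(policy, states, targets):
    """Per-state prediction error; a high score marks a 'weak' state area."""
    preds = policy(states)
    return np.abs(preds - targets).mean(axis=1)  # one score per state

def powsa_resample(states, targets, scores, rng, temperature=1.0):
    """Draw a retraining batch biased towards weak state areas."""
    probs = np.exp(scores / temperature)
    probs /= probs.sum()
    idx = rng.choice(len(states), size=len(states), p=probs)  # with replacement
    return states[idx], targets[idx]

# Toy usage with a linear 'policy' on a synthetic binary task.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 1))
policy = lambda s: 1.0 / (1.0 + np.exp(-s @ W))   # sigmoid scores in (0, 1)
states = rng.normal(size=(256, 4))
targets = (states[:, :1] > 0).astype(float)       # synthetic binary labels

scores = weakness(policy, states, targets)
batch_states, batch_targets = powsa_resample(states, targets, scores, rng)
# batch_states / batch_targets would then feed an ordinary retraining step.
```

The temperature parameter controls how aggressively weak areas are prioritised: a low temperature concentrates the batch on the worst states, while a high one approaches uniform resampling.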
Related papers
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the implementation procedure and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Representation Engineering: A Top-Down Approach to AI Transparency [132.0398250233924]
We identify and characterize the emerging area of representation engineering (RepE).
RepE places population-level representations, rather than neurons or circuits, at the center of analysis.
We showcase how these methods can provide traction on a wide range of safety-relevant problems.
arXiv Detail & Related papers (2023-10-02T17:59:07Z)
- Flexible and Inherently Comprehensible Knowledge Representation for Data-Efficient Learning and Trustworthy Human-Machine Teaming in Manufacturing Environments [0.0]
Trustworthiness of artificially intelligent agents is vital for the acceptance of human-machine teaming in industrial manufacturing environments.
We make use of Gärdenfors's cognitively inspired Conceptual Space framework to represent the agent's knowledge.
A simple typicality model is built on top of it to determine fuzzy category membership and classify instances interpretably (a minimal sketch of such a typicality model appears after this list).
arXiv Detail & Related papers (2023-05-19T11:18:23Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- An Interactive Explanatory AI System for Industrial Quality Control [0.8889304968879161]
We aim to extend the defect detection task towards an interactive human-in-the-loop approach.
We propose an approach for an interactive support system for classifications in an industrial quality control setting.
arXiv Detail & Related papers (2022-03-17T09:04:46Z)
- Probe-Based Interventions for Modifying Agent Behavior [4.324022085722613]
We develop a method for updating representations in pre-trained neural nets according to externally-specified properties.
In experiments, we show how our method may be used to improve human-agent team performance for a variety of neural networks.
arXiv Detail & Related papers (2022-01-26T19:14:00Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent evolves as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Learning Human Rewards by Inferring Their Latent Intelligence Levels in Multi-Agent Games: A Theory-of-Mind Approach with Application to Driving Data [18.750834997334664]
We argue that humans are boundedly rational and have different intelligence levels when reasoning about others' decision-making process.
We propose a new multi-agent Inverse Reinforcement Learning framework that reasons about humans' latent intelligence levels during learning.
arXiv Detail & Related papers (2021-03-07T07:48:31Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- Automatic Gesture Recognition in Robot-assisted Surgery with Reinforcement Learning and Tree Search [63.07088785532908]
We propose a framework based on reinforcement learning and tree search for joint surgical gesture segmentation and classification.
Our framework consistently outperforms the existing methods on the suturing task of the JIGSAWS dataset in terms of accuracy, edit score and F1 score.
arXiv Detail & Related papers (2020-02-20T13:12:38Z)
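As referenced in the conceptual-space entry above, here is a minimal, hypothetical sketch of a typicality model over a conceptual space: fuzzy category membership decays with distance from a category prototype, and classification picks the category of highest typicality. The prototypes, the distance metric, and the exponential decay below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical typicality model over a conceptual space: an instance's fuzzy
# membership in a category decays with its distance from that category's
# prototype point, which keeps every classification directly inspectable.
import numpy as np

def typicality(instance, prototype, sensitivity=1.0):
    """Fuzzy membership in (0, 1]: 1 at the prototype, falling with distance."""
    dist = np.linalg.norm(instance - prototype)
    return float(np.exp(-sensitivity * dist))

def classify(instance, prototypes):
    """Interpretable classification: pick the category of highest typicality."""
    scores = {name: typicality(instance, proto)
              for name, proto in prototypes.items()}
    return max(scores, key=scores.get), scores

# Toy usage with two illustrative categories in a 2-D quality space.
prototypes = {"apple": np.array([0.8, 0.1]), "banana": np.array([0.2, 0.9])}
label, scores = classify(np.array([0.7, 0.2]), prototypes)
print(label, scores)  # 'apple', plus the per-category typicality scores
```

Because the decision is just "nearest prototype, with a graded score", both the predicted label and the degree of membership can be shown to a human directly, which is the interpretability property the entry emphasises.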
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences arising from its use.