Trustworthy Conceptual Explanations for Neural Networks in Robot Decision-Making
- URL: http://arxiv.org/abs/2409.10733v1
- Date: Mon, 16 Sep 2024 21:11:12 GMT
- Title: Trustworthy Conceptual Explanations for Neural Networks in Robot Decision-Making
- Authors: Som Sagar, Aditya Taparia, Harsh Mankodiya, Pranav Bidare, Yifan Zhou, Ransalu Senanayake
- Abstract summary: We introduce a trustworthy explainable robotics technique based on human-interpretable, high-level concepts.
Our proposed technique provides explanations with associated uncertainty scores by matching the neural network's activations with human-interpretable visualizations.
- Score: 9.002659157558645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Black box neural networks are an indispensable part of modern robots. Nevertheless, deploying such high-stakes systems in real-world scenarios poses significant challenges when the stakeholders, such as engineers and legislative bodies, lack insights into the neural networks' decision-making process. Presently, explainable AI is primarily tailored to natural language processing and computer vision, falling short in two critical aspects when applied to robots: grounding in decision-making tasks and the ability to assess the trustworthiness of their explanations. In this paper, we introduce a trustworthy explainable robotics technique based on human-interpretable, high-level concepts that contribute to the decisions made by the neural network. Our proposed technique provides explanations with associated uncertainty scores by matching the neural network's activations with human-interpretable visualizations. To validate our approach, we conducted a series of experiments with various simulated and real-world robot decision-making models, demonstrating the effectiveness of the proposed approach as a post-hoc, human-friendly robot learning diagnostic tool.
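The abstract describes concept-based attributions with associated uncertainty scores. As a rough illustration of how such a score can be computed, the following is a minimal TCAV-style sketch: a linear probe separates concept activations from random activations, its normal vector is compared against decision gradients, and repeating the procedure over several random counterexample sets gives a simple spread-based uncertainty estimate. Function names, shapes, and the uncertainty scheme here are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of a TCAV-style concept attribution score with a
# simple uncertainty estimate; not the paper's exact method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def concept_vector(concept_acts, random_acts):
    """Fit a linear probe separating concept activations from random ones;
    its (normalized) normal vector acts as the concept activation vector."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_[0]
    return v / np.linalg.norm(v)

def concept_score(decision_grads, cav):
    """Fraction of decision gradients with a positive directional
    derivative along the concept vector (TCAV-style sensitivity)."""
    return float(np.mean(decision_grads @ cav > 0))

# Synthetic stand-ins for a layer's activations and decision gradients.
d = 64                                            # activation dimensionality
concept_acts = rng.normal(0.5, 1.0, (200, d))     # activations on concept examples
decision_grads = rng.normal(0.1, 1.0, (500, d))   # d(decision) / d(activation)

# Repeat against several random counterexample sets to get a spread-based
# uncertainty estimate alongside the mean attribution score.
scores = []
for _ in range(20):
    random_acts = rng.normal(0.0, 1.0, (200, d))
    cav = concept_vector(concept_acts, random_acts)
    scores.append(concept_score(decision_grads, cav))

print(f"concept attribution: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```

In practice the synthetic arrays above would be replaced by activations extracted from the robot policy's intermediate layer and gradients of its decision output, with the spread across random sets serving as the reported uncertainty.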
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is challenging to achieve through advances in conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- How to Raise a Robot -- A Case for Neuro-Symbolic AI in Constrained Task Planning for Humanoid Assistive Robots [4.286794014747407]
We explore the novel field of incorporating privacy, security, and access control constraints with robot task planning approaches.
We report preliminary results on the classical symbolic approach, deep-learned neural networks, and modern ideas using large language models as a knowledge base.
arXiv Detail & Related papers (2023-12-14T11:09:50Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Functional neural network for decision processing, a racing network of programmable neurons with fuzzy logic where the target operating model relies on the network itself [1.1602089225841632]
This paper introduces a novel artificial intelligence model, the functional neural network, for modeling human decision-making processes.
We believe that this functional neural network has promising potential to transform the way decision-making can be computed.
arXiv Detail & Related papers (2021-02-24T15:19:35Z)
- Axiom Learning and Belief Tracing for Transparent Decision Making in Robotics [8.566457170664926]
A robot's ability to provide descriptions of its decisions and beliefs promotes effective collaboration with humans.
Our architecture couples the complementary strengths of non-monotonic logical reasoning, deep learning, and decision-tree induction.
During reasoning and learning, the architecture enables a robot to provide on-demand relational descriptions of its decisions, beliefs, and the outcomes of hypothetical actions.
arXiv Detail & Related papers (2020-10-20T22:09:17Z)
- Explainable Goal-Driven Agents and Robots -- A Comprehensive Review [13.94373363822037]
The paper reviews approaches on explainable goal-driven intelligent agents and robots.
It focuses on techniques for explaining and communicating agents' perceptual functions and cognitive reasoning.
It suggests a roadmap for the possible realization of effective goal-driven explainable agents and robots.
arXiv Detail & Related papers (2020-04-21T01:41:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.