Axiom Learning and Belief Tracing for Transparent Decision Making in
Robotics
- URL: http://arxiv.org/abs/2010.10645v1
- Date: Tue, 20 Oct 2020 22:09:17 GMT
- Title: Axiom Learning and Belief Tracing for Transparent Decision Making in
Robotics
- Authors: Tiago Mota, Mohan Sridharan
- Abstract summary: A robot's ability to provide descriptions of its decisions and beliefs promotes effective collaboration with humans.
Our architecture couples the complementary strengths of non-monotonic logical reasoning, deep learning, and decision-tree induction.
During reasoning and learning, the architecture enables a robot to provide on-demand relational descriptions of its decisions, beliefs, and the outcomes of hypothetical actions.
- Score: 8.566457170664926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A robot's ability to provide descriptions of its decisions and beliefs
promotes effective collaboration with humans. Providing such transparency is
particularly challenging in integrated robot systems that include
knowledge-based reasoning methods and data-driven learning algorithms. Towards
addressing this challenge, our architecture couples the complementary strengths
of non-monotonic logical reasoning, deep learning, and decision-tree induction.
During reasoning and learning, the architecture enables a robot to provide
on-demand relational descriptions of its decisions, beliefs, and the outcomes
of hypothetical actions. These capabilities are grounded and evaluated in the
context of scene understanding tasks and planning tasks performed using
simulated images and images from a physical robot manipulating tabletop
objects.
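As a rough illustration of the decision-tree induction step only, here is a minimal sketch that fits a tree over assumed relational features of a pickup action and reads each root-to-leaf path off as a candidate axiom. The feature names, toy data, and ASP-style rendering are illustrative assumptions, not the authors' implementation, which couples induction with a non-monotonic logical reasoner.

```python
# Minimal sketch: inducing candidate relational axioms with a decision tree.
# Feature names, toy data, and the rule rendering are illustrative assumptions;
# the paper's architecture grounds induced axioms in non-monotonic reasoning.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Relational features of an action's context (1 = relation holds).
FEATURES = ["obj_is_heavy", "obj_on_table", "gripper_empty"]

# Toy observations: did attempting pickup(obj) succeed?
X = np.array([
    [0, 1, 1],
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
])
y = np.array([1, 0, 0, 0, 1])  # 1 = success, 0 = failure

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

def extract_rules(tree, names):
    """Walk root-to-leaf paths and emit each as a readable candidate axiom."""
    t = tree.tree_
    rules = []
    def walk(node, conds):
        if t.children_left[node] == -1:            # leaf node
            label = int(np.argmax(t.value[node]))
            rules.append((conds, "success" if label else "failure"))
            return
        name = names[t.feature[node]]
        walk(t.children_left[node], conds + [f"not {name}"])   # feature = 0
        walk(t.children_right[node], conds + [name])           # feature = 1
    walk(0, [])
    return rules

for conds, outcome in extract_rules(tree, FEATURES):
    print(f"{outcome} :- " + ", ".join(conds) + ".")
```

In the paper's architecture, candidate axioms like these would be vetted against, and merged into, the robot's logical knowledge base rather than simply printed.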
Related papers
- SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning [17.125080112897102]
This paper addresses a challenging interactive task learning scenario where the robot is unaware of a concept that's key to solving the instructed task.
We propose SECURE, an interactive task learning framework designed to solve such problems by fixing a deficient domain model using embodied conversation.
Using SECURE, the robot not only learns from the user's corrective feedback when it makes a mistake, but it also learns to make strategic dialogue decisions for revealing useful evidence about novel concepts for solving the instructed task.
arXiv Detail & Related papers (2024-09-26T11:40:07Z)
- Trustworthy Conceptual Explanations for Neural Networks in Robot Decision-Making [9.002659157558645]
We introduce a trustworthy explainable robotics technique based on human-interpretable, high-level concepts.
Our proposed technique provides explanations with associated uncertainty scores by matching a neural network's activations with human-interpretable visualizations.
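As one hedged reading of that matching step, the sketch below scores an activation vector against concept prototype vectors by cosine similarity and reports the spread across prototypes as a crude uncertainty; the concept names, prototypes, and uncertainty recipe are assumptions, not the paper's method.

```python
# Sketch: match an activation vector to human-interpretable concept prototypes
# and attach a crude uncertainty score. Concept names, the cosine-similarity
# recipe, and the spread-based uncertainty are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64
CONCEPTS = ["graspable", "fragile", "occluded"]

# Assume each concept has several prototype activation vectors, e.g. averaged
# activations over labelled example images (here: random stand-ins).
prototypes = {c: rng.normal(size=(5, DIM)) for c in CONCEPTS}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def explain(activation):
    """Score each concept; report mean similarity and its spread (uncertainty)."""
    report = {}
    for concept, protos in prototypes.items():
        sims = [cosine(activation, p) for p in protos]
        report[concept] = (np.mean(sims), np.std(sims))
    return report

activation = rng.normal(size=DIM)  # stand-in for a real layer's activations
for concept, (score, unc) in explain(activation).items():
    print(f"{concept}: match={score:+.2f} +/- {unc:.2f}")
```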
arXiv Detail & Related papers (2024-09-16T21:11:12Z)
- A Survey of Embodied Learning for Object-Centric Robotic Manipulation [27.569063968870868]
Embodied learning for object-centric robotic manipulation is a rapidly developing and challenging area in AI.
Unlike data-driven machine learning methods, embodied learning focuses on robot learning through physical interaction with the environment.
arXiv Detail & Related papers (2024-08-21T11:32:09Z)
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- QUAR-VLA: Vision-Language-Action Model for Quadruped Robots [37.952398683031895]
The central idea is to elevate the overall intelligence of the robot.
We propose QUAdruped Robotic Transformer (QUART), a family of VLA models to integrate visual information and instructions from diverse modalities as input.
Our approach leads to performant robotic policies and enables QUART to obtain a range of emergent capabilities.
arXiv Detail & Related papers (2023-12-22T06:15:03Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
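One generic way to turn such per-substep image examples into autonomous real-world RL supervision is to train a success classifier per substep and reward progress through the stages; the sketch below shows that recipe under assumed interfaces and is not the paper's actual pipeline.

```python
# Sketch: turn per-substep image examples into an ordered reward signal.
# The classifier interface and the staged-reward rule are assumptions; the
# paper's system has its own framework for defining tasks from image examples.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Substep:
    name: str
    success_prob: Callable[[np.ndarray], float]  # classifier over images

def staged_reward(image: np.ndarray, substeps: List[Substep],
                  threshold: float = 0.9) -> float:
    """Reward = number of substeps whose classifier fires, in order,
    plus the next classifier's probability as dense shaping."""
    stage = 0.0
    for step in substeps:
        p = step.success_prob(image)
        if p >= threshold:
            stage += 1.0
        else:
            return stage + p      # dense shaping toward the next substep
    return stage                  # all substeps (and the final task) achieved

# Toy usage with constant stand-in classifiers:
steps = [Substep("reach", lambda img: 0.95),
         Substep("grasp", lambda img: 0.40),
         Substep("place", lambda img: 0.05)]
print(staged_reward(np.zeros((64, 64, 3)), steps))  # -> 1.4
```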
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
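The paper defines its own metrics; for intuition, a common disentanglement measure elsewhere in the literature is a mutual-information gap, where each ground-truth factor should be captured by one learned dimension far better than by the runner-up. A sketch under that assumption:

```python
# Sketch of a mutual-information-gap style disentanglement score. This is a
# standard formulation used here for illustration; the paper defines its own
# metrics over tree-derived factors of variation.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 500
true_factors = rng.normal(size=(n, 2))          # ground-truth factors
noise = 0.1 * rng.normal(size=(n, 3))
# Learned code: dims 0 and 1 track one factor each, dim 2 mixes both.
learned = np.column_stack([
    true_factors[:, 0],
    true_factors[:, 1],
    true_factors.sum(axis=1),
]) + noise

gaps = []
for k in range(true_factors.shape[1]):
    mi = mutual_info_regression(learned, true_factors[:, k], random_state=0)
    top, runner_up = np.sort(mi)[::-1][:2]
    gaps.append(top - runner_up)                # big gap = disentangled factor
print("per-factor MI gap:", np.round(gaps, 2))
```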
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- A Road-map to Robot Task Execution with the Functional Object-Oriented Network [77.93376696738409]
The functional object-oriented network (FOON) is a knowledge graph representation for robots.
Taking the form of a bipartite graph, a FOON contains symbolic or high-level information that would be pertinent to a robot's understanding of its environment and tasks.
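To make the bipartite structure concrete, here is a minimal sketch of a FOON-style graph with a made-up pour-water functional unit; the node names and query are illustrative, not taken from a real FOON.

```python
# Sketch: a tiny FOON-style bipartite graph. The pour-water nodes and edges
# are made-up illustrations of the structure, not taken from a real FOON.
import networkx as nx

foon = nx.DiGraph()
# Object nodes carry states; functional (motion) nodes connect them.
objects = ["cup(empty)", "kettle(full)", "cup(full)", "kettle(less-full)"]
motions = ["pour"]
foon.add_nodes_from(objects, bipartite="object")
foon.add_nodes_from(motions, bipartite="motion")

# Input objects -> motion -> output objects (one functional unit).
foon.add_edges_from([("cup(empty)", "pour"), ("kettle(full)", "pour"),
                     ("pour", "cup(full)"), ("pour", "kettle(less-full)")])

# A robot can ask: which motion produces cup(full), and from what inputs?
for motion in foon.predecessors("cup(full)"):
    inputs = list(foon.predecessors(motion))
    print(f"{motion} with inputs {inputs} yields cup(full)")
```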
arXiv Detail & Related papers (2021-06-01T00:43:04Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.