Explainable Autonomous Robots: A Survey and Perspective
- URL: http://arxiv.org/abs/2105.02658v1
- Date: Thu, 6 May 2021 13:38:02 GMT
- Title: Explainable Autonomous Robots: A Survey and Perspective
- Authors: Tatsuya Sakai and Takayuki Nagai
- Abstract summary: Development of explanatory capabilities is an urgent first step toward autonomous robots.
This survey provides an overview of the various types of "explainability" discussed in machine learning research.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced communication protocols are critical to enable the coexistence of
autonomous robots with humans. Thus, the development of explanatory
capabilities is an urgent first step toward autonomous robots. This survey
provides an overview of the various types of "explainability" discussed in
machine learning research. Then, we discuss the definition of "explainability"
in the context of autonomous robots (i.e., explainable autonomous robots) by
exploring the question "what is an explanation?" We further conduct a research
survey based on this definition and present some relevant topics for future
research.
Related papers
- A roadmap for AI in robotics [55.87087746398059]
We are witnessing growing excitement in robotics at the prospect of leveraging the potential of AI to tackle some of the outstanding barriers to the full deployment of robots in our daily lives.
This article offers an assessment of what AI for robotics has achieved since the 1990s and proposes a short- and medium-term research roadmap listing challenges and promises.
arXiv Detail & Related papers (2025-07-26T15:18:28Z)
- Multimodal Coherent Explanation Generation of Robot Failures [1.3965477771846408]
We introduce an approach to generate coherent multimodal explanations by checking the logical coherence of explanations from different modalities.
Our experiments suggest that fine-tuning a neural network pre-trained to recognize textual entailment performs well for coherence assessment.
arXiv Detail & Related papers (2024-10-01T13:15:38Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach, MOO, which leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- A Capability and Skill Model for Heterogeneous Autonomous Robots [69.50862982117127]
Capability modeling is considered a promising approach to semantically model functions provided by different machines.
This contribution investigates how to apply and extend capability models from manufacturing to the field of autonomous robots.
arXiv Detail & Related papers (2022-09-22T10:13:55Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Understanding a Robot's Guiding Ethical Principles via Automatically Generated Explanations [4.393037165265444]
We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations.
Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
arXiv Detail & Related papers (2022-06-20T22:55:00Z)
- Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures [69.03494351066846]
We investigate two ways to make robots proactive.
One way is to recognize humans' intentions and act to fulfill them, for example by opening a door that a person is about to walk through.
The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them.
arXiv Detail & Related papers (2022-05-11T13:33:14Z)
- Explain yourself! Effects of Explanations in Human-Robot Interaction [10.389325878657697]
Explanations of robot decisions could affect user perceptions, justify the robots' reliability, and increase trust.
The effects on human perceptions of robots that explain their decisions have not been studied thoroughly.
This study demonstrates the need for and potential of explainable human-robot interaction.
arXiv Detail & Related papers (2022-04-09T15:54:27Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Semantics for Robotic Mapping, Perception and Interaction: A Survey [93.93587844202534]
The study of semantics concerns what the world "means" to a robot.
With humans and robots increasingly operating in the same world, the prospects of human-robot interaction also bring semantics into the picture.
Driven by need, as well as by enablers like increasing availability of training data and computational resources, semantics is a rapidly growing research area in robotics.
arXiv Detail & Related papers (2021-01-02T12:34:39Z)
- Model Elicitation through Direct Questioning [22.907680615911755]
We show how a robot can interact to localize the human model from a set of models.
We show how to generate questions to refine the robot's understanding of the teammate's model.
arXiv Detail & Related papers (2020-11-24T18:17:16Z)
- Integrating Intrinsic and Extrinsic Explainability: The Relevance of Understanding Neural Networks for Human-Robot Interaction [19.844084722919764]
Explainable artificial intelligence (XAI) can help foster trust in and acceptance of intelligent and autonomous systems.
NICO, an open-source humanoid robot platform, is introduced, and it is shown how the interaction of intrinsic explanations generated by the robot itself and extrinsic explanations provided by the environment enables efficient robotic behavior.
arXiv Detail & Related papers (2020-10-09T14:28:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.