Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and
Pairwise Ranking to Explain Robot Failures
- URL: http://arxiv.org/abs/2108.03554v1
- Date: Sun, 8 Aug 2021 02:44:23 GMT
- Title: Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and
Pairwise Ranking to Explain Robot Failures
- Authors: Devleena Das, Sonia Chernova
- Abstract summary: We introduce a more generalizable semantic explanation framework.
Our framework autonomously captures the semantic information in a scene to produce semantically descriptive explanations.
Our results show that these semantically descriptive explanations significantly improve everyday users' ability to both identify failures and provide assistance for recovery.
- Score: 18.80051800388596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When interacting in unstructured human environments, occasional robot
failures are inevitable. When such failures occur, everyday people, rather than
trained technicians, will be the first to respond. Existing natural language
explanations hand-annotate contextual information from an environment to help
everyday people understand robot failures. However, this methodology lacks
generalizability and scalability. In our work, we introduce a more
generalizable semantic explanation framework. Our framework autonomously
captures the semantic information in a scene to produce semantically
descriptive explanations for everyday users. To generate failure-focused
explanations that are semantically grounded, we leverage semantic scene graphs
to extract spatial relations and object attributes from an environment,
together with pairwise ranking. Our results show that these semantically
descriptive explanations significantly improve everyday users' ability to both
identify failures and provide assistance for recovery, compared to the existing
state-of-the-art context-based explanations.
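The abstract names two components worth making concrete: a semantic scene graph (objects with attributes, linked by spatial relations) and a pairwise ranking step that selects the failure-relevant semantic details to verbalize. Below is a minimal, hypothetical Python sketch of that pipeline; the objects, relations, relevance weights, and explanation template are invented for illustration, and the hand-set scoring table merely stands in for whatever pairwise ranking model the framework actually uses.

```python
# Minimal sketch of the pipeline described in the abstract: build a semantic
# scene graph, enumerate candidate semantic details, and use pairwise ranking
# to pick the most failure-relevant detail for the explanation.
# All names, relations, weights, and templates are illustrative assumptions,
# not the authors' actual model or data.

scene_graph = {
    "objects": {  # objects and their attributes
        "cup":   {"color": "red", "state": "tipped_over"},
        "table": {"color": "brown", "state": "upright"},
        "robot": {"state": "gripper_empty"},
    },
    "relations": [  # (subject, spatial relation, object) triples
        ("cup", "on_top_of", "table"),
        ("robot", "next_to", "table"),
    ],
}

def candidate_details(graph):
    """Enumerate semantic details: attribute facts and spatial-relation facts."""
    for name, attrs in graph["objects"].items():
        for value in attrs.values():
            yield f"the {name} is {value.replace('_', ' ')}"
    for subj, rel, obj in graph["relations"]:
        yield f"the {subj} is {rel.replace('_', ' ')} the {obj}"

# A learned pairwise ranker would judge which of two details better explains
# the failure; this hand-set relevance table stands in for that model.
RELEVANCE = {"tipped over": 3.0, "gripper empty": 2.0, "on top of": 1.0}

def score(detail):
    return max((v for k, v in RELEVANCE.items() if k in detail), default=0.0)

def prefers(a, b):
    """Pairwise comparison: does detail `a` outrank detail `b`?"""
    return score(a) > score(b)

def rank(details):
    """Rank details by number of pairwise wins against the others."""
    details = list(details)
    wins = {d: sum(prefers(d, o) for o in details if o is not d) for d in details}
    return sorted(details, key=wins.get, reverse=True)

if __name__ == "__main__":
    best = rank(candidate_details(scene_graph))[0]
    print(f"I failed to pick up the cup because {best}.")
    # -> "I failed to pick up the cup because the cup is tipped over."
```

The design point illustrated here is that the explanation text is assembled from graph-derived semantics rather than hand-annotated context, which is what the abstract claims makes the approach more generalizable and scalable than prior hand-annotated explanations.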
Related papers
- Situated Instruction Following [87.37244711380411]
We propose situated instruction following, which embraces the inherent underspecification and ambiguity of real-world communication.
The meaning of situated instructions naturally unfolds through the past actions and the expected future behaviors of the human involved.
Our experiments indicate that state-of-the-art Embodied Instruction Following (EIF) models lack holistic understanding of situated human intention.
arXiv Detail & Related papers (2024-07-15T19:32:30Z)
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- A Closer Look at Reward Decomposition for High-Level Robotic Explanations [18.019811754800767]
We propose an explainable Q-Map learning framework that combines reward decomposition with abstracted action spaces.
We demonstrate the effectiveness of our framework through quantitative and qualitative analysis of two robotic scenarios.
arXiv Detail & Related papers (2023-04-25T16:01:42Z)
- Is the Elephant Flying? Resolving Ambiguities in Text-to-Image Generative Models [64.58271886337826]
We study ambiguities that arise in text-to-image generative models.
We propose a framework to mitigate ambiguities in the prompts given to the systems by soliciting clarifications from the user.
arXiv Detail & Related papers (2022-11-17T17:12:43Z)
- Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z)
- Language Understanding for Field and Service Robots in a Priori Unknown Environments [29.16936249846063]
This paper provides a novel learning framework that allows field and service robots to interpret and execute natural language instructions.
We use language as a "sensor" -- inferring spatial, topological, and semantic information implicit in natural language utterances.
We incorporate this distribution in a probabilistic language grounding model and infer a distribution over a symbolic representation of the robot's action space.
arXiv Detail & Related papers (2021-05-21T15:13:05Z)
- Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery [19.56670862587773]
We introduce a new type of explanation that explains the cause of an unexpected failure during an agent's plan execution to non-experts.
We investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model.
arXiv Detail & Related papers (2021-01-05T16:16:39Z)
- Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots need to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z)
- Explainable AI for System Failures: Generating Explanations that Improve Human Assistance in Fault Recovery [15.359877013989228]
We develop automated, natural language explanations for failures encountered during an AI agent's plan execution.
These explanations are developed with a focus on helping non-expert users understand different points of failure.
We extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations.
arXiv Detail & Related papers (2020-11-18T17:08:50Z)
- Lifelong update of semantic maps in dynamic environments [2.343080600040765]
A robot understands its world through the raw information it senses from its surroundings.
A semantic map, containing high-level information that both the robot and user understand, is better suited to be a shared representation.
We use the semantic map as the user-facing interface on our fleet of floor-cleaning robots.
arXiv Detail & Related papers (2020-10-17T18:44:33Z)
- Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords more effective reasoning and inference capability and overcomes the errors that originate from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.