Enabling Morally Sensitive Robotic Clarification Requests
- URL: http://arxiv.org/abs/2007.08670v1
- Date: Thu, 16 Jul 2020 22:12:35 GMT
- Title: Enabling Morally Sensitive Robotic Clarification Requests
- Authors: Ryan Blake Jackson and Tom Williams
- Abstract summary: Reflexive generation of clarification requests can lead robots to miscommunicate their moral dispositions.
We present a solution by performing moral reasoning on each potential disambiguation of an ambiguous human utterance.
We then evaluate our method with a human subjects experiment, the results of which indicate that our approach successfully ameliorates the two identified concerns.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of current natural language oriented robot architectures enables
certain architectural components to circumvent moral reasoning capabilities.
One example of this is reflexive generation of clarification requests as soon
as referential ambiguity is detected in a human utterance. As shown in previous
research, this can lead robots to (1) miscommunicate their moral dispositions
and (2) weaken human perception or application of moral norms within their
current context. We present a solution to these problems by performing moral
reasoning on each potential disambiguation of an ambiguous human utterance and
responding accordingly, rather than immediately and naively requesting
clarification. We implement our solution in the DIARC robot architecture,
which, to our knowledge, is the only current robot architecture with both moral
reasoning and clarification request generation capabilities. We then evaluate
our method with a human subjects experiment, the results of which indicate that
our approach successfully ameliorates the two identified concerns.
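The decision flow described in the abstract can be sketched in a few lines: enumerate the candidate interpretations of an ambiguous utterance, filter them through moral reasoning, and only fall back to a clarification request when more than one permissible reading remains. The following Python sketch is purely illustrative; the helper names (`disambiguate`, `violates_norms`) are hypothetical placeholders, not part of the DIARC architecture's actual API.

```python
def respond_to_utterance(utterance, disambiguate, violates_norms):
    """Decide how to respond to a possibly ambiguous utterance.

    disambiguate:   callable returning all plausible interpretations (hypothetical)
    violates_norms: callable returning True if an interpretation breaks a moral norm (hypothetical)
    Returns a (strategy, payload) pair.
    """
    candidates = disambiguate(utterance)
    permissible = [c for c in candidates if not violates_norms(c)]

    if not permissible:
        # Every reading violates a norm: reject outright instead of
        # asking which impermissible action was meant.
        return ("reject", candidates)
    if len(permissible) == 1:
        # Exactly one morally acceptable reading: act on it without
        # signaling ambiguity at all.
        return ("execute", permissible[0])
    # Several acceptable readings remain: only now is a naive
    # clarification request appropriate.
    return ("clarify", permissible)
```

For example, if only one of two interpretations is permissible, the sketch executes it directly rather than asking for clarification, which is the behavior the paper argues avoids miscommunicating the robot's moral dispositions.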
Related papers
- Procedural Dilemma Generation for Evaluating Moral Reasoning in Humans and Language Models [28.53750311045418]
We use a language model to translate causal graphs that capture key aspects of moral dilemmas into prompt templates.
We collect moral permissibility and intention judgments from human participants for a subset of our items.
We find that moral dilemmas in which the harm is a necessary means resulted in lower permissibility and higher intention ratings for both participants and language models.
arXiv Detail & Related papers (2024-04-17T01:13:04Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Understanding a Robot's Guiding Ethical Principles via Automatically Generated Explanations [4.393037165265444]
We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations.
Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
arXiv Detail & Related papers (2022-06-20T22:55:00Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- I Need Your Advice... Human Perceptions of Robot Moral Advising Behaviors [2.0743129221959284]
We explore how robots should communicate in moral advising scenarios.
Our results suggest that, in fact, both humans and robots are judged more positively when they provide the advice that favors the common good over an individual's life.
These results raise critical new questions regarding people's moral responses to robots and the design of autonomous moral agents.
arXiv Detail & Related papers (2021-04-14T16:45:02Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Deliberative and Conceptual Inference in Service Robots [0.0]
Service robots need to reason to support people in daily life situations.
Reasoning is an expensive resource that should be used on demand.
We describe a service robot conceptual model and architecture capable of supporting the daily life inference cycle.
arXiv Detail & Related papers (2020-12-13T18:30:15Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses the modeling of commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.