I Need Your Advice... Human Perceptions of Robot Moral Advising Behaviors
- URL: http://arxiv.org/abs/2104.06963v1
- Date: Wed, 14 Apr 2021 16:45:02 GMT
- Title: I Need Your Advice... Human Perceptions of Robot Moral Advising Behaviors
- Authors: Nichole D. Starr and Bertram Malle and Tom Williams
- Score: 2.0743129221959284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their unique persuasive power, language-capable robots must be able to
both act in line with human moral norms and clearly and appropriately
communicate those norms. These requirements are complicated by the possibility
that humans may ascribe blame differently to humans and robots. In this work,
we explore how robots should communicate in moral advising scenarios, in which
the norms they are expected to follow (in a moral dilemma scenario) may be
different from those their advisees are expected to follow. Our results suggest
that, in fact, both humans and robots are judged more positively when they
provide the advice that favors the common good over an individual's life. These
results raise critical new questions regarding people's moral responses to
robots and the design of autonomous moral agents.
Related papers
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- The dynamic nature of trust: Trust in Human-Robot Interaction revisited [0.38233569758620045]
Socially assistive robots (SARs) assist humans in the real world.
Risk introduces an element of trust, so understanding human trust in the robot is imperative.
arXiv Detail & Related papers (2023-03-08T19:20:11Z)
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
- Learning Latent Representations to Co-Adapt to Humans [12.71953776723672]
Non-stationary humans are challenging for robot learners.
In this paper we introduce an algorithmic formalism that enables robots to co-adapt alongside dynamic humans.
arXiv Detail & Related papers (2022-12-19T16:19:24Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question-answering scenarios.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Understanding a Robot's Guiding Ethical Principles via Automatically Generated Explanations [4.393037165265444]
We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations.
Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
arXiv Detail & Related papers (2022-06-20T22:55:00Z)
- Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures [69.03494351066846]
We investigate two ways to make robots proactive.
One way is to recognize humans' intentions and to act to fulfill them, such as opening a door that someone is about to walk through.
The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them.
arXiv Detail & Related papers (2022-05-11T13:33:14Z)
- Doing Right by Not Doing Wrong in Human-Robot Collaboration [8.078753289996417]
We propose a novel approach to learning fair and sociable behavior, not by reproducing positive behavior, but rather by avoiding negative behavior.
In this study, we highlight the importance of incorporating sociability in robot manipulation, as well as the need to consider fairness in human-robot interactions.
arXiv Detail & Related papers (2022-02-05T23:05:10Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
- When Humans Aren't Optimal: Robots that Collaborate with Risk-Aware Humans [16.21572727245082]
In order to collaborate safely and efficiently, robots need to anticipate how their human partners will behave.
In this paper, we adopt a well-known Risk-Aware human model from behavioral economics called Cumulative Prospect Theory.
We find that this increased modeling accuracy results in safer and more efficient human-robot collaboration.
arXiv Detail & Related papers (2020-01-13T16:27:46Z)
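The last entry above models risk-aware humans with Cumulative Prospect Theory. As a rough illustration only (the paper's exact parameterization is not given here), the standard Tversky–Kahneman (1992) value and probability-weighting functions, with their classic parameter estimates, can be sketched as:

```python
def cpt_value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper
    (loss-averse, scaled by lam) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

def cpt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities,
    underweights moderate-to-large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def cpt_utility(outcomes):
    """Subjective utility of a gamble given as [(probability, payoff), ...].
    Simplification: weights are applied per outcome; full CPT uses
    rank-dependent cumulative weights, which coincide here because the
    non-best outcome has value zero."""
    return sum(cpt_weight(p) * cpt_value(x) for p, x in outcomes)

# A risk-aware human can prefer a certain small gain over a risky gamble
# with the same expected value -- the asymmetry a robot partner would
# need to anticipate:
risky = [(0.5, 10.0), (0.5, 0.0)]   # 50% chance of 10, else nothing
certain = [(1.0, 5.0)]              # guaranteed 5
print(cpt_utility(certain) > cpt_utility(risky))  # prints True
```

The hypothetical parameter values (alpha=0.88, lam=2.25, gamma=0.61) are the original 1992 estimates; a robot collaborator would typically fit them to the observed human rather than assume them.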
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.