ICRA Roboethics Challenge 2023: Intelligent Disobedience in an Elderly Care Home
- URL: http://arxiv.org/abs/2311.08783v1
- Date: Wed, 15 Nov 2023 08:55:19 GMT
- Title: ICRA Roboethics Challenge 2023: Intelligent Disobedience in an Elderly Care Home
- Authors: Sveta Paster, Kantwon Rogers, Gordon Briggs, Peter Stone, Reuth Mirsky
- Abstract summary: Service robots offer a promising avenue to enhance the well-being of residents in elderly care homes.
Such robots will encounter complex scenarios that require them to make decisions with ethical consequences.
We propose to leverage the Intelligent Disobedience framework to give the robot the ability to deliberate over decisions with potential ethical implications.
- Score: 33.42984969429203
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the projected surge in the elderly population, service robots offer a
promising avenue to enhance their well-being in elderly care homes. Such robots
will encounter complex scenarios that require them to make decisions with ethical
consequences. In this report, we propose to leverage the Intelligent Disobedience
framework to give the robot the ability to carry out a deliberation process over
decisions with potential ethical implications. We list the issues that this framework
can assist with, define it formally in the context of the specific elderly care home
scenario, and delineate the requirements for implementing an intelligently disobeying
robot. We conclude this report with some critical analysis and suggestions for future
work.
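To make the deliberation process concrete, the sketch below shows one possible shape such a step could take: the robot checks a commanded action against a set of ethical constraints and then complies, refuses with an explanation, or defers to a human caregiver. The constraint names, outcomes, and the `deliberate` function are illustrative assumptions for the elderly care home setting, not the formal definition given in the report.

```python
# A minimal sketch of an ethical deliberation step, assuming hypothetical
# constraint names and outcomes; this is NOT the report's formal definition.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict, List, Tuple


class Outcome(Enum):
    COMPLY = auto()   # execute the command as given
    DISOBEY = auto()  # refuse the command and explain why
    DEFER = auto()    # escalate the decision to a human caregiver


@dataclass
class Constraint:
    name: str
    # Returns True if executing `command` in `context` would violate the constraint.
    violated_by: Callable[[str, Dict], bool]


def deliberate(command: str, context: Dict,
               constraints: List[Constraint]) -> Tuple[Outcome, str]:
    """Check a commanded action against ethical constraints before executing it."""
    violations = [c.name for c in constraints if c.violated_by(command, context)]
    if not violations:
        return Outcome.COMPLY, "No ethical constraints are violated."
    if context.get("caregiver_reachable", False):
        return Outcome.DEFER, f"Possible violation of {violations}; deferring to a caregiver."
    return Outcome.DISOBEY, f"Refusing: the command would violate {violations}."


# Hypothetical example: a resident asks for medication that was already given.
constraints = [
    Constraint(
        name="resident_safety",
        violated_by=lambda cmd, ctx: cmd == "fetch_medication"
        and ctx.get("dose_already_given", False),
    ),
]
outcome, reason = deliberate(
    "fetch_medication",
    {"dose_already_given": True, "caregiver_reachable": True},
    constraints,
)
print(outcome.name, "-", reason)  # DEFER - Possible violation of ['resident_safety']; ...
```

In this hypothetical case the robot neither blindly obeys nor silently refuses: it surfaces the conflicting constraint and hands the decision to a caregiver, which is the kind of behavior the Intelligent Disobedience framework is meant to support.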
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating diverse environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Understanding a Robot's Guiding Ethical Principles via Automatically Generated Explanations [4.393037165265444]
We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations.
Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
arXiv Detail & Related papers (2022-06-20T22:55:00Z)
- Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures [69.03494351066846]
We investigate two ways to make robots proactive.
One way is to recognize humans' intentions and to act to fulfill them, like opening the door you are about to walk through.
The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them.
arXiv Detail & Related papers (2022-05-11T13:33:14Z)
- An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence [0.0]
We propose guidelines for evaluating the (moral and) ethical consequences of affectively-aware AI.
We propose a multi-stakeholder analysis framework that separates the ethical responsibilities of AI Developers vis-a-vis the entities that deploy such AI.
We end with recommendations for researchers, developers, operators, as well as regulators and law-makers.
arXiv Detail & Related papers (2021-07-29T03:57:53Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Enabling Morally Sensitive Robotic Clarification Requests [2.4505259300326334]
Reflexive generation of clarification requests can lead robots to miscommunicate their moral dispositions.
We present a solution by performing moral reasoning on each potential disambiguation of an ambiguous human utterance.
We then evaluate our method with a human subjects experiment, the results of which indicate that our approach successfully ameliorates the two identified concerns.
arXiv Detail & Related papers (2020-07-16T22:12:35Z)
- Towards Contrastive Explanations for Comparing the Ethics of Plans [4.393037165265444]
We present how contrastive explanations can be used for comparing the ethics of plans.
We build upon an existing ethical framework to allow users to make suggestions to plans and receive contrastive explanations.
arXiv Detail & Related papers (2020-06-22T21:38:16Z)
- Avoiding Improper Treatment of Persons with Dementia by Care Robots [1.5156879440024376]
We focus in particular on exploring some potential dangers affecting persons with dementia (PWD).
We describe a proposed solution involving rich causal models and accountability measures.
Our aim is that the considerations raised could help inform the design of care robots intended to support well-being in PWD.
arXiv Detail & Related papers (2020-05-08T14:34:13Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.