Avoiding Improper Treatment of Persons with Dementia by Care Robots
- URL: http://arxiv.org/abs/2005.06622v1
- Date: Fri, 8 May 2020 14:34:13 GMT
- Title: Avoiding Improper Treatment of Persons with Dementia by Care Robots
- Authors: Martin Cooney, Sepideh Pashami, Eric Järpe, Awais Ashfaq
- Abstract summary: We focus in particular on exploring some potential dangers affecting persons with dementia.
We describe a proposed solution involving rich causal models and accountability measures.
Our aim is that the considerations raised could help inform the design of care robots intended to support well-being in PWD.
- Score: 1.5156879440024376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The phrase "most cruel and revolting crimes" has been used to describe some
poor historical treatment of vulnerable impaired persons by precisely those who
should have had the responsibility of protecting and helping them. We believe
we might be poised to see history repeat itself, as increasingly human-like
aware robots become capable of engaging in behavior which we would consider
immoral in a human--either unknowingly or deliberately. In the current paper we
focus in particular on exploring some potential dangers affecting persons with
dementia (PWD), which could arise from insufficient software or external
factors, and describe a proposed solution involving rich causal models and
accountability measures: Specifically, the Consequences of Needs-driven
Dementia-compromised Behaviour model (C-NDB) could be adapted to be used with
conversation topic detection, causal networks and multi-criteria decision
making, alongside reports, audits, and deterrents. Our aim is that the
considerations raised could help inform the design of care robots intended to
support well-being in PWD.
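As a rough illustration only: the pipeline outlined in the abstract (conversation topic detection feeding a causal model, with multi-criteria decision making and accountability reporting) might be wired together along the lines of the following hypothetical Python sketch. The keyword lists, causal weights, criteria, and audit threshold are invented assumptions, not the paper's actual C-NDB adaptation.

```python
# Hypothetical sketch: chaining topic detection, a toy causal model, and
# multi-criteria decision making to flag potentially improper care-robot actions.
# Keyword lists, causal weights, and criteria are illustrative assumptions only.

def detect_topic(utterance: str) -> str:
    """Crude keyword-based topic detection for a PWD utterance."""
    keywords = {
        "pain": ["hurt", "pain", "ache"],
        "hunger": ["hungry", "eat", "food"],
        "distress": ["scared", "help", "lost"],
    }
    text = utterance.lower()
    for topic, words in keywords.items():
        if any(w in text for w in words):
            return topic
    return "unknown"

# Toy causal model: P(unmet need | detected topic), a stand-in for a causal network.
NEED_GIVEN_TOPIC = {"pain": 0.9, "hunger": 0.7, "distress": 0.8, "unknown": 0.2}

def score_action(action: str, topic: str) -> float:
    """Multi-criteria risk score: lower is safer. Weights are assumptions."""
    need_risk = NEED_GIVEN_TOPIC[topic]
    ignores_need = 1.0 if action == "ignore" else 0.0
    restricts_person = 1.0 if action == "restrain" else 0.0
    # Weighted sum over criteria (harm from unmet need, neglect, coercion).
    return 0.5 * need_risk * ignores_need + 0.4 * restricts_person + 0.1 * need_risk

def decide(utterance: str, candidate_actions: list[str]) -> tuple[str, list[str]]:
    """Pick the lowest-risk action; log an accountability report if risk stays high."""
    topic = detect_topic(utterance)
    ranked = sorted(candidate_actions, key=lambda a: score_action(a, topic))
    best = ranked[0]
    report = []
    if score_action(best, topic) > 0.3:  # audit threshold (assumption)
        report.append(f"AUDIT: topic={topic}, chosen={best}, alternatives={ranked[1:]}")
    return best, report

if __name__ == "__main__":
    print(decide("I am in pain", ["ignore", "notify caregiver", "restrain"]))
    # When no low-risk option is available, the choice itself gets flagged for audit.
    print(decide("I am in pain", ["ignore", "restrain"]))
```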
Related papers
- Envisioning Possibilities and Challenges of AI for Personalized Cancer Care [36.53434633571359]
We identify critical gaps in current healthcare systems such as a lack of personalized care and insufficient cultural and linguistic accommodation.
AI, when applied to care, was seen as a way to address these issues by enabling real-time, culturally aligned, and linguistically appropriate interactions.
We also uncovered concerns about the implications of AI-driven personalization, such as data privacy, loss of human touch in caregiving, and the risk of echo chambers that limit exposure to diverse information.
arXiv Detail & Related papers (2024-08-19T15:55:46Z)
- Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks [25.507656595628376]
We introduce a framework in which an AI agent intervenes on the parameters of a Markov Decision Process (MDP) belonging to a boundedly rational human agent.
We show that AI planning with our human models can lead to helpful policies on a wide range of more complex, ground-truth humans.
arXiv Detail & Related papers (2024-01-26T14:59:48Z)
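A minimal sketch of the intervention idea in the entry above, under assumptions: the human is modeled as a myopic (heavily discounting) planner over a tiny chain task, and the AI's intervention lowers a per-step "friction" cost so the short-sighted plan becomes worthwhile. The task, numbers, and bounded-rationality model are illustrative and do not reproduce the paper's MDP framework.

```python
# Hypothetical sketch: an AI intervening on a "friction" (per-step effort cost)
# parameter of a myopic human's task. All numbers are illustrative assumptions.

GOAL_REWARD = 10.0
STEPS_TO_GOAL = 3

def plan_value(friction: float, gamma: float) -> float:
    """Discounted value of working all the way to the goal (vs. 0 for quitting)."""
    effort = sum((gamma ** t) * (-friction) for t in range(STEPS_TO_GOAL))
    return effort + (gamma ** STEPS_TO_GOAL) * GOAL_REWARD

def human_choice(friction: float, gamma: float) -> str:
    """Boundedly rational human: heavy discounting (small gamma) makes them myopic."""
    return "work" if plan_value(friction, gamma) > 0.0 else "quit"

if __name__ == "__main__":
    myopic_gamma = 0.6
    # A fully rational planner (gamma=1.0) would keep working even at friction 2.0 ...
    print("rational, friction 2.0:", human_choice(2.0, 1.0))           # -> work
    # ... but the myopic human gives up.
    print("myopic,   friction 2.0:", human_choice(2.0, myopic_gamma))  # -> quit
    # AI intervention: reduce the friction so the myopic plan becomes worthwhile.
    print("myopic,   friction 0.5:", human_choice(0.5, myopic_gamma))  # -> work
```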
- ICRA Roboethics Challenge 2023: Intelligent Disobedience in an Elderly Care Home [33.42984969429203]
Service robots offer a promising avenue for enhancing residents' well-being in elderly care homes.
Such robots will encounter complex scenarios that require them to make decisions with ethical consequences.
We propose to leverage the Intelligent Disobedience framework in order to give the robot the ability to perform a deliberation process over decisions with potential ethical implications.
arXiv Detail & Related papers (2023-11-15T08:55:19Z)
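The deliberation step described in the entry above might, under assumptions, look something like the following sketch: the robot checks a command against explicit welfare constraints and refuses, with an explanation, when one would be violated. The constraints and command fields are invented for illustration and are not the challenge entry's actual framework.

```python
# Hypothetical "intelligent disobedience" style check: before executing a command,
# the robot deliberates over welfare constraints and refuses, with an explanation,
# if any would be violated. Constraints and command fields are invented examples.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Command:
    action: str
    resident_asleep: bool = False
    medication_due: bool = False

def no_waking_without_need(cmd: Command) -> Optional[str]:
    if cmd.action == "wake_resident" and cmd.resident_asleep and not cmd.medication_due:
        return "refusing: resident is asleep and there is no medical need to wake them"
    return None

CONSTRAINTS: list[Callable[[Command], Optional[str]]] = [no_waking_without_need]

def deliberate(cmd: Command) -> str:
    for check in CONSTRAINTS:
        reason = check(cmd)
        if reason is not None:
            return reason                      # disobey and explain
    return f"executing: {cmd.action}"

if __name__ == "__main__":
    print(deliberate(Command("wake_resident", resident_asleep=True)))
    print(deliberate(Command("wake_resident", resident_asleep=True, medication_due=True)))
```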
- Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted? [0.0]
We argue that targeted interventions on certain capabilities will be warranted to prevent some misuses of AI.
These restrictions may include controlling who can access certain types of AI models, what they can be used for, and whether outputs are filtered or can be traced back to their user.
We apply this reasoning to three examples: predicting novel toxins, creating harmful images, and automating spear phishing campaigns.
arXiv Detail & Related papers (2023-03-16T15:05:59Z)
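Two of the restriction mechanisms mentioned above, output filtering and traceability, could in principle be sketched as follows; the blocklist, hashing scheme, and log format are assumptions for illustration, not mechanisms described in the paper.

```python
# Hypothetical sketch: filtering model outputs against a blocklist and tracing
# released outputs back to the requesting user via a logged hash.
# The blocklist and log format are illustrative assumptions.

import hashlib
import json
import time

BLOCKED_TERMS = ["synthesize toxin"]   # illustrative placeholder, not a real policy

def filter_output(text: str) -> str:
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return "[output withheld by policy]"
    return text

def trace_output(user_id: str, text: str, log: list[dict]) -> str:
    """Record a hash of (user, output) so a leaked output can be traced to its user."""
    digest = hashlib.sha256(f"{user_id}:{text}".encode()).hexdigest()
    log.append({"time": time.time(), "user": user_id, "sha256": digest})
    return text

if __name__ == "__main__":
    audit_log: list[dict] = []
    raw = "Here is how to synthesize toxin X ..."
    released = trace_output("user-42", filter_output(raw), audit_log)
    print(released)
    print(json.dumps(audit_log, indent=2))
```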
- When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning [57.53138994155612]
A long-term goal of reinforcement learning is to design agents that can autonomously interact and learn in the world.
A critical challenge is the presence of irreversible states which require external assistance to recover from, such as when a robot arm has pushed an object off of a table.
We propose an algorithm that efficiently learns to detect and avoid states that are irreversible, and proactively asks for help in case the agent does enter them.
arXiv Detail & Related papers (2022-10-19T17:57:24Z)
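A hypothetical sketch of the proactive-help idea in the entry above: estimate how reversible each state is, avoid actions predicted to lead into nearly irreversible states, and ask for help once such a state is entered. The reversibility estimator, toy dynamics, and threshold are assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: avoid actions predicted to reach nearly irreversible states,
# and proactively ask for help once such a state has been entered.
# The estimator, transition stub, and threshold are illustrative assumptions.

HELP_THRESHOLD = 0.2

def estimated_reversibility(state: dict) -> float:
    """Toy stand-in for a learned reversibility classifier."""
    return 0.05 if state["object_off_table"] else 0.95

def predicted_next(state: dict, action: str) -> dict:
    """Toy dynamics: pushing near the table edge knocks the object off."""
    off = state["object_off_table"] or (action == "push" and state["near_edge"])
    return {"object_off_table": off, "near_edge": state["near_edge"]}

def choose(state: dict, actions: list[str]) -> str:
    if estimated_reversibility(state) < HELP_THRESHOLD:
        return "ask_for_help"                       # already stuck: request intervention
    safe = [a for a in actions
            if estimated_reversibility(predicted_next(state, a)) >= HELP_THRESHOLD]
    return safe[0] if safe else "ask_for_help"      # avoid irreversible outcomes

if __name__ == "__main__":
    s = {"object_off_table": False, "near_edge": True}
    print(choose(s, ["push", "grasp"]))                                   # -> grasp
    print(choose({"object_off_table": True, "near_edge": True}, ["push", "grasp"]))  # -> ask_for_help
```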
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- A Simulated Experiment to Explore Robotic Dialogue Strategies for People with Dementia [2.5412519393131974]
We propose a partially observable Markov decision process (POMDP) model for the PwD-robot interaction in the context of repetitive questioning.
We used Q-learning to learn an adaptive conversation strategy towards PwDs with different cognitive capabilities and different engagement levels.
This may be a useful step towards the application of conversational social robots to cope with repetitive questioning in PwDs.
arXiv Detail & Related papers (2021-04-18T19:35:19Z)
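The learning setup in the entry above could be approximated, very loosely, by the following tabular Q-learning sketch; for brevity the engagement level is treated as fully observed rather than as a POMDP belief, and the states, actions, and simulated PwD responses are invented assumptions rather than the paper's formulation.

```python
# Hypothetical sketch: tabular Q-learning of a dialogue policy for responding to
# repetitive questioning. Engagement is treated as fully observed for brevity;
# states, actions, and the simulated PwD response model are illustrative assumptions.

import random
from collections import defaultdict

STATES = ["engaged", "disengaged"]
ACTIONS = ["answer_again", "redirect_activity", "call_caregiver"]

def simulate_reward(state: str, action: str) -> tuple[float, str]:
    """Toy PwD response model: answering again works while the person is engaged,
    redirection helps a disengaged person re-engage."""
    if state == "engaged":
        return (1.0, "engaged") if action == "answer_again" else (0.2, "engaged")
    if action == "redirect_activity":
        return 0.8, "engaged"
    return -0.5, "disengaged"

def train(episodes: int = 2000, alpha: float = 0.1, gamma: float = 0.9, eps: float = 0.1):
    q = defaultdict(float)
    for _ in range(episodes):
        state = random.choice(STATES)
        for _ in range(5):                       # a short repetitive-questioning episode
            if random.random() < eps:
                action = random.choice(ACTIONS)  # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            reward, nxt = simulate_reward(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```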
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the Covid-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of COVI, a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behavioural analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.