Value-based Fast and Slow AI Nudging
- URL: http://arxiv.org/abs/2307.07628v1
- Date: Fri, 14 Jul 2023 20:57:27 GMT
- Title: Value-based Fast and Slow AI Nudging
- Authors: Marianna B. Ganapini, Francesco Fabiano, Lior Horesh, Andrea Loreggia,
Nicholas Mattei, Keerthiram Murugesan, Vishal Pallagani, Francesca Rossi,
Biplav Srivastava, Brent Venable
- Abstract summary: Nudging is a behavioral strategy aimed at influencing people's thoughts and actions.
In this paper, we propose and discuss a value-based AI-human collaborative framework where AI systems nudge humans by proposing decision recommendations.
- Score: 37.53694593692918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nudging is a behavioral strategy aimed at influencing people's thoughts and
actions. Nudging techniques can be found in many situations in our daily lives,
and they can target either human fast and unconscious thinking, e.g., by
using images to generate fear, or the more careful and effortful slow
thinking, e.g., by releasing information that makes us reflect
on our choices. In this paper, we propose and discuss a value-based AI-human
collaborative framework where AI systems nudge humans by proposing decision
recommendations. Three different nudging modalities, based on when
recommendations are presented to the human, are intended to stimulate human
fast thinking, slow thinking, or meta-cognition. Values that are relevant to a
specific decision scenario are used to decide when and how to use each of these
nudging modalities. Examples of values are decision quality, speed, human
upskilling and learning, human agency, and privacy. Several values can be
present at the same time, and their priorities can vary over time. The
framework treats values as parameters to be instantiated in a specific decision
environment.
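To make the parameterization concrete, here is a minimal Python sketch of how value priorities might drive the choice among the three nudging modalities. The paper does not publish an implementation; every name below (NudgingModality, DecisionEnvironment, choose_modality) and the additive scoring rule are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: names and scoring rule are assumptions,
# not an implementation from the paper.
from dataclasses import dataclass
from enum import Enum

class NudgingModality(Enum):
    FAST = "recommendation shown before the human forms an answer"  # targets fast thinking
    SLOW = "recommendation shown after the human's own answer"      # targets slow thinking
    META = "only a confidence/conflict signal is shown"             # targets meta-cognition

@dataclass
class DecisionEnvironment:
    # Values treated as parameters, instantiated per decision scenario;
    # priorities in [0, 1] and allowed to vary over time.
    decision_quality: float
    speed: float
    upskilling: float
    human_agency: float
    privacy: float

def choose_modality(env: DecisionEnvironment) -> NudgingModality:
    # Score each modality by the (assumed) values it serves best
    # and pick the one with the highest current priority.
    scores = {
        NudgingModality.FAST: env.speed + env.decision_quality,
        NudgingModality.SLOW: env.upskilling + env.decision_quality,
        NudgingModality.META: env.human_agency + env.privacy,
    }
    return max(scores, key=scores.get)

# Example: a time-critical scenario where speed dominates.
env = DecisionEnvironment(decision_quality=0.8, speed=0.9,
                          upskilling=0.2, human_agency=0.4, privacy=0.3)
print(choose_modality(env).name)  # -> FAST
```

The design choice mirrored here is the paper's central point: the values are not hard-coded into the nudging logic but passed in as parameters of the decision environment, so the same framework can favor different modalities as priorities shift over time.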
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- A Framework for Effective AI Recommendations in Cyber-Physical-Human Systems [3.066266438258146]
Many cyber-physical-human systems (CPHS) involve a human decision-maker who may receive recommendations from an artificial intelligence (AI) platform.
In such CPHS applications, the human decision-maker may depart from an optimal recommended decision and instead implement a different one for various reasons.
We consider that humans may deviate from AI recommendations because they perceive and interpret the system's state differently than the AI platform does.
arXiv Detail & Related papers (2024-03-08T23:02:20Z)
- Learning Human-like Representations to Enable Learning Human Values [11.236150405125754]
We explore the effects of representational alignment between humans and AI agents on learning human values.
We show that this kind of representational alignment can support safe learning and exploration of human values in the context of personalization.
arXiv Detail & Related papers (2023-12-21T18:31:33Z)
- Promptable Behaviors: Personalizing Multi-Objective Rewards from Human Preferences [53.353022588751585]
We present Promptable Behaviors, a novel framework that facilitates efficient personalization of robotic agents to diverse human preferences.
We introduce three distinct methods to infer human preferences by leveraging different types of interactions.
We evaluate the proposed method in personalized object-goal navigation and flee navigation tasks in ProcTHOR and RoboTHOR.
arXiv Detail & Related papers (2023-12-14T21:00:56Z)
- From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions [1.1510009152620668]
We argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time.
First, we introduce a highly established computational framework that assumes decisions to emerge from the noisy accumulation of evidence.
Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making.
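The evidence-accumulation framework referred to here is the drift-diffusion model (DDM) named in the title. As a rough illustration only (our own minimal simulation with arbitrary parameters, not code from the paper), a single trial can be simulated like this:

```python
import random

def ddm_trial(drift=0.3, noise=1.0, threshold=1.0, dt=0.01):
    """Simulate one drift-diffusion trial: evidence accumulates noisily
    until it crosses +threshold (option A) or -threshold (option B).
    Returns (choice, decision_time); all parameters are illustrative."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        # Drift plus Gaussian noise scaled by sqrt(dt) (Euler-Maruyama step).
        evidence += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return ("A" if evidence > 0 else "B", t)

print(ddm_trial())  # e.g., ('A', 1.87): a choice and its response time
```

The appeal of such process models for human-AI interaction is that they produce response times and error patterns, not just final choices, so decisions are described as they emerge over time rather than only by their outcome.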
arXiv Detail & Related papers (2023-08-29T11:27:22Z)
- Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Inverse Active Sensing: Modeling and Understanding Timely Decision-Making [111.07204912245841]
We develop a framework for the general setting of evidence-based decision-making under endogenous, context-dependent time pressure.
We demonstrate how it enables modeling intuitive notions of surprise, suspense, and optimality in decision strategies.
arXiv Detail & Related papers (2020-06-25T02:30:45Z)
- Implications of Human Irrationality for Reinforcement Learning [26.76732313120685]
We argue that human decision making may be a better source of ideas for constraining how machine learning problems are defined than is currently recognized.
One promising idea concerns human decision making that is dependent on apparently irrelevant aspects of the choice context.
We propose a novel POMDP model for contextual choice tasks and show that, despite the apparent irrationalities, a reinforcement learner can take advantage of the way that humans make decisions.
arXiv Detail & Related papers (2020-06-07T07:44:53Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)