A Framework for Effective AI Recommendations in Cyber-Physical-Human
Systems
- URL: http://arxiv.org/abs/2403.05715v1
- Date: Fri, 8 Mar 2024 23:02:20 GMT
- Authors: Aditya Dave, Heeseung Bang, Andreas A. Malikopoulos
- Abstract summary: Many cyber-physical-human systems (CPHS) involve a human decision-maker who may receive recommendations from an artificial intelligence (AI) platform.
In such CPHS applications, the human decision-maker may depart from an optimal recommended decision and instead implement a different one for various reasons.
We consider that humans may deviate from AI recommendations as they perceive and interpret the system's state in a different way than the AI platform.
- Score: 3.066266438258146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many cyber-physical-human systems (CPHS) involve a human decision-maker who
may receive recommendations from an artificial intelligence (AI) platform while
holding the ultimate responsibility of making decisions. In such CPHS
applications, the human decision-maker may depart from an optimal recommended
decision and instead implement a different one for various reasons. In this
letter, we develop a rigorous framework to overcome this challenge. In our
framework, we consider that humans may deviate from AI recommendations as they
perceive and interpret the system's state in a different way than the AI
platform. We establish the structural properties of optimal recommendation
strategies and develop an approximate human model (AHM) used by the AI. We
provide theoretical bounds on the optimality gap that arises from an AHM and
illustrate the efficacy of our results in a numerical example.
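The abstract's core idea can be illustrated with a toy sketch. The following is a hypothetical minimal model, not the paper's actual formulation: all states, rewards, and the noise/adherence numbers are assumptions. The AI observes the true state, the human perceives a noisy version of it and only partially adheres to advice that contradicts that perception, and an approximate human model (AHM) lets the AI recommend the action whose *implemented* outcome is best in expectation.

```python
# Hypothetical toy sketch (all quantities assumed, not from the paper).
STATES = [0, 1]
ACTIONS = [0, 1]
PERCEPTION_NOISE = 0.2  # assumed probability the human misreads the state
ADHERENCE = 0.5         # assumed adherence when advice contradicts perception

def reward(state, action):
    # Toy reward: the action matching the true state scores 1, else 0.
    return 1.0 if action == state else 0.0

def human_action_dist(perceived_state, recommendation):
    # Assumed human model: follow the recommendation when it agrees with the
    # perceived state; otherwise follow it only with probability ADHERENCE.
    if recommendation == perceived_state:
        return {recommendation: 1.0}
    return {recommendation: ADHERENCE, perceived_state: 1.0 - ADHERENCE}

def expected_reward(state, recommendation):
    # Average over the human's possible misperceptions and implemented actions.
    total = 0.0
    for perceived in STATES:
        p_perceive = (1.0 - PERCEPTION_NOISE) if perceived == state else PERCEPTION_NOISE
        for action, p_act in human_action_dist(perceived, recommendation).items():
            total += p_perceive * p_act * reward(state, action)
    return total

def best_recommendation(state):
    # The AI picks the recommendation with the best expected implemented reward.
    return max(ACTIONS, key=lambda a: expected_reward(state, a))
```

In this sketch the gap between `expected_reward(state, best_recommendation(state))` and the reward of a fully adherent human plays the role of the optimality gap the paper bounds.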
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Negotiating the Shared Agency between Humans & AI in the Recommender System [1.4249472316161877]
Concerns about user agency have arisen due to the inherent opacity of algorithms (information asymmetry) and their one-way output (power asymmetry).
We seek to understand how types of agency impact user perception and experience, and bring empirical evidence to refine the guidelines and designs for human-AI interactive systems.
arXiv Detail & Related papers (2024-03-23T19:23:08Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-assisted Decision Making [24.258056813524167]
We propose a computational framework that can provide an interpretable characterization of the influence of different forms of AI assistance on decision makers.
By conceptualizing AI assistance as a "nudge" in human decision-making processes, our approach centers on modeling how different forms of AI assistance modify humans' strategy in weighing different information when making their decisions.
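One way to picture the nudge idea is a logistic choice rule whose evidence weights shift under assistance. The sketch below is purely illustrative: the assistance names, weights, and shift values are assumptions, not taken from the paper.

```python
import math

# Hypothetical illustration: the human weighs pieces of evidence through a
# logistic choice rule, and each form of AI assistance is modeled as a shift
# in those weights -- the "nudge".

def choice_prob(weights, evidence):
    # Probability the human chooses the positive option.
    score = sum(w * e for w, e in zip(weights, evidence))
    return 1.0 / (1.0 + math.exp(-score))

BASE_WEIGHTS = [0.8, 0.5]  # assumed unassisted evidence weights

NUDGE = {
    "show_recommendation": [0.4, 0.0],  # assumed shift: boosts feature 0 only
    "show_explanation":    [0.2, 0.3],  # assumed shift: boosts both features
}

def nudged_prob(assistance, evidence):
    # Apply the assistance-specific weight shift, then evaluate the choice rule.
    shifted = [w + s for w, s in zip(BASE_WEIGHTS, NUDGE[assistance])]
    return choice_prob(shifted, evidence)
```

Fitting the shift vectors to observed human choices under each assistance type would give the interpretable characterization the abstract describes.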
arXiv Detail & Related papers (2024-01-11T11:22:36Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Learning to Make Adherence-Aware Advice [8.419688203654948]
This paper presents a sequential decision-making model that takes into account the human's adherence level.
We provide learning algorithms that learn the optimal advice policy and make advice only at critical time stamps.
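The "advise only at critical time stamps" idea admits a simple cost-benefit reading. The sketch below is a hypothetical one-step version (the adherence level, cost, and values are assumptions, not the paper's model): with adherence level p, advice changes the human's action only with probability p, so the advisor speaks up only when the expected gain exceeds a per-advice cost.

```python
# Hypothetical one-step sketch of adherence-aware advice (all numbers assumed).
ADHERENCE = 0.6    # assumed probability the human follows the advice
ADVICE_COST = 0.1  # assumed fixed cost of issuing advice (e.g. attention)

def should_advise(value_optimal, value_default):
    # Expected gain from advising: with probability ADHERENCE the human
    # switches from their default action to the optimal one.
    expected_gain = ADHERENCE * (value_optimal - value_default)
    return expected_gain > ADVICE_COST

# A step is "critical" exactly when the value gap is large enough that the
# discounted gain beats the cost of interrupting the human.
```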
arXiv Detail & Related papers (2023-10-01T23:15:55Z)
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.