Negotiating the Shared Agency between Humans & AI in the Recommender System
- URL: http://arxiv.org/abs/2403.15919v3
- Date: Fri, 19 Apr 2024 18:57:07 GMT
- Title: Negotiating the Shared Agency between Humans & AI in the Recommender System
- Authors: Mengke Wu, Weizi Liu, Yanyun Wang, Mike Yao
- Abstract summary: Concerns about user agency have arisen due to the inherent opacity (information asymmetry) and one-way output (power asymmetry) of algorithms.
We seek to understand how different types of agency affect user perception and experience, and to provide empirical evidence for refining the guidelines and designs of human-AI interactive systems.
- Score: 1.4249472316161877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smart recommendation algorithms have revolutionized information dissemination, enhancing efficiency and reshaping content delivery across various domains. However, concerns about user agency have arisen due to the inherent opacity (information asymmetry) and one-way output (power asymmetry) of these algorithms. While scholars have criticized both issues by advocating explainable AI (XAI) and human-AI collaborative decision-making (HACD), little research has evaluated their integrated effects on users, and few HACD discussions in recommender systems extend beyond improving and filtering the results. This study proposes an incubating idea, a missing step in HACD, that allows users to control the degree of AI-recommended content. We then integrate it with existing XAI into a flow prototype aimed at assessing the enhancement of user agency. We seek to understand how different types of agency affect user perception and experience, and to provide empirical evidence for refining the guidelines and designs of human-AI interactive systems.
Related papers
- When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - AI, Help Me Think—but for Myself: Assisting People in Complex Decision-Making by Providing Different Kinds of Cognitive Support [9.903362030444166]
We explore an alternative interaction model in which the AI outputs build upon users' own decision-making rationales.
We compare this approach, which we call ExtendAI, with a recommendation-based AI.
We found that the AIs had different impacts, with ExtendAI integrating better into the decision-making process and people's own thinking.
arXiv Detail & Related papers (2025-04-09T10:48:17Z) - The Impact of Transparency in AI Systems on Users' Data-Sharing Intentions: A Scenario-Based Experiment [4.20435795138041]
We investigated how transparent and non-transparent data-processing entities influenced data-sharing intentions. Surprisingly, our results revealed no significant difference in willingness to share data across entities. We found that a general attitude of trust towards AI has a significant positive influence, especially in the transparent AI condition.
arXiv Detail & Related papers (2025-02-27T16:28:09Z) - Designing and Evaluating an Educational Recommender System with Different Levels of User Control [0.6291443816903801]
We present the systematic design and evaluation of an interactive educational recommender system (ERS). We introduce user control around the input (i.e., user profile), process (i.e., recommendation algorithm), and output (i.e., recommendations) of the ERS. Our results demonstrate the positive impact of user control on user perceived benefits of the ERS.
arXiv Detail & Related papers (2025-01-22T14:14:49Z) - How Performance Pressure Influences AI-Assisted Decision Making [57.53469908423318]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior.
Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z) - Interpretable Rule-Based System for Radar-Based Gesture Sensing: Enhancing Transparency and Personalization in AI [2.99664686845172]
We introduce MIRA, a transparent and interpretable multi-class rule-based algorithm tailored for radar-based gesture detection.
We showcase the system's adaptability through personalized rule sets that calibrate to individual user behavior, offering a user-centric AI experience.
Our research underscores MIRA's ability to deliver both high interpretability and performance and emphasizes the potential for broader adoption of interpretable AI in safety-critical applications.
arXiv Detail & Related papers (2024-09-30T16:40:27Z) - Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making [2.964175945467257]
A growing body of research explores the use of AI explanations across users' decision-making phases in human-AI collaborative decision-making.
Previous studies identified the issue of overreliance on 'wrong' AI outputs.
We propose interactive example-based explanations to improve health professionals' onboarding with AI.
arXiv Detail & Related papers (2024-09-24T07:20:09Z) - To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems [11.690126756498223]
The vision of optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems.
In practice, the performance disparity of machine learning models on out-of-distribution data makes dataset-specific performance feedback unreliable.
arXiv Detail & Related papers (2024-09-22T09:43:27Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - ADESSE: Advice Explanations in Complex Repeated Decision-Making Environments [14.105935964906976]
This work considers a problem setup where an intelligent agent provides advice to a human decision-maker.
We develop an approach named ADESSE to generate explanations about the adviser agent to improve human trust and decision-making.
arXiv Detail & Related papers (2024-05-31T08:59:20Z) - Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z) - Beyond Recommender: An Exploratory Study of the Effects of Different AI
Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z) - Analyzing Operator States and the Impact of AI-Enhanced Decision Support
in Control Rooms: A Human-in-the-Loop Specialized Reinforcement Learning
Framework for Intervention Strategies [0.9378955659006951]
In complex industrial and chemical process control rooms, effective decision-making is crucial for safety and efficiency.
The experiments in this paper evaluate the impact and applications of an AI-based decision support system integrated into an improved human-machine interface.
arXiv Detail & Related papers (2024-02-20T18:31:27Z) - How much informative is your XAI? A decision-making assessment task to
objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - Confounding-Robust Policy Improvement with Human-AI Teams [8.315707564931465]
We propose a novel solution to address unobserved confounding in human-AI collaboration.
Our approach combines domain expertise with AI-driven statistical modeling to account for potentially hidden confounders.
arXiv Detail & Related papers (2023-10-13T02:39:52Z) - Rational Decision-Making Agent with Internalized Utility Judgment [88.01612847081677]
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications. This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
arXiv Detail & Related papers (2023-08-24T03:11:45Z) - The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z) - E-NER: Evidential Deep Learning for Trustworthy Named Entity Recognition [69.87816981427858]
Most named entity recognition (NER) systems focus on improving model performance, ignoring the need to quantify model uncertainty.
Evidential deep learning (EDL) has recently been proposed as a promising solution to explicitly model predictive uncertainty for classification tasks.
We propose a trustworthy NER framework named E-NER by introducing two uncertainty-guided loss terms to the conventional EDL, along with a series of uncertainty-guided training strategies.
arXiv Detail & Related papers (2023-05-29T02:36:16Z) - Advancing Human-AI Complementarity: The Impact of User Expertise and
Algorithmic Tuning on Joint Decision Making [10.890854857970488]
Many factors can impact success of Human-AI teams, including a user's domain expertise, mental models of an AI system, trust in recommendations, and more.
Our study examined user performance in a non-trivial blood vessel labeling task where participants indicated whether a given blood vessel was flowing or stalled.
Our results show that while recommendations from an AI-Assistant can aid user decision making, factors such as users' baseline performance relative to the AI and complementary tuning of AI error types significantly impact overall team performance.
arXiv Detail & Related papers (2022-08-16T21:39:58Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z) - An Interactive Explanatory AI System for Industrial Quality Control [0.8889304968879161]
We aim to extend the defect detection task towards an interactive human-in-the-loop approach.
We propose an approach for an interactive support system for classifications in an industrial quality control setting.
arXiv Detail & Related papers (2022-03-17T09:04:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.