Incentive-Tuning: Understanding and Designing Incentives for Empirical Human-AI Decision-Making Studies
- URL: http://arxiv.org/abs/2601.15064v1
- Date: Wed, 21 Jan 2026 15:10:46 GMT
- Title: Incentive-Tuning: Understanding and Designing Incentives for Empirical Human-AI Decision-Making Studies
- Authors: Simran Kaur, Sara Salimzadeh, Ujwal Gadiraju
- Abstract summary: We aim to address the critical role of incentive design for conducting empirical human-AI decision-making studies. We curated a set of guidelines to aid researchers in designing effective incentive schemes for their studies.
- Score: 13.62306779087662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI has revolutionised decision-making across various fields. Yet human judgement remains paramount for high-stakes decision-making. This has fueled explorations of collaborative decision-making between humans and AI systems, aiming to leverage the strengths of both. To explore this dynamic, researchers conduct empirical studies, investigating how humans use AI assistance for decision-making and how this collaboration impacts results. A critical aspect of conducting these studies is the role of participants, often recruited through crowdsourcing platforms. The validity of these studies hinges on the behaviours of the participants; effective incentives, which can shape these behaviours, are therefore a key part of designing and executing such studies. In this work, we address the critical role of incentive design for conducting empirical human-AI decision-making studies, focusing on understanding, designing, and documenting incentive schemes. Through a thematic review of existing research, we explored the current practices, challenges, and opportunities associated with incentive design for human-AI decision-making empirical studies. We identified recurring patterns, or themes, such as what comprises the components of an incentive scheme, how incentive schemes are manipulated by researchers, and the impact they can have on research outcomes. Leveraging this understanding, we curated a set of guidelines, called the Incentive-Tuning Framework, to aid researchers in designing effective incentive schemes for their studies, outlining how researchers can undertake, reflect on, and document the incentive design process. By advocating for a standardised yet flexible approach to incentive design and contributing valuable insights along with practical tools, we hope to pave the way for more reliable and generalizable knowledge in the field of human-AI decision-making.
Related papers
- Report on the Scoping Workshop on AI in Science Education Research 2025 [2.0613397238713778]
The report summarizes the outcomes of a two-day international scoping workshop on the role of artificial intelligence (AI) in science education research. As AI rapidly reshapes scientific practice, classroom learning, and research methods, the field faces both new opportunities and significant challenges. The report concludes with actionable recommendations for training, infrastructure, and standards, along with guidance for funders, policymakers, professional organizations, and academic departments.
arXiv Detail & Related papers (2025-11-18T10:23:22Z)
- RelAItionship Building: Analyzing Recruitment Strategies for Participatory AI [7.603490843138366]
We investigate the challenges that researchers face when designing and executing recruitment methodology for Participatory AI projects. We describe the recruitment methodologies used in AI projects using a corpus of 37 projects to capture the diversity of practices in the field and perform an initial analysis of the documentation of recruitment practices. We find that these outcomes are shaped by structural conditions of their work, researchers' own goals and expectations, and the relationships built from the recruitment methodology and subsequent collaboration.
arXiv Detail & Related papers (2025-08-27T18:00:46Z)
- Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that the efforts aiming to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment. We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Evaluating Human-AI Collaboration: A Review and Methodological Framework [4.41358655687435]
The use of artificial intelligence (AI) in working environments with individuals, known as Human-AI Collaboration (HAIC), has become essential. Evaluating HAIC's effectiveness remains challenging due to the complex interaction of components involved. This paper provides a detailed analysis of existing HAIC evaluation approaches and develops a fresh paradigm for more effectively evaluating these systems.
arXiv Detail & Related papers (2024-07-09T12:52:22Z)
- Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies [22.214709837295906]
There is growing interest in the research community to augment human decision making with AI assistance.
The emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI.
arXiv Detail & Related papers (2021-12-21T19:00:02Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.