Homophily and Incentive Effects in Use of Algorithms
- URL: http://arxiv.org/abs/2205.09701v1
- Date: Thu, 19 May 2022 17:11:04 GMT
- Title: Homophily and Incentive Effects in Use of Algorithms
- Authors: Riccardo Fogliato, Sina Fazelpour, Shantanu Gupta, Zachary Lipton,
David Danks
- Abstract summary: We present a crowdsourcing vignette study designed to assess the impacts of two plausible factors on AI-informed decision-making.
First, we examine homophily -- do people defer more to models that tend to agree with them?
Second, we consider incentives -- how do people incorporate a (known) cost structure in the hybrid decision-making setting?
- Score: 17.55279695774825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As algorithmic tools increasingly aid experts in making consequential
decisions, the need to understand the precise factors that mediate their
influence has grown commensurately. In this paper, we present a crowdsourcing
vignette study designed to assess the impacts of two plausible factors on
AI-informed decision-making. First, we examine homophily -- do people defer
more to models that tend to agree with them? -- by manipulating the agreement
during training between participants and the algorithmic tool. Second, we
consider incentives -- how do people incorporate a (known) cost structure in
the hybrid decision-making setting? -- by varying rewards associated with true
positives vs. true negatives. Surprisingly, we found limited influence of
homophily and no evidence of incentive effects, despite participants
performing similarly to those in previous studies. Higher levels of agreement between
the participant and the AI tool yielded more confident predictions, but only
when outcome feedback was absent. These results highlight the complexity of
characterizing human-algorithm interactions, and suggest that findings from
social psychology may require re-examination when humans interact with
algorithms.
Related papers
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Advancing Human-AI Complementarity: The Impact of User Expertise and
Algorithmic Tuning on Joint Decision Making [10.890854857970488]
Many factors can impact the success of Human-AI teams, including a user's domain expertise, mental models of an AI system, trust in recommendations, and more.
Our study examined user performance in a non-trivial blood vessel labeling task where participants indicated whether a given blood vessel was flowing or stalled.
Our results show that while recommendations from an AI-Assistant can aid user decision making, factors such as users' baseline performance relative to the AI and complementary tuning of AI error types significantly impact overall team performance.
arXiv Detail & Related papers (2022-08-16T21:39:58Z) - Crowdsourcing Impacts: Exploring the Utility of Crowds for Anticipating
Societal Impacts of Algorithmic Decision Making [7.068913546756094]
We employ crowdsourcing to uncover different types of impact areas based on a set of governmental algorithmic decision making tools.
Our findings suggest that this method is effective at leveraging the cognitive diversity of the crowd to uncover a range of issues.
arXiv Detail & Related papers (2022-07-19T19:46:53Z) - The Statistical Complexity of Interactive Decision Making [126.04974881555094]
We provide a complexity measure, the Decision-Estimation Coefficient, that is proven to be both necessary and sufficient for sample-efficient interactive learning.
A unified algorithm design principle, Estimation-to-Decisions (E2D), transforms any algorithm for supervised estimation into an online algorithm for decision making.
arXiv Detail & Related papers (2021-12-27T02:53:44Z) - Understanding Relations Between Perception of Fairness and Trust in
Algorithmic Decision Making [8.795591344648294]
We aim to understand the relationship between induced algorithmic fairness and its perception in humans.
We also study how induced algorithmic fairness affects user trust in algorithmic decision making.
arXiv Detail & Related papers (2021-09-29T11:00:39Z) - The Impact of Algorithmic Risk Assessments on Human Predictions and its
Analysis via Crowdsourcing Studies [79.66833203975729]
We conduct a vignette study in which laypersons are tasked with predicting future re-arrests.
Our key findings are as follows: Participants often predict that an offender will be rearrested even when they deem the likelihood of re-arrest to be well below 50%.
Judicial decisions, unlike participants' predictions, depend in part on factors that are orthogonal to the likelihood of re-arrest.
arXiv Detail & Related papers (2021-09-03T11:09:10Z) - Decision-makers Processing of AI Algorithmic Advice: Automation Bias
versus Selective Adherence [0.0]
A key concern is that human overreliance on algorithms introduces new biases into the human-algorithm interaction.
A second concern regards decision-makers' inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes.
We assess these via two studies simulating the use of algorithmic advice in decisions pertaining to the employment of school teachers in the Netherlands.
Our findings of selective, biased adherence belie the promise of neutrality that has propelled algorithm use in the public sector.
arXiv Detail & Related papers (2021-03-03T13:10:50Z) - Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted
Decision-making [46.625616262738404]
We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
arXiv Detail & Related papers (2020-10-15T22:25:41Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows performance competitive with the state of the art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.