Algorithmic Assistance with Recommendation-Dependent Preferences
- URL: http://arxiv.org/abs/2208.07626v3
- Date: Fri, 19 Jan 2024 16:52:27 GMT
- Title: Algorithmic Assistance with Recommendation-Dependent Preferences
- Authors: Bryce McLaughlin and Jann Spiess
- Abstract summary: We consider the effect and design of algorithmic recommendations when they affect choices.
We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation.
- Score: 2.864550757598007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When an algorithm provides risk assessments, we typically think of them as
helpful inputs to human decisions, such as when risk scores are presented to
judges or doctors. However, a decision-maker may not only react to the
information provided by the algorithm. The decision-maker may also view the
algorithmic recommendation as a default action, making it costly for them to
deviate, such as when a judge is reluctant to overrule a high-risk assessment
for a defendant or a doctor fears the consequences of deviating from
recommended procedures. To address such unintended consequences of algorithmic
assistance, we propose a principal-agent model of joint human-machine
decision-making. Within this model, we consider the effect and design of
algorithmic recommendations when they affect choices not just by shifting
beliefs, but also by altering preferences. We motivate this assumption from
institutional factors, such as a desire to avoid audits, as well as from
well-established models in behavioral science that predict loss aversion
relative to a reference point, which here is set by the algorithm. We show that
recommendation-dependent preferences create inefficiencies where the
decision-maker is overly responsive to the recommendation. As a potential
remedy, we discuss algorithms that strategically withhold recommendations, and
show how they can improve the quality of final decisions.
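To make the mechanism concrete, here is a minimal simulation sketch of the core idea: a decision-maker who pays a utility cost for deviating from the recommendation follows even weakly informative recommendations, while withholding the recommendation when the algorithm is uncertain lets the decision-maker's own information decide those cases. All functional forms, parameter values, and the withholding rule below are illustrative assumptions, not the paper's specification.

```python
# Illustrative simulation of recommendation-dependent preferences.
# Everything here is an assumption for illustration, not the paper's model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
sigma_h, sigma_a = 1.0, 1.0   # noise in the human's / algorithm's signals
kappa = 0.4                   # utility cost of deviating from the recommendation

theta = rng.integers(0, 2, n)                   # binary state (e.g., high risk or not)
s = theta + sigma_h * rng.standard_normal(n)    # human's private signal
a = theta + sigma_a * rng.standard_normal(n)    # algorithm's signal
rec = (a > 0.5).astype(int)                     # recommendation = algorithm's MAP action

def posterior(s, rec=None):
    """P(theta = 1 | s, rec), assuming conditionally independent signals."""
    log_odds = norm.logpdf(s, 1, sigma_h) - norm.logpdf(s, 0, sigma_h)
    if rec is not None:
        p1 = np.where(rec == 1, norm.sf(0.5, 1, sigma_a), norm.cdf(0.5, 1, sigma_a))
        p0 = np.where(rec == 1, norm.sf(0.5, 0, sigma_a), norm.cdf(0.5, 0, sigma_a))
        log_odds = log_odds + np.log(p1) - np.log(p0)
    return 1 / (1 + np.exp(-log_odds))

def human_action(q, rec):
    # Follow the recommendation unless the posterior is strong enough that the
    # expected accuracy gain from deviating exceeds the deviation cost kappa.
    follow = np.where(rec == 1, q > (1 - kappa) / 2, q < (1 + kappa) / 2)
    return np.where(follow, rec, 1 - rec)

# Policy 1: always recommend -> the deviation penalty distorts every decision.
acc_always = np.mean(human_action(posterior(s, rec), rec) == theta)

# Policy 2: withhold when the algorithm is uncertain (|a - 0.5| small); the
# human then decides on their own signal with no penalty in play.
# (Simplifications: the human updates on an issued recommendation as if it
# were always issued, and does not update on the withholding event itself.)
uncertain = np.abs(a - 0.5) < 0.75
act = np.where(uncertain, (posterior(s) > 0.5).astype(int),
               human_action(posterior(s, rec), rec))
acc_selective = np.mean(act == theta)

print(f"always recommend:      accuracy {acc_always:.3f}")
print(f"selective withholding: accuracy {acc_selective:.3f}")
```

With these parameter values the selective policy should come out ahead in simulation: in the uncertain region the recommendation carries little information yet still drags decisions toward it, so silence there lets the human's own signal do better.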
Related papers
- Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework [12.967730957018688]
We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
arXiv Detail & Related papers (2024-10-11T13:03:53Z)
- Designing Algorithmic Recommendations to Achieve Human-AI Complementarity [2.4247752614854203]
We formalize the design of recommendation algorithms that assist human decision-makers.
We use a potential-outcomes framework to model the effect of recommendations on a human decision-maker's binary treatment choice.
We derive minimax optimal recommendation algorithms that can be implemented with machine learning.
arXiv Detail & Related papers (2024-05-02T17:15:30Z)
- Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies [0.43981305860983716]
We show how to compare the performance of three alternative decision-making systems: human-alone, human-with-AI, and AI-alone (a naive accuracy comparison is sketched after this entry).
We find that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail.
arXiv Detail & Related papers (2024-03-18T01:04:52Z)
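As a toy version of the three-way comparison described in the entry above, the sketch below scores decisions from each system against observed outcomes, with bootstrap intervals. The data are synthetic stand-ins, and the sketch deliberately ignores the identification issues (e.g., selective labels, experimental vs. observational designs) that the paper's framework is built to handle.

```python
# Naive accuracy comparison of three decision pipelines on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
outcomes = rng.integers(0, 2, n)   # e.g., failure-to-appear indicator

def noisy_decisions(truth, error_rate):
    """Synthetic decision-maker: agrees with the outcome except at error_rate."""
    flip = rng.random(len(truth)) < error_rate
    return np.where(flip, 1 - truth, truth)

systems = {
    "human-alone":   noisy_decisions(outcomes, 0.35),
    "human-with-AI": noisy_decisions(outcomes, 0.33),
    "AI-alone":      noisy_decisions(outcomes, 0.30),
}

def accuracy_ci(decisions, outcomes, n_boot=2_000):
    """Accuracy with a 95% bootstrap confidence interval."""
    correct = (decisions == outcomes).astype(float)
    idx = rng.integers(0, len(correct), (n_boot, len(correct)))
    boots = correct[idx].mean(axis=1)
    return correct.mean(), np.quantile(boots, [0.025, 0.975])

for name, d in systems.items():
    acc, (lo, hi) = accuracy_ci(d, outcomes)
    print(f"{name:14s} accuracy {acc:.3f}  95% CI [{lo:.3f}, {hi:.3f}]")
```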
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization (an illustrative special case is sketched after this entry).
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
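The entry above generalizes classical likelihood-ratio constructions. As an illustrative special case (a Robbins-style mixture construction, an assumption of this sketch rather than the paper's estimator-sequence method), the following builds a likelihood-ratio confidence sequence for a Bernoulli mean:

```python
# Mixture likelihood-ratio confidence sequence for a Bernoulli mean.
import numpy as np
from scipy.special import betaln

def bernoulli_confidence_sequence(xs, alpha=0.05,
                                  grid=np.linspace(1e-4, 1 - 1e-4, 999)):
    """After each observation, yield the set {p : L_t(p) > alpha * M_t},
    where M_t is the uniform-mixture marginal likelihood."""
    s, t = 0, 0
    for x in xs:
        s += x
        t += 1
        log_marginal = betaln(s + 1, t - s + 1)   # log of integral of L_t over p
        log_lik = s * np.log(grid) + (t - s) * np.log(1 - grid)
        in_set = log_lik > np.log(alpha) + log_marginal
        yield grid[in_set].min(), grid[in_set].max()

rng = np.random.default_rng(0)
xs = rng.random(1000) < 0.3   # Bernoulli(0.3) data stream
for t, (lo, hi) in enumerate(bernoulli_confidence_sequence(xs), 1):
    if t in (10, 100, 1000):
        print(f"t={t:5d}  interval = [{lo:.3f}, {hi:.3f}]")
```

Validity here comes from Ville's inequality applied to the nonnegative martingale M_t / L_t(p): the intervals cover the true mean simultaneously over all times with probability at least 1 - alpha.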
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Bayesian Persuasion for Algorithmic Recourse [28.586165301962485]
In some situations, the underlying predictive model is deliberately kept secret to avoid gaming.
This opacity forces the decision subjects to rely on incomplete information when making strategic feature modifications.
We capture such settings as a game of Bayesian persuasion, in which the decision-maker sends a signal, e.g., an action recommendation, to a decision subject to incentivize them to take desirable actions (the canonical binary example is sketched after this entry).
arXiv Detail & Related papers (2021-12-12T17:18:54Z)
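For intuition about the persuasion mechanics behind the entry above, the sketch below works through the textbook binary example (in the spirit of Kamenica and Gentzkow) rather than the paper's recourse setting; the numbers are illustrative assumptions.

```python
# Textbook binary Bayesian persuasion: the sender commits to a signal that
# makes the "act" recommendation just barely credible to the receiver.
mu = 0.3          # prior P(state = good), e.g., applicant will repay
threshold = 0.5   # receiver takes the sender-preferred action iff
                  # posterior P(good) >= threshold

# Signal: recommend "act" always when the state is good, and with
# probability q when it is bad, with q chosen so that the posterior after
# an "act" recommendation equals the threshold exactly (Bayes' rule).
q = mu * (1 - threshold) / ((1 - mu) * threshold)
p_act = mu + (1 - mu) * q          # total probability the receiver acts
posterior_act = mu / p_act         # posterior P(good | "act") = threshold

print(f"q = {q:.3f}, receiver acts with prob {p_act:.3f}, "
      f"posterior after 'act' = {posterior_act:.3f}")
# Compare: full revelation -> the receiver acts with prob mu = 0.3;
# no information -> the receiver never acts (prior 0.3 < 0.5).
```

The design choice is the commitment: by garbling just enough, the sender raises the probability of the preferred action from 0.3 to 0.6 while keeping the recommendation (weakly) worth following.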
- Confidence-Budget Matching for Sequential Budgeted Learning [69.77435313099366]
We formalize decision-making problems with a querying budget.
We consider multi-armed bandits, linear bandits, and reinforcement learning problems.
We show that CBM-based algorithms perform well in the presence of adversity.
arXiv Detail & Related papers (2021-02-05T19:56:31Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.