Persuasion, Delegation, and Private Information in Algorithm-Assisted
Decisions
- URL: http://arxiv.org/abs/2402.09384v2
- Date: Wed, 21 Feb 2024 18:01:48 GMT
- Title: Persuasion, Delegation, and Private Information in Algorithm-Assisted
Decisions
- Authors: Ruqing Xu
- Abstract summary: A principal designs an algorithm that generates a publicly observable prediction of a binary state.
She must decide whether to act directly based on the prediction or to delegate the decision to an agent with private information but potential misalignment.
We study the optimal design of the prediction algorithm and the delegation rule in such environments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A principal designs an algorithm that generates a publicly observable
prediction of a binary state. She must decide whether to act directly based on
the prediction or to delegate the decision to an agent with private information
but potential misalignment. We study the optimal design of the prediction
algorithm and the delegation rule in such environments. Three key findings
emerge: (1) Delegation is optimal if and only if the principal would make the
same binary decision as the agent had she observed the agent's information. (2)
Providing the most informative algorithm may be suboptimal even if the
principal can act on the algorithm's prediction. Instead, the optimal algorithm
may provide more information about one state and restrict information about the
other. (3) Well-intentioned policies aiming to provide more information, such
as keeping a "human-in-the-loop" or requiring maximal prediction accuracy,
could strictly worsen decision quality compared to systems with no human or no
algorithmic assistance. These findings predict the underperformance of
human-machine collaborations if no measures are taken to mitigate common
preference misalignment between algorithms and human decision-makers.
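To make finding (1) concrete, the following is a minimal Monte-Carlo sketch of a toy version of this setup, not code from the paper: a binary state, a public algorithmic prediction, a private agent signal, and misalignment modeled as different costs of acting when the state is 0. The prior, signal accuracies, and cost parameters are all illustrative assumptions.

```python
# Toy sketch (not from the paper): binary state, a public algorithmic
# prediction, a private agent signal, and threshold misalignment.
# All numbers below (prior, accuracies, costs) are assumed for illustration.
import random

random.seed(0)

PRIOR = 0.3        # P(state = 1), assumed
ACC_ALGO = 0.8     # accuracy of the public algorithmic prediction, assumed
ACC_AGENT = 0.75   # accuracy of the agent's private signal, assumed
COST_P = 1.0       # principal's cost of acting when state = 0, assumed
COST_A = 0.4       # agent's cost (misaligned: agent acts more readily)

def posterior(prior, signals):
    """P(state = 1 | signals); each signal is (observed_value, accuracy)."""
    num, den = prior, 1 - prior
    for s, acc in signals:
        num *= acc if s == 1 else 1 - acc
        den *= 1 - acc if s == 1 else acc
    return num / (num + den)

def act(post, cost):
    # Act (a = 1) iff expected gain post * 1 - (1 - post) * cost >= 0.
    return 1 if post >= cost / (1 + cost) else 0

def payoff_principal(a, state):
    return (1 if state == 1 else -COST_P) if a == 1 else 0

n = 200_000
pay_direct = pay_delegate = agree = 0
for _ in range(n):
    state = 1 if random.random() < PRIOR else 0
    s = state if random.random() < ACC_ALGO else 1 - state   # public prediction
    t = state if random.random() < ACC_AGENT else 1 - state  # private signal

    a_direct = act(posterior(PRIOR, [(s, ACC_ALGO)]), COST_P)
    post_full = posterior(PRIOR, [(s, ACC_ALGO), (t, ACC_AGENT)])
    a_agent = act(post_full, COST_A)    # delegated decision
    a_p_full = act(post_full, COST_P)   # principal's choice, had she seen t

    pay_direct += payoff_principal(a_direct, state)
    pay_delegate += payoff_principal(a_agent, state)
    agree += a_agent == a_p_full

print(f"principal acts on prediction alone: {pay_direct / n:+.4f}")
print(f"principal delegates to the agent:   {pay_delegate / n:+.4f}")
print(f"agent agrees with fully-informed principal: {agree / n:.1%}")
```

Setting COST_A equal to COST_P aligns the agent with the principal; the agent then always matches the fully-informed principal's decision and delegation weakly dominates acting on the public prediction alone, consistent with finding (1). Widening the cost gap breaks the agreement and makes direct action competitive.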
Related papers
- Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework [12.967730957018688]
We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
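As a loose illustration of the "look the same" notion (the paper's framework is more general), the sketch below assumes the feasible class is a handful of one-dimensional threshold predictors and flags input pairs that every predictor in the class labels identically; those are exactly the pairs on which only human judgment could add information. The class and inputs are made-up assumptions.

```python
# Minimal sketch under strong assumptions: the feasible predictor class is
# a small, finite set of threshold rules on one feature. Two inputs are
# "algorithmically indistinguishable" when every feasible predictor labels
# them the same way. Class and inputs are illustrative, not from the paper.

FEASIBLE_THRESHOLDS = [0.2, 0.5, 0.8]  # assumed predictor class

def predictions(x):
    """Label of input x under every feasible threshold predictor."""
    return tuple(1 if x >= t else 0 for t in FEASIBLE_THRESHOLDS)

def indistinguishable(x1, x2):
    return predictions(x1) == predictions(x2)

inputs = [0.30, 0.45, 0.55, 0.60]
for i, a in enumerate(inputs):
    for b in inputs[i + 1:]:
        if indistinguishable(a, b):
            # Every feasible predictor agrees on a and b; any remaining
            # distinction must come from human judgment.
            print(f"{a} and {b} look the same to all feasible predictors")
```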
arXiv Detail & Related papers (2024-10-11T13:03:53Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Decision-aid or Controller? Steering Human Decision Makers with Algorithms [5.449173263947196]
We study a decision-aid algorithm that learns about the human decision maker and provides ''personalized recommendations'' to influence final decisions.
We discuss the potential applications of such algorithms and their social implications.
arXiv Detail & Related papers (2023-03-23T23:24:26Z)
- Minimalistic Predictions to Schedule Jobs with Online Precedence Constraints [117.8317521974783]
We consider non-clairvoyant scheduling with online precedence constraints.
An algorithm is oblivious to any job dependencies and learns about a job only if all of its predecessors have been completed.
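A minimal single-machine sketch of this information model, with a made-up job DAG and hidden processing times: the scheduler round-robins in unit slices over the jobs revealed so far, and a job becomes visible only once all of its predecessors have completed.

```python
# Hedged sketch of non-clairvoyant scheduling with online precedence
# constraints: processing times are hidden until a job finishes, and a job
# is revealed only when its predecessors are done. Data is illustrative.

proc = {"a": 3, "b": 1, "c": 2, "d": 2}              # hidden from scheduler
preds = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"]}

done, remaining, completion, t = set(), dict(proc), {}, 0
while len(done) < len(proc):
    # Only jobs whose predecessors have all finished are visible.
    visible = [j for j in remaining
               if j not in done and all(p in done for p in preds[j])]
    for j in visible:                  # one round-robin pass, unit slices
        remaining[j] -= 1
        t += 1
        if remaining[j] == 0:
            done.add(j)
            completion[j] = t
            break                      # re-scan: finishing j may reveal jobs

print("completion times:", completion)
print("total completion time:", sum(completion.values()))
```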
arXiv Detail & Related papers (2023-01-30T13:17:15Z)
- Algorithmic Decision-Making Safeguarded by Human Knowledge [8.482569811904028]
We study the augmentation of algorithmic decisions with human knowledge.
We show that when the algorithmic decision is optimal with large data, the non-data-driven human guardrail usually provides no benefit.
When the algorithmic decision is suboptimal, however, the augmentation from human knowledge can still improve its performance, even with sufficient data.
arXiv Detail & Related papers (2022-11-20T17:13:32Z)
- Algorithmic Assistance with Recommendation-Dependent Preferences [2.864550757598007]
We consider the effect and design of algorithmic recommendations when they affect choices.
We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation.
arXiv Detail & Related papers (2022-08-16T09:24:47Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
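As a toy illustration of the setting (not this paper's algorithm), the sketch below compares the prediction-free round-robin baseline against naively running jobs in order of predicted length; the job lengths and predictions are made-up numbers, and the robust combination of the two strategies studied in this line of work is omitted.

```python
# Illustrative comparison only: round-robin ignores all information, while
# shortest-predicted-processing-time trusts the (untrusted) predictions.
# Lengths and predictions are assumed data, not from the paper.

true_len = [4, 1, 3, 2]
predicted = [4, 1, 3, 2]   # corrupt these to see the prediction-trusting
                           # strategy degrade while round-robin is unaffected

def total_completion_spt(true_len, predicted):
    # Run jobs to completion in increasing predicted-length order.
    order = sorted(range(len(true_len)), key=lambda j: predicted[j])
    t = total = 0
    for j in order:
        t += true_len[j]
        total += t
    return total

def total_completion_rr(true_len):
    # Unit-slice round-robin, oblivious to all processing times.
    rem = list(true_len)
    alive = set(range(len(rem)))
    t = total = 0
    while alive:
        for j in sorted(alive):        # snapshot, safe to mutate alive
            t += 1
            rem[j] -= 1
            if rem[j] == 0:
                alive.discard(j)
                total += t
    return total

print("round-robin total completion time: ", total_completion_rr(true_len))
print("trust-the-prediction (SPT) total:  ",
      total_completion_spt(true_len, predicted))
```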
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
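Ski rental is the most compact of the applications listed above, so here is the standard prediction-augmented ski-rental rule (in the style of Purohit, Svitkina, and Kumar) as a reference point; it is not the learning procedure contributed by this paper, which concerns how to learn the predictions that such algorithms consume.

```python
# Standard ski-rental-with-a-prediction rule, included only to make the
# "algorithms with predictions" setting concrete. lam in (0, 1] trades
# consistency (trusting the prediction) against robustness.
import math

def ski_rental_cost(true_days, predicted_days, buy_price, lam=0.5):
    if predicted_days >= buy_price:
        buy_day = math.ceil(lam * buy_price)     # prediction says: buy early
    else:
        buy_day = math.ceil(buy_price / lam)     # prediction says: keep renting
    if true_days >= buy_day:
        return (buy_day - 1) + buy_price         # rented, then bought
    return true_days                             # rented every day

B = 10
for true_days, pred in [(3, 3), (3, 40), (40, 40), (40, 3)]:
    alg = ski_rental_cost(true_days, pred, B)
    opt = min(true_days, B)
    print(f"T={true_days:2d} pred={pred:2d}: ALG={alg:2d} OPT={opt:2d} "
          f"ratio={alg / opt:.2f}")
```

With lam = 0.5 the rule's competitive ratio stays below 1 + lam = 1.5 when the prediction is right and below 1 + 1/lam = 3 no matter how wrong it is, which is the consistency-robustness shape these guarantees take.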
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- On the Fairness of Machine-Assisted Human Decisions [3.4069627091757178]
We show that the inclusion of a biased human decision-maker can revert common relationships between the structure of the algorithm and the qualities of resulting decisions.
In a lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions.
arXiv Detail & Related papers (2021-10-28T17:24:45Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
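For reference, below is a bare-bones implementation of the classic, advice-free double-coverage rule on the line; the paper's contribution, blending this rule with machine-learned advice to trade off consistency against robustness, is not reproduced here. Server positions and requests are made-up data.

```python
# Classic double coverage for k-server on the line: if a request has
# servers on both sides, the nearest server on each side moves toward it
# at equal speed until one arrives; otherwise the nearest server moves all
# the way. Positions and requests below are illustrative.

def double_coverage(servers, requests):
    servers = sorted(servers)
    total = 0
    for r in requests:
        left = [s for s in servers if s <= r]
        right = [s for s in servers if s >= r]
        if left and right:
            l, rt = max(left), min(right)
            d = min(r - l, rt - r)
            i, j = servers.index(l), servers.index(rt)
            if r - l <= rt - r:
                servers[i] = r          # left server reaches the request
                servers[j] = rt - d     # right server moves the same distance
            else:
                servers[j] = r
                servers[i] = l + d
            total += 2 * d
        else:
            # All servers on one side: move the nearest one to r.
            s = max(left) if left else min(right)
            total += abs(r - s)
            servers[servers.index(s)] = r
    return total

print("movement cost:", double_coverage([0, 10], [3, 8, 1]))
```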
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.