Algorithmic Decision-Making Safeguarded by Human Knowledge
- URL: http://arxiv.org/abs/2211.11028v1
- Date: Sun, 20 Nov 2022 17:13:32 GMT
- Title: Algorithmic Decision-Making Safeguarded by Human Knowledge
- Authors: Ningyuan Chen, Ming Hu, Wenhao Li
- Abstract summary: We study the augmentation of algorithmic decisions with human knowledge.
We show that when the algorithmic decision is asymptotically optimal with large data, the non-data-driven human guardrail usually provides no benefit.
However, when the algorithm suffers from a lack of domain knowledge, model misspecification, or data contamination, the augmentation from human knowledge can still improve the performance of the algorithmic decision even with sufficient data.
- Score: 8.482569811904028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commercial AI solutions provide analysts and managers with data-driven
business intelligence for a wide range of decisions, such as demand forecasting
and pricing. However, human analysts may have their own insights and
experiences about the decision-making that are at odds with the algorithmic
recommendation. In view of such a conflict, we provide a general analytical
framework to study the augmentation of algorithmic decisions with human
knowledge: the analyst uses the knowledge to set a guardrail by which the
algorithmic decision is clipped whenever the algorithmic output falls out of
bounds and thus seems unreasonable. We study the conditions under which the
augmentation is
beneficial relative to the raw algorithmic decision. We show that when the
algorithmic decision is asymptotically optimal with large data, the
non-data-driven human guardrail usually provides no benefit. However, we point
out three common pitfalls of the algorithmic decision: (1) lack of domain
knowledge, such as the market competition, (2) model misspecification, and (3)
data contamination. In these cases, even with sufficient data, the augmentation
from human knowledge can still improve the performance of the algorithmic
decision.
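To make the guardrail mechanism concrete, here is a minimal sketch in Python. The function name and the pricing numbers are illustrative assumptions rather than the paper's notation or model, but the clipping logic mirrors the augmentation described in the abstract.

```python
import numpy as np

def guardrail_decision(algorithmic_decision, lower, upper):
    """Clip an algorithmic decision to the analyst's guardrail [lower, upper].

    If the algorithmic output falls outside the bounds the analyst deems
    reasonable, it is pulled back to the nearest bound; otherwise it is kept.
    """
    return float(np.clip(algorithmic_decision, lower, upper))

# Hypothetical example: a data-driven price recommendation of 14.2 is clipped
# to the analyst's guardrail [8, 12], which encodes knowledge (e.g., market
# competition) that the algorithm lacks.
print(guardrail_decision(14.2, lower=8.0, upper=12.0))  # -> 12.0
print(guardrail_decision(10.5, lower=8.0, upper=12.0))  # -> 10.5 (unchanged)
```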
Related papers
- Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework [12.967730957018688]
We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
arXiv Detail & Related papers (2024-10-11T13:03:53Z)
- Designing Algorithmic Recommendations to Achieve Human-AI Complementarity [2.4247752614854203]
We formalize the design of recommendation algorithms that assist human decision-makers.
We use a potential-outcomes framework to model the effect of recommendations on a human decision-maker's binary treatment choice.
We derive minimax optimal recommendation algorithms that can be implemented with machine learning.
arXiv Detail & Related papers (2024-05-02T17:15:30Z)
- Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies [0.43981305860983716]
We show how to compare the performance of three alternative decision-making systems--human-alone, human-with-AI, and AI-alone.
We find that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail.
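As a rough illustration of that three-way comparison, the toy Python snippet below scores hypothetical human-alone, human-with-AI, and AI-alone decisions against observed outcomes. The data and the plain accuracy metric are made up for illustration; the paper's statistical framework addresses complications that this naive comparison ignores.

```python
import numpy as np

# Hypothetical binary decisions (e.g., 1 = impose bail) and observed outcomes.
outcomes      = np.array([1, 0, 0, 1, 0, 1, 0, 0])
human_alone   = np.array([1, 1, 0, 0, 0, 1, 0, 1])
human_with_ai = np.array([1, 0, 0, 1, 0, 1, 0, 1])
ai_alone      = np.array([1, 0, 1, 1, 0, 1, 0, 0])

def accuracy(decisions, observed):
    """Fraction of decisions that match the observed outcome."""
    return float(np.mean(decisions == observed))

for name, decisions in [("human-alone", human_alone),
                        ("human-with-AI", human_with_ai),
                        ("AI-alone", ai_alone)]:
    print(f"{name}: accuracy = {accuracy(decisions, outcomes):.2f}")
```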
arXiv Detail & Related papers (2024-03-18T01:04:52Z)
- Persuasion, Delegation, and Private Information in Algorithm-Assisted Decisions [0.0]
A principal designs an algorithm that generates a publicly observable prediction of a binary state.
She must decide whether to act directly based on the prediction or to delegate the decision to an agent with private information but potential misalignment.
We study the optimal design of the prediction algorithm and the delegation rule in such environments.
arXiv Detail & Related papers (2024-02-14T18:32:30Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Decision-aid or Controller? Steering Human Decision Makers with Algorithms [5.449173263947196]
We study a decision-aid algorithm that learns about the human decision maker and provides "personalized recommendations" to influence final decisions.
We discuss the potential applications of such algorithms and their social implications.
arXiv Detail & Related papers (2023-03-23T23:24:26Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of strategies for handling such uncertainty, determines their range of application, and builds a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z)
- A2Log: Attentive Augmented Log Anomaly Detection [53.06341151551106]
Anomaly detection is becoming increasingly important for the dependability and serviceability of IT services.
Existing unsupervised methods need anomaly examples to obtain a suitable decision boundary.
We develop A2Log, which is an unsupervised anomaly detection method consisting of two steps: Anomaly scoring and anomaly decision.
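As a generic illustration of such a score-then-decide pipeline (not the A2Log model itself, whose scoring step is a learned, attention-based component over log messages), the sketch below scores events by distance to the centroid of normal training data and sets the decision threshold from normal data alone, with no anomaly examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: anomaly scoring -- a simple distance-to-centroid score over
# hypothetical numeric log features (stand-in for a learned scoring model).
train = rng.normal(loc=0.0, scale=1.0, size=(500, 8))  # normal-only training data
centroid = train.mean(axis=0)

def score(x):
    return np.linalg.norm(x - centroid, axis=-1)

# Step 2: anomaly decision -- choose a threshold from normal data alone
# (no anomaly examples), e.g. a high quantile of the training scores.
threshold = np.quantile(score(train), 0.99)

new_events = rng.normal(loc=0.0, scale=1.0, size=(5, 8))
new_events[0] += 6.0                                    # inject an obvious outlier
print(score(new_events) > threshold)                    # the outlier should be flagged
```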
arXiv Detail & Related papers (2021-09-20T13:40:21Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
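A toy version of the decision-theoretic selection idea, under illustrative assumptions: each algorithm's runtime is summarized by a survival function (a made-up Weibull stands in for the learned model), and the algorithm with the smallest expected penalized runtime under a cutoff is selected. The cutoff, penalty factor, and parameters below are not taken from the paper.

```python
import numpy as np

CUTOFF = 3600.0   # runtime budget in seconds (illustrative)
PENALTY = 10.0    # PAR10-style penalty factor for timeouts (illustrative)

def expected_penalized_runtime(survival, cutoff=CUTOFF, penalty=PENALTY, n=10_000):
    """Expected cost of running one algorithm under a timeout.

    Uses E[min(T, C)] = integral_0^C S(t) dt (trapezoidal rule) and adds
    (penalty - 1) * C * S(C) for the probability mass that times out.
    """
    t = np.linspace(0.0, cutoff, n)
    s = survival(t)
    e_min = float(np.sum(0.5 * (s[:-1] + s[1:]) * np.diff(t)))  # E[min(T, C)]
    return e_min + (penalty - 1.0) * cutoff * float(survival(cutoff))

def weibull_survival(scale, shape):
    """Hypothetical Weibull survival model S(t) = exp(-(t / scale)^shape)."""
    return lambda t: np.exp(-(np.asarray(t, dtype=float) / scale) ** shape)

# Two made-up candidate algorithms with different runtime behaviour.
algorithms = {
    "solver_A": weibull_survival(scale=800.0, shape=1.2),
    "solver_B": weibull_survival(scale=500.0, shape=0.7),
}

costs = {name: expected_penalized_runtime(s) for name, s in algorithms.items()}
print(costs)
print("selected:", min(costs, key=costs.get))
```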
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)