Between accurate prediction and poor decision making: the AI/ML gap
- URL: http://arxiv.org/abs/2310.02029v1
- Date: Tue, 3 Oct 2023 13:15:02 GMT
- Title: Between accurate prediction and poor decision making: the AI/ML gap
- Authors: Gianluca Bontempi
- Abstract summary: This paper argues that the AI/ML community has so far taken a too unbalanced approach, devoting excessive attention to the estimation of the state probability.
Little evidence exists about the impact of a wrong utility assessment on the resulting expected utility of the decision strategy.
- Score: 0.19580473532948395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent agents rely on AI/ML functionalities to predict the consequences
of possible actions and to optimise the policy. However, the research community's
effort in addressing prediction accuracy has been so intense (and successful)
that it has created the illusion that the more accurate the learner's prediction
(or classification), the better the final decision. Such an assumption is valid
only if the (human or artificial) decision maker has complete knowledge of the
utility of the possible actions. This paper argues that the AI/ML community has
so far taken a too unbalanced approach, devoting excessive attention to the
estimation of the state (or target) probability to the detriment of accurate and
reliable estimations of the utility. In particular, little evidence exists about
the impact of a wrong utility assessment on the resulting expected utility of
the decision strategy. This situation is creating a substantial gap between the
expectations and the effective impact of AI solutions, as witnessed by recent
criticisms and emphasised by regulatory and legislative efforts. This paper aims
to study this gap by quantifying the sensitivity of the expected utility to
utility uncertainty and comparing it to the sensitivity due to probability
estimation. Theoretical and simulated results show that an inaccurate utility
assessment may be as harmful as (and sometimes more harmful than) a poor
probability estimation. The final recommendation to the community is therefore
to shift focus from a purely accuracy-driven (or accuracy-obsessed) approach to
a more utility-aware methodology.
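The sensitivity claim can be illustrated with a small Monte Carlo sketch (a hypothetical two-state, two-action toy setup, not the paper's own experiment): perturb either the state-probability estimate or the utility matrix with Gaussian noise of the same magnitude, and compare the expected-utility loss (regret) of the decisions taken under each kind of error.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(p, U):
    # U[a, s]: utility of action a in state s; p = P(state = 1)
    probs = np.array([1.0 - p, p])
    return U @ probs  # expected utility of each action

def regret(p_true, U_true, p_hat, U_hat):
    # utility lost by acting on estimates instead of the truth
    a_hat = np.argmax(expected_utility(p_hat, U_hat))
    eu_true = expected_utility(p_true, U_true)
    return eu_true.max() - eu_true[a_hat]

p_true = 0.6
U_true = np.array([[1.0, 0.0],   # action 0 pays off in state 0
                   [0.0, 1.0]])  # action 1 pays off in state 1
sigma = 0.3
n = 10_000

# (a) noise on the probability estimate only
r_prob = np.mean([
    regret(p_true, U_true,
           np.clip(p_true + sigma * rng.standard_normal(), 0.0, 1.0),
           U_true)
    for _ in range(n)
])

# (b) noise of the same magnitude on the utility estimates only
r_util = np.mean([
    regret(p_true, U_true,
           p_true,
           U_true + sigma * rng.standard_normal(U_true.shape))
    for _ in range(n)
])

print(f"mean regret, noisy probability: {r_prob:.4f}")
print(f"mean regret, noisy utilities:   {r_util:.4f}")
```

With these made-up numbers the two mean regrets come out on the same order of magnitude, which is the qualitative point of the abstract: errors in the utility assessment can rival errors in the probability estimate.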
Related papers
- Predictions as Surrogates: Revisiting Surrogate Outcomes in the Age of AI [12.569286058146343]
We establish a formal connection between the decades-old surrogate outcome model in biostatistics and the emerging field of prediction-powered inference (PPI).
We develop recalibrated prediction-powered inference, a more efficient approach to statistical inference than existing PPI proposals.
We demonstrate significant gains in effective sample size over existing PPI proposals via three applications leveraging state-of-the-art machine learning/AI models.
arXiv Detail & Related papers (2025-01-16T18:30:33Z)
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Robust Design and Evaluation of Predictive Algorithms under Unobserved Confounding [2.8498944632323755]
We propose a unified framework for the robust design and evaluation of predictive algorithms in selectively observed data.
We impose general assumptions on how much the outcome may vary on average between unselected and selected units.
We develop debiased machine learning estimators for the bounds on a large class of predictive performance estimands.
arXiv Detail & Related papers (2022-12-19T20:41:44Z)
- Making Decisions under Outcome Performativity [9.962472413291803]
We introduce a new optimality concept -- performative omniprediction.
A performative omnipredictor is a single predictor that simultaneously encodes the optimal decision rule.
We show that efficient performative omnipredictors exist, under a natural restriction of performative prediction.
arXiv Detail & Related papers (2022-10-04T17:04:47Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing what deterministic approaches fail to capture.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
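As a generic illustration of the dropout-based Bayesian approximation idea (a minimal MC-dropout sketch with made-up weights, not the authors' model), keeping dropout active at prediction time and averaging stochastic forward passes yields both a point forecast and an uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tiny network: forecast = W2 @ relu(W1 @ x),
# with dropout kept active at inference time (MC dropout).
W1 = rng.standard_normal((16, 4))
W2 = rng.standard_normal((2, 16))

def forward(x, drop_p=0.5):
    h = np.maximum(W1 @ x, 0.0)
    mask = rng.random(h.shape) >= drop_p  # sample a fresh dropout mask
    h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    return W2 @ h                          # predicted 2-D position offset

x = np.array([1.0, 0.5, -0.2, 0.3])        # hypothetical current state
samples = np.stack([forward(x) for _ in range(500)])

mean = samples.mean(axis=0)                # point forecast
std = samples.std(axis=0)                  # predictive uncertainty
print("forecast:", mean, "uncertainty:", std)
```

The spread of the sampled forecasts is exactly the uncertainty that a single deterministic pass cannot expose.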
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Uncertainty-aware Human Motion Prediction [0.4568777157687961]
We propose an uncertainty-aware framework for human motion prediction (UA-HMP)
We first design an uncertainty-aware predictor through Gaussian modeling to achieve the value and the uncertainty of predicted motion.
Then, an uncertainty-guided learning scheme is proposed to quantify the uncertainty and reduce the negative effect of noisy samples.
arXiv Detail & Related papers (2021-07-08T03:09:01Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the human.
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
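The doubly-robust idea behind such procedures can be sketched generically (an AIPW-style toy example on simulated data; the setup and names are illustrative, not the paper's actual method): combining an outcome model with inverse-propensity weighting keeps the counterfactual estimate consistent even when the outcome model is crude, as long as the propensity is correct.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: binary action a, outcome y observed only under the taken action.
n = 5000
x = rng.standard_normal(n)
p_a = 1.0 / (1.0 + np.exp(-x))               # true propensity P(a=1 | x)
a = (rng.random(n) < p_a).astype(float)
y = 2.0 * a + x + rng.standard_normal(n)     # true E[y | do(a=1)] = 2

# Doubly-robust (AIPW) estimate of E[y | do(a=1)], using the true
# propensity and a deliberately crude, confounded outcome model mu1(x).
mu1 = y[a == 1].mean() + 0.0 * x             # crude constant outcome model
dr = mu1 + a * (y - mu1) / p_a               # AIPW correction term
naive = y[a == 1].mean()                     # confounded plug-in estimate

print("naive estimate:", naive)
print("AIPW estimate: ", dr.mean())
```

The naive mean over treated units is biased upward by confounding through x, while the propensity-weighted correction pulls the estimate back toward the true value of 2.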
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.