Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions
- URL: http://arxiv.org/abs/2310.05921v3
- Date: Thu, 2 May 2024 13:10:06 GMT
- Title: Conformal Decision Theory: Safe Autonomous Decisions from Imperfect Predictions
- Authors: Jordan Lekeufack, Anastasios N. Angelopoulos, Andrea Bajcsy, Michael I. Jordan, Jitendra Malik
- Score: 80.34972679938483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Conformal Decision Theory, a framework for producing safe autonomous decisions despite imperfect machine learning predictions. Examples of such decisions are ubiquitous, from robot planning algorithms that rely on pedestrian predictions, to calibrating autonomous manufacturing to exhibit high throughput and low error, to the choice of trusting a nominal policy versus switching to a safe backup policy at run-time. The decisions produced by our algorithms are safe in the sense that they come with provable statistical guarantees of having low risk without any assumptions on the world model whatsoever; the observations need not be I.I.D. and can even be adversarial. The theory extends results from conformal prediction to calibrate decisions directly, without requiring the construction of prediction sets. Experiments demonstrate the utility of our approach in robot motion planning around humans, automated stock trading, and robot manufacturing.
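The mechanism the abstract describes admits a short illustration. The sketch below is in the spirit of the paper's conformal controller: a scalar decision parameter is calibrated online from observed risk feedback. The function names, the convention that a larger lam yields a more cautious decision, and the hyperparameters are assumptions of this sketch, not the paper's exact algorithm.

```python
import numpy as np

def conformal_controller(decide, risk, stream, eps=0.1, eta=0.05, lam0=1.0):
    """Online calibration of a scalar decision parameter lam.

    Convention (an assumption of this sketch): larger lam means a more
    cautious decision, so realized risk tends to decrease in lam. Each
    step acts with the current lam, observes the realized risk in [0, 1],
    and nudges lam so that the running average risk tracks the target
    eps. No distributional assumptions on the stream are required.
    """
    lam, risks = lam0, []
    for x_t, y_t in stream:
        d_t = decide(x_t, lam)     # act with the current parameter
        r_t = risk(d_t, y_t)       # observe the realized risk
        lam += eta * (r_t - eps)   # risk above target -> act more cautiously
        risks.append(r_t)
    return lam, float(np.mean(risks))
```

For bounded risks, an update of this form keeps the time-averaged risk within O(1/(eta * T)) of eps, which is the flavor of assumption-free guarantee the abstract refers to.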
Related papers
- A Decision-driven Methodology for Designing Uncertainty-aware AI Self-Assessment [8.482630532500057]
It is unclear if a given AI system's predictions can be trusted by decision-makers in downstream applications.
This manuscript is a practical guide for machine learning engineers and AI system users to select the ideal self-assessment techniques.
arXiv Detail & Related papers (2024-08-02T14:43:45Z)
- Conformal Prediction Sets Improve Human Decision Making [5.151594941369301]
We study the usefulness of conformal prediction sets as an aid for human decision making.
We find that when humans are given conformal prediction sets their accuracy on tasks improves compared to fixed-size prediction sets with the same coverage guarantee.
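(A minimal construction of such coverage-guaranteed prediction sets is sketched after this list.)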
arXiv Detail & Related papers (2024-01-24T19:01:22Z)
- Self-Aware Trajectory Prediction for Safe Autonomous Driving [9.868681330733764]
Trajectory prediction is one of the key components of the autonomous driving software stack.
In this paper, a self-aware trajectory prediction method is proposed.
The proposed method performed well in terms of self-awareness, memory footprint, and real-time performance.
arXiv Detail & Related papers (2023-05-16T03:53:23Z)
- Calibrating AI Models for Wireless Communications via Conformal Prediction [55.47458839587949]
Conformal prediction is applied for the first time to the design of AI for communication systems.
This paper investigates the application of conformal prediction as a general framework to obtain AI models that produce decisions with formal calibration guarantees.
arXiv Detail & Related papers (2022-12-15T12:52:23Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- On the Fairness of Machine-Assisted Human Decisions [3.4069627091757178]
We show that the inclusion of a biased human decision-maker can reverse common relationships between the structure of the algorithm and the qualities of resulting decisions.
In the lab experiment, we demonstrate how predictions informed by gender-specific information can reduce average gender disparities in decisions.
arXiv Detail & Related papers (2021-10-28T17:24:45Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
arXiv Detail & Related papers (2020-11-15T08:22:39Z)
- AutoCP: Automated Pipelines for Accurate Prediction Intervals [84.16181066107984]
This paper proposes an AutoML framework called Automatic Machine Learning for Conformal Prediction (AutoCP).
Unlike the familiar AutoML frameworks that attempt to select the best prediction model, AutoCP constructs prediction intervals that achieve the user-specified target coverage rate.
We tested AutoCP on a variety of datasets and found that it significantly outperforms benchmark algorithms.
arXiv Detail & Related papers (2020-06-24T23:13:11Z)
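As referenced in the "Conformal Prediction Sets Improve Human Decision Making" entry above, prediction sets with a coverage guarantee can be built with the standard split-conformal recipe. The sketch below is a generic illustration rather than code from any listed paper; the array shapes and the choice of nonconformity score (one minus the softmax probability of the true label) are assumptions of this sketch.

```python
import numpy as np

def split_conformal_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    """Prediction sets with marginal 1 - alpha coverage (split conformal).

    probs_cal:  (n, K) softmax scores on held-out calibration inputs
    labels_cal: (n,) integer true labels for the calibration inputs
    probs_test: (m, K) softmax scores on new inputs
    """
    n = len(labels_cal)
    # Nonconformity score: 1 - probability assigned to the true label.
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected quantile level, capped at 1.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    # A label joins the set when its score falls at or below the threshold.
    return [np.flatnonzero(1.0 - p <= qhat) for p in probs_test]
```

For exchangeable calibration and test data, sets built this way contain the true label with probability at least 1 - alpha, regardless of how accurate the underlying classifier is.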