Reconciling Individual Probability Forecasts
- URL: http://arxiv.org/abs/2209.01687v2
- Date: Sat, 6 May 2023 18:57:05 GMT
- Title: Reconciling Individual Probability Forecasts
- Authors: Aaron Roth and Alexander Tolbert and Scott Weinstein
- Abstract summary: We show that two parties who agree on the data cannot agree to disagree on how to model individual probabilities.
We conclude that although individual probabilities are unknowable, they are contestable via a computationally and data efficient process.
- Score: 78.0074061846588
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Individual probabilities refer to the probabilities of outcomes that are
realized only once: the probability that it will rain tomorrow, the probability
that Alice will die within the next 12 months, the probability that Bob will be
arrested for a violent crime in the next 18 months, etc. Individual
probabilities are fundamentally unknowable. Nevertheless, we show that two
parties who agree on the data -- or on how to sample from a data distribution
-- cannot agree to disagree on how to model individual probabilities. This is
because any two models of individual probabilities that substantially disagree
can together be used to empirically falsify and improve at least one of the two
models. This can be efficiently iterated in a process of "reconciliation" that
results in models that both parties agree are superior to the models they
started with, and which themselves (almost) agree on the forecasts of
individual probabilities (almost) everywhere. We conclude that although
individual probabilities are unknowable, they are contestable via a
computationally and data efficient process that must lead to agreement. Thus we
cannot find ourselves in a situation in which we have two equally accurate and
unimprovable models that disagree substantially in their predictions --
providing an answer to what is sometimes called the predictive or model
multiplicity problem.
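The iterative "reconciliation" dynamic sketched in the abstract, find a region where the two models substantially disagree, empirically falsify the model whose forecasts are further from the observed frequency there, and patch it, can be illustrated with a toy implementation. This is a simplification written for this summary, not the paper's exact algorithm; the function names, the patch rule, and the stopping condition are our assumptions, and each example `x` is assumed to appear once in the data.

```python
def reconcile(f1, f2, data, eps=0.2, rounds=100):
    """Toy reconciliation sketch. Models are dicts mapping an example x to a
    probability forecast; data is a list of (x, y) pairs with y in {0, 1}."""
    f1, f2 = dict(f1), dict(f2)  # work on copies

    def avg(f, region):
        return sum(f[x] for x in region) / len(region)

    for _ in range(rounds):
        done = True
        # Split the disagreement set by sign, so each side is one "event".
        for region in ({x for x, _ in data if f1[x] - f2[x] > eps},
                       {x for x, _ in data if f2[x] - f1[x] > eps}):
            if not region:
                continue
            done = False
            # Empirically observed outcome frequency on this region.
            freq = sum(y for x, y in data if x in region) / len(region)
            # The model whose average forecast lies further from the observed
            # frequency is empirically falsified; patch it to the frequency,
            # which improves its squared error on the region.
            worse = (f1 if abs(avg(f1, region) - freq)
                     >= abs(avg(f2, region) - freq) else f2)
            for x in region:
                worse[x] = freq
        if done:  # models now (almost) agree (almost) everywhere
            break
    return f1, f2
```

On a small example, a sharp model and a constant-0.5 model that disagree everywhere are driven to forecasts within `eps` of each other in a couple of rounds, mirroring the paper's claim that the process terminates quickly in models that nearly agree.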
Related papers
- Reconciling Model Multiplicity for Downstream Decision Making [24.335927243672952]
We show that even when the two predictive models approximately agree on their individual predictions almost everywhere, it is still possible for their induced best-response actions to differ on a substantial portion of the population.
We propose a framework that calibrates the predictive models with regard to both the downstream decision-making problem and the individual probability prediction.
arXiv Detail & Related papers (2024-05-30T03:36:46Z)
- Logic of subjective probability [0.0]
I discuss both syntax and semantics of subjective probability.
Jeffreys's law states that two successful probability forecasters must issue forecasts that are close to each other.
I will discuss connections between subjective and frequentist probability.
arXiv Detail & Related papers (2023-09-03T13:31:40Z)
- Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness [13.894631477590362]
Group fairness is achieved by equalising prediction distributions between protected sub-populations.
In contrast, individual fairness requires treating similar individuals alike.
This procedure may give two similar individuals from the same protected group markedly different classification odds.
arXiv Detail & Related papers (2023-04-19T16:02:00Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in its prediction for the sample.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Combining Predictions under Uncertainty: The Case of Random Decision Trees [2.322689362836168]
A common approach to aggregate classification estimates in an ensemble of decision trees is to either use voting or to average the probabilities for each class.
In this paper, we investigate a number of alternative prediction methods.
Our methods are inspired by the theories of probability, belief functions and reliable classification.
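The two standard aggregation rules mentioned above, majority voting and probability averaging, can yield different ensemble decisions on the same inputs. A minimal sketch (helper names are ours, not the paper's):

```python
from collections import Counter

def majority_vote(prob_lists):
    """Each tree votes for its argmax class; the plurality class wins."""
    votes = Counter(max(range(len(p)), key=p.__getitem__) for p in prob_lists)
    return votes.most_common(1)[0][0]

def average_probs(prob_lists):
    """Average per-class probabilities across trees, then take the argmax."""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n for c in range(n_classes)]
    return max(range(len(avg)), key=avg.__getitem__)
```

With three trees predicting `[0.9, 0.1]`, `[0.4, 0.6]`, `[0.4, 0.6]`, voting picks class 1 (two votes to one) while averaging picks class 0 (mean probabilities 0.57 vs. 0.43), which is why the choice of aggregation rule, and the alternatives the paper investigates, matters.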
arXiv Detail & Related papers (2022-08-15T18:36:57Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Deep Probability Estimation [14.659180336823354]
We investigate probability estimation from high-dimensional data using deep neural networks.
We evaluate existing methods on the synthetic data as well as on three real-world probability estimation tasks.
arXiv Detail & Related papers (2021-11-21T03:55:50Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- A Note on High-Probability versus In-Expectation Guarantees of Generalization Bounds in Machine Learning [95.48744259567837]
Statistical machine learning theory often tries to give generalization guarantees of machine learning models.
Statements made about the performance of machine learning models have to take the sampling process into account.
We show how one may transform one type of statement into the other.
arXiv Detail & Related papers (2020-10-06T09:41:35Z)
- Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop the credal sentential decision diagrams, a generalisation of their probabilistic counterpart that allows for replacing the local probabilities with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
arXiv Detail & Related papers (2020-08-19T16:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.