Risk Measures and Upper Probabilities: Coherence and Stratification
- URL: http://arxiv.org/abs/2206.03183v4
- Date: Mon, 29 Jan 2024 10:01:15 GMT
- Title: Risk Measures and Upper Probabilities: Coherence and Stratification
- Authors: Christian Fröhlich and Robert C. Williamson
- Abstract summary: We look at richer alternatives to classical probability theory as a mathematical foundation for machine learning.
We examine a powerful and rich class of alternative aggregation functionals, known variously as spectral risk measures, Choquet integrals or Lorentz norms.
We empirically demonstrate how this new approach to uncertainty helps tackle practical machine learning problems.
- Score: 7.88657961743755
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning typically presupposes classical probability theory which
implies that aggregation is built upon expectation. There are now multiple
reasons to motivate looking at richer alternatives to classical probability
theory as a mathematical foundation for machine learning. We systematically
examine a powerful and rich class of alternative aggregation functionals, known
variously as spectral risk measures, Choquet integrals or Lorentz norms. We
present a range of characterization results, and demonstrate what makes this
spectral family so special. In doing so we arrive at a natural stratification
of all coherent risk measures in terms of the upper probabilities that they
induce by exploiting results from the theory of rearrangement invariant Banach
spaces. We empirically demonstrate how this new approach to uncertainty helps
tackle practical machine learning problems.
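The spectral family the abstract describes can be illustrated with a minimal discrete sketch: a spectral risk measure weights the sorted losses by a non-increasing spectrum summing to one, with expectation (uniform spectrum) and CVaR (flat spectrum on the worst tail) as special cases. The function names `spectral_risk` and `cvar_spectrum` are illustrative, not from the paper, and the tail-boundary handling is simplified.

```python
# Sketch of a discrete spectral risk measure on a loss sample.
# A spectrum `phi` is a non-increasing weight vector over sorted losses
# (worst first) summing to 1; the measure is the weighted average.

def spectral_risk(losses, phi):
    """Weighted average of losses sorted worst-first; phi must sum to 1
    and be non-increasing for the measure to be coherent."""
    assert abs(sum(phi) - 1.0) < 1e-9
    assert all(a >= b for a, b in zip(phi, phi[1:]))
    ordered = sorted(losses, reverse=True)
    return sum(w, )if False else sum(w * x for w, x in zip(phi, ordered))

def cvar_spectrum(n, alpha):
    """Flat spectrum on the worst round(alpha * n) of n samples, an
    approximation of CVaR_alpha (fractional tail atoms are ignored)."""
    k = max(1, round(alpha * n))
    return [1.0 / k] * k + [0.0] * (n - k)

losses = [1.0, 5.0, 2.0, 9.0, 3.0]
# Expectation is the spectral measure with uniform weights:
mean = spectral_risk(losses, [0.2] * 5)  # 4.0
# CVaR at alpha = 0.4 averages the two worst losses:
tail = spectral_risk(losses, cvar_spectrum(5, 0.4))  # (9 + 5) / 2 = 7.0
```

The non-increasing constraint on the spectrum is what makes the resulting functional coherent; dropping it yields weighted averages that need not be subadditive.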
Related papers
- Decoherence and Probability [0.0]
Non-probabilistic accounts of the emergence of probability via decoherence are unconvincing.
An alternative account of the emergence of probability involves the combination of quasi-probabilistic emergence via a partially interpreted decoherence model.
arXiv Detail & Related papers (2024-10-02T08:16:09Z) - Beyond Expectations: Learning with Stochastic Dominance Made Practical [88.06211893690964]
Stochastic dominance models risk-averse preferences for decision making with uncertain outcomes.
Despite being theoretically appealing, the application of stochastic dominance in machine learning has been scarce.
We first generalize the dominance concept to enable feasible comparisons between any arbitrary pair of random variables.
We then develop a simple and efficient approach for finding the optimal solution in terms of dominance.
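The dominance comparisons this entry generalizes can be grounded in the textbook empirical check for first-order stochastic dominance (this is the classical definition applied to samples, not the paper's relaxed method): X weakly dominates Y when the empirical CDF of X lies at or below that of Y at every pooled sample point.

```python
# Textbook empirical first-order stochastic dominance check.

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at t."""
    return sum(x <= t for x in sample) / len(sample)

def dominates(x_sample, y_sample):
    """True if x_sample first-order dominates y_sample empirically:
    F_X(t) <= F_Y(t) at every pooled sample point."""
    points = sorted(set(x_sample) | set(y_sample))
    return all(ecdf(x_sample, t) <= ecdf(y_sample, t) for t in points)

a = [2, 3, 5, 7]
b = [1, 2, 4, 6]   # each outcome of b is matched or beaten in a
print(dominates(a, b))  # True: a first-order dominates b
print(dominates(b, a))  # False
```

Because the pointwise CDF condition often fails for arbitrary pairs of variables, the paper's contribution of a feasible generalized comparison is what makes dominance usable as a learning objective.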
arXiv Detail & Related papers (2024-02-05T03:21:23Z) - Invariant Causal Set Covering Machines [64.86459157191346]
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
arXiv Detail & Related papers (2023-06-07T20:52:01Z) - Multivariate Systemic Risk Measures and Computation by Deep Learning
Algorithms [63.03966552670014]
We discuss the key related theoretical aspects, with a particular focus on the fairness properties of primal optima and associated risk allocations.
The algorithms we provide allow for learning primals, optima for the dual representation and corresponding fair risk allocations.
arXiv Detail & Related papers (2023-02-02T22:16:49Z) - Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z) - Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics
for Convex Losses in High-Dimension [25.711297863946193]
We develop a theory for the study of fluctuations in an ensemble of generalised linear models trained on different, but correlated, features.
We provide a complete description of the joint distribution of the empirical risk minimiser for generic convex loss and regularisation in the high-dimensional limit.
arXiv Detail & Related papers (2022-01-31T17:44:58Z) - CC-Cert: A Probabilistic Approach to Certify General Robustness of
Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
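The Chernoff-Cramer machinery this entry builds on can be sketched in its simplest Bernoulli form (a textbook tail bound, not the CC-Cert certificate itself; the function names are illustrative):

```python
import math

def kl_bernoulli(a, p):
    """KL divergence between Bernoulli(a) and Bernoulli(p)."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def chernoff_tail_bound(n, p, a):
    """Chernoff-Cramer upper bound on P(mean of n Bernoulli(p) draws >= a),
    valid for 0 < p < a < 1; decays exponentially in n."""
    assert 0 < p < a < 1
    return math.exp(-n * kl_bernoulli(a, p))

# With 100 samples at true failure rate p = 0.05, the probability of
# observing an empirical failure rate of at least 0.2 is exponentially small.
bound = chernoff_tail_bound(100, 0.05, 0.2)
```

The exponential decay in the sample size n is what lets a sampling-based certificate drive the failure probability below any desired threshold.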
arXiv Detail & Related papers (2021-09-22T12:46:04Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New
Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize this goal and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Universal time-series forecasting with mixture predictors [10.812772606528172]
This book is devoted to the problem of sequential probability forecasting, that is, predicting the probabilities of the next outcome of a growing sequence of observations given the past.
The main subject is mixture predictors, which are formed as a combination of a finite or infinite set of other predictors.
Results demonstrate the universality of this method in a very general probabilistic setting, but also show some of its limitations.
arXiv Detail & Related papers (2020-10-01T10:56:23Z) - Logic, Probability and Action: A Situation Calculus Perspective [12.47276164048813]
The unification of logic and probability is a long-standing concern in AI.
We explore recent results pertaining to the integration of logic, probability and actions in the situation calculus.
Results are motivated in the context of cognitive robotics.
arXiv Detail & Related papers (2020-06-17T13:49:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.