Learning Pareto-Efficient Decisions with Confidence
- URL: http://arxiv.org/abs/2110.09864v1
- Date: Tue, 19 Oct 2021 11:32:17 GMT
- Title: Learning Pareto-Efficient Decisions with Confidence
- Authors: Sofia Ek, Dave Zachariah, Petre Stoica
- Abstract summary: The paper considers the problem of multi-objective decision support when outcomes are uncertain.
This enables quantifying trade-offs between decisions in terms of tail outcomes that are relevant in safety-critical applications.
- Score: 21.915057426589748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper considers the problem of multi-objective decision support when
outcomes are uncertain. We extend the concept of Pareto-efficient decisions to
take into account the uncertainty of decision outcomes across varying contexts.
This enables quantifying trade-offs between decisions in terms of tail outcomes
that are relevant in safety-critical applications. We propose a method for
learning efficient decisions with statistical confidence, building on results
from the conformal prediction literature. The method adapts to weak or
nonexistent context covariate overlap and its statistical guarantees are
evaluated using both synthetic and real data.
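The abstract only names the ingredients (per-decision outcome models, conformal calibration, tail bounds), so the following is a minimal illustrative sketch rather than the authors' procedure: it assumes a split-conformal construction with signed-residual scores, fits one regressor per candidate decision and per objective, and returns upper bounds that the true outcome exceeds with probability at most alpha. The helper names (conformal_upper_bound, decision_tail_bounds), the gradient-boosting model, and the neglect of covariate overlap between decision groups are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: split-conformal upper bounds on multi-objective
# outcomes under each candidate decision. Not the authors' implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def conformal_upper_bound(X_fit, y_fit, X_cal, y_cal, X_test, alpha=0.1):
    """With probability >= 1 - alpha (marginally over contexts), the true
    outcome at a test context lies below the returned bound."""
    model = GradientBoostingRegressor().fit(X_fit, y_fit)
    scores = y_cal - model.predict(X_cal)            # signed-residual nonconformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return model.predict(X_test) + q

def decision_tail_bounds(X, D, Y, X_test, decisions, alpha=0.1, seed=0):
    """Upper bounds on every outcome column of Y for each candidate decision.

    Returns an array of shape (len(decisions), n_test, n_objectives).
    """
    rng = np.random.default_rng(seed)
    bounds = np.empty((len(decisions), len(X_test), Y.shape[1]))
    for i, d in enumerate(decisions):
        idx = rng.permutation(np.flatnonzero(D == d))  # data observed under decision d
        fit, cal = idx[: len(idx) // 2], idx[len(idx) // 2:]
        for j in range(Y.shape[1]):                    # one bound per objective
            bounds[i, :, j] = conformal_upper_bound(
                X[fit], Y[fit, j], X[cal], Y[cal, j], X_test, alpha
            )
    return bounds
```

Decisions whose bound vectors are dominated on every objective can then be screened out as inefficient at the chosen confidence level; the paper's actual method additionally adapts to weak or nonexistent context covariate overlap, which this sketch ignores.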
Related papers
- Uncertainty Quantification and Causal Considerations for Off-Policy Decision Making [4.514386953429771]
Off-policy evaluation (OPE) seeks to assess the performance of a new policy using data collected under a different policy.
Existing OPE methodologies suffer from several limitations arising from statistical uncertainty as well as causal considerations.
We introduce the Marginal Ratio (MR) estimator, a novel OPE method that reduces variance by focusing on the marginal distribution of outcomes.
Next, we propose Conformal Off-Policy Prediction (COPP), a principled approach for uncertainty quantification in OPE.
Finally, we address causal unidentifiability in off-policy decision-making by developing novel bounds for sequential decision settings.
arXiv Detail & Related papers (2025-02-09T20:05:19Z) - Decision Making in Changing Environments: Robustness, Query-Based Learning, and Differential Privacy [59.64384863882473]
We study the problem of interactive decision making in which the underlying environment changes over time subject to given constraints.
We propose a framework that characterizes the complexity of decision making in regimes ranging between the stochastic and adversarial settings.
arXiv Detail & Related papers (2025-01-24T21:31:50Z) - Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z) - Measuring Classification Decision Certainty and Doubt [61.13511467941388]
We propose intuitive scores, which we call certainty and doubt, to assess and compare the quality and uncertainty of predictions in (multi-)classification decision machine learning problems.
arXiv Detail & Related papers (2023-03-25T21:31:41Z) - RISE: Robust Individualized Decision Learning with Sensitive Variables [1.5293427903448025]
A naive baseline is to ignore sensitive variables in learning decision rules, leading to significant uncertainty and bias.
We propose a decision learning framework to incorporate sensitive variables during offline training but not include them in the input of the learned decision rule during model deployment.
arXiv Detail & Related papers (2022-11-12T04:31:38Z) - On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
arXiv Detail & Related papers (2022-06-27T06:20:37Z) - Explainability's Gain is Optimality's Loss? -- How Explanations Bias Decision-making [0.0]
Explanations help to facilitate communication between the algorithm and the human decision-maker.
The causal-model semantics of feature-based explanations induce leakage from the decision-maker's prior beliefs.
Such differences can lead to sub-optimal and biased decision outcomes.
arXiv Detail & Related papers (2022-06-17T11:43:42Z) - The Statistical Complexity of Interactive Decision Making [126.04974881555094]
We provide a complexity measure, the Decision-Estimation Coefficient, that is proven to be both necessary and sufficient for sample-efficient interactive learning.
A unified algorithm design principle, Estimation-to-Decisions (E2D), transforms any algorithm for supervised estimation into an online algorithm for decision making.
arXiv Detail & Related papers (2021-12-27T02:53:44Z) - Inverse Active Sensing: Modeling and Understanding Timely Decision-Making [111.07204912245841]
We develop a framework for the general setting of evidence-based decision-making under endogenous, context-dependent time pressure.
We demonstrate how it enables modeling intuitive notions of surprise, suspense, and optimality in decision strategies.
arXiv Detail & Related papers (2020-06-25T02:30:45Z) - Learning Robust Decision Policies from Observational Data [21.05564340986074]
It is of interest to learn robust policies that reduce the risk of outcomes with high costs.
We develop a method for learning policies that reduce tails of the cost distribution at a specified level.
arXiv Detail & Related papers (2020-06-03T16:02:57Z)