Multi-Target Decision Making under Conditions of Severe Uncertainty
- URL: http://arxiv.org/abs/2212.06832v1
- Date: Tue, 13 Dec 2022 11:47:02 GMT
- Title: Multi-Target Decision Making under Conditions of Severe Uncertainty
- Authors: Christoph Jansen, Georg Schollmeyer, Thomas Augustin
- Abstract summary: We show how incomplete preferential and probabilistic information can be exploited to compare decisions among different targets.
We discuss some interesting properties of the proposed orders between decision options and show how they can be concretely computed by linear optimization.
We conclude the paper by demonstrating our framework in the context of comparing algorithms under different performance measures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quality of consequences in a decision making problem under (severe)
uncertainty must often be compared among different targets (goals, objectives)
simultaneously. In addition, the evaluations of a consequence's performance
under the various targets often differ in their scale of measurement,
classically being either purely ordinal or perfectly cardinal. In this paper,
we transfer recent developments from abstract decision theory with incomplete
preferential and probabilistic information to this multi-target setting and
show how -- by exploiting the (potentially) partial cardinal and partial
probabilistic information -- more informative orders for comparing decisions
can be given than the Pareto order. We discuss some interesting properties of
the proposed orders between decision options and show how they can be
concretely computed by linear optimization. We conclude the paper by
demonstrating our framework in an artificial (but quite real-world) example in
the context of comparing algorithms under different performance measures.
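The following minimal sketch is illustrative only and is not the paper's actual construction (the proposed orders exploit partial cardinal and partial probabilistic information in a more general way). It contrasts a plain Pareto check between two hypothetical algorithms scored under three performance measures with a weight-set check that additionally uses a hypothetical piece of partial preferential information ("measure 1 matters at least as much as measure 3") and can be decided by a single linear program. All names, scores, and constraints below are assumptions made for the sketch.

```python
# Illustrative sketch only -- not the paper's construction. Two hypothetical
# algorithms are scored under three performance measures (higher is better).
import numpy as np
from scipy.optimize import linprog

def pareto_dominates(a, b):
    """a is at least as good as b on every measure and strictly better on one."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a >= b) and np.any(a > b))

def dominates_for_all_admissible_weights(a, b, A_ub=None, b_ub=None):
    """Check whether a scores at least as high as b under every non-negative
    weighting of the measures that sums to one and satisfies the extra linear
    constraints (a stand-in for partial preferential information).
    Decided by one linear program: minimise w.(a - b) over that weight set."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    k = d.size
    res = linprog(c=d, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, k)), b_eq=np.array([1.0]),
                  bounds=[(0.0, None)] * k, method="highs")
    return bool(res.success and res.fun >= -1e-9)

alg1 = [0.86, 0.75, 0.60]   # hypothetical scores of algorithm 1
alg2 = [0.80, 0.70, 0.65]   # hypothetical scores of algorithm 2

print(pareto_dominates(alg1, alg2))   # False: alg2 wins on the third measure

# Hypothetical partial information: measure 1 is at least as important as
# measure 3, encoded as w3 - w1 <= 0.
A_ub = np.array([[-1.0, 0.0, 1.0]])
b_ub = np.array([0.0])
print(dominates_for_all_admissible_weights(alg1, alg2, A_ub, b_ub))   # True
```

In this toy example the Pareto check fails because algorithm 2 is better on the third measure, while the weight-constrained check still ranks algorithm 1 at least as good under every admissible weighting, illustrating how partial information can yield a more informative comparison than the Pareto order alone.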
Related papers
- Beyond Predictions: A Participatory Framework for Multi-Stakeholder Decision-Making [3.3044728148521623]
We propose a novel participatory framework that redefines decision-making as a multi-stakeholder optimization problem.
Our framework captures each actor's preferences through context-dependent reward functions.
We introduce a synthetic scoring mechanism that exploits user-defined preferences across multiple metrics to rank decision-making strategies.
arXiv Detail & Related papers (2025-02-12T16:27:40Z)
- Pareto Optimal Algorithmic Recourse in Multi-cost Function [0.44938884406455726]
Algorithmic recourse aims to identify minimal-cost actions that alter an individual's features, thereby obtaining a desired outcome.
Most current recourse mechanisms use gradient-based methods that assume cost functions are differentiable, often not applicable in real-world scenarios.
This work proposes an algorithmic recourse framework that handles nondifferentiable and discrete multi-cost functions.
arXiv Detail & Related papers (2025-02-11T03:16:08Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Output-Constrained Decision Trees [0.0]
This paper introduces new variants of decision trees that can handle not only multi-target output but also the constraints among the targets.
We focus on the customization of conventional decision trees by adjusting the splitting criteria to handle the constraints and obtain feasible predictions.
arXiv Detail & Related papers (2024-05-24T07:54:44Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization (a minimal sketch of a likelihood-ratio confidence sequence appears after this list).
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Best-Effort Adaptation [62.00856290846247]
We present a new theoretical analysis of sample reweighting methods, including bounds holding uniformly over the weights.
We show how these bounds can guide the design of learning algorithms that we discuss in detail.
We report the results of a series of experiments demonstrating the effectiveness of our best-effort adaptation and domain adaptation algorithms.
arXiv Detail & Related papers (2023-05-10T00:09:07Z) - Inferring Preferences from Demonstrations in Multi-objective
Reinforcement Learning: A Dynamic Weight-based Approach [0.0]
In multi-objective decision-making, preference inference is the process of inferring the preferences of a decision-maker for different objectives.
This research proposes a Dynamic Weight-based Preference Inference algorithm that can infer the preferences of agents acting in multi-objective decision-making problems.
arXiv Detail & Related papers (2023-04-27T11:55:07Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the
Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - Robust Design and Evaluation of Predictive Algorithms under Unobserved Confounding [2.8498944632323755]
We propose a unified framework for the robust design and evaluation of predictive algorithms in selectively observed data.
We impose general assumptions on how much the outcome may vary on average between unselected and selected units.
We develop debiased machine learning estimators for the bounds on a large class of predictive performance estimands.
arXiv Detail & Related papers (2022-12-19T20:41:44Z) - Characterizing Fairness Over the Set of Good Models Under Selective
Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - Learning Overlapping Representations for the Estimation of
Individualized Treatment Effects [97.42686600929211]
Estimating the likely outcome of alternatives from observational data is a challenging problem.
We show that algorithms that learn domain-invariant representations of inputs are often inappropriate.
We develop a deep kernel regression algorithm and posterior regularization framework that substantially outperforms the state-of-the-art on a variety of benchmark data sets.
arXiv Detail & Related papers (2020-01-14T12:56:29Z)
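As a side note to the "Likelihood Ratio Confidence Sets for Sequential Decision Making" entry above, the sketch below shows one standard way likelihood ratios yield an anytime-valid confidence sequence, here for a Bernoulli mean with a predictable plug-in estimate. It is an illustrative toy under stated assumptions, not that paper's estimator-selection procedure; the function name `bernoulli_lr_confidence_sequence`, the parameter grid, and the Laplace-smoothed plug-in are choices made only for this sketch.

```python
# Toy, assumption-laden sketch: an anytime-valid confidence sequence for a
# Bernoulli mean built from likelihood ratios against a predictable plug-in.
# Under the true p the ratio process is a nonnegative martingale, so by
# Ville's inequality it exceeds 1/alpha with probability at most alpha;
# every p whose ratio has stayed below 1/alpha is kept in the set.
import numpy as np

def bernoulli_lr_confidence_sequence(xs, alpha=0.05, grid_size=1001):
    grid = np.linspace(1e-4, 1 - 1e-4, grid_size)  # candidate values of p
    log_lr = np.zeros_like(grid)                   # running log likelihood ratios
    worst = np.zeros_like(grid)                    # running maximum (nested sets)
    threshold = np.log(1.0 / alpha)
    n_ones, n_obs = 0, 0
    sets = []
    for x in xs:
        # Laplace-smoothed plug-in estimate computed from *past* data only.
        p_hat = (n_ones + 1.0) / (n_obs + 2.0)
        num = np.log(p_hat) if x == 1 else np.log(1.0 - p_hat)
        den = np.log(grid) if x == 1 else np.log(1.0 - grid)
        log_lr += num - den
        worst = np.maximum(worst, log_lr)
        sets.append(grid[worst < threshold])
        n_ones += int(x)
        n_obs += 1
    return sets

rng = np.random.default_rng(0)
data = rng.binomial(1, 0.3, size=200)
final_set = bernoulli_lr_confidence_sequence(data)[-1]
# Typically a short interval containing the true mean 0.3.
print(final_set.min(), final_set.max())
```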
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.