Optimization's Neglected Normative Commitments
- URL: http://arxiv.org/abs/2305.17465v2
- Date: Fri, 28 Jul 2023 19:33:40 GMT
- Title: Optimization's Neglected Normative Commitments
- Authors: Benjamin Laufer, Thomas Krendl Gilbert, Helen Nissenbaum
- Abstract summary: A paradigm used to approach potentially high-stakes decisions, optimization relies on abstracting the real world to a set of decision(s), objective(s) and constraint(s).
This paper describes the normative choices and assumptions that are necessarily part of using optimization.
It then identifies six emergent problems that may be neglected.
- Score: 3.3388234549922027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimization is offered as an objective approach to resolving complex,
real-world decisions involving uncertainty and conflicting interests. It drives
business strategies as well as public policies and, increasingly, lies at the
heart of sophisticated machine learning systems. A paradigm used to approach
potentially high-stakes decisions, optimization relies on abstracting the real
world to a set of decision(s), objective(s) and constraint(s). Drawing from the
modeling process and a range of actual cases, this paper describes the
normative choices and assumptions that are necessarily part of using
optimization. It then identifies six emergent problems that may be neglected:
1) Misspecified values can yield optimizations that omit certain imperatives
altogether or incorporate them incorrectly as a constraint or as part of the
objective.
2) Problematic decision boundaries can lead to faulty modularity assumptions
and feedback loops.
3) Failing to account for multiple agents' divergent goals and decisions can
lead to policies that serve only certain narrow interests.
4) Mislabeling and mismeasurement can introduce bias and imprecision.
5) Faulty use of relaxation and approximation methods, unaccompanied by formal
characterizations and guarantees, can severely impede applicability.
6) Treating optimization as a justification for action, without specifying the
necessary contextual information, can lead to ethically dubious or faulty
decisions.
Suggestions are given to further understand and curb the harms that can arise
when optimization is used wrongfully.
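To make the abstraction concrete, here is a minimal, hypothetical sketch (toy numbers, scipy's linprog) of a two-activity allocation in which a single normative requirement, labeled "impact", is either omitted, imposed as a hard constraint, or folded into the objective with an arbitrary weight. The three modelling choices produce three different "optimal" decisions, illustrating problem 1 above; none of the data comes from the paper.

```python
# Hypothetical sketch: one real-world situation, three optimization models.
# Decisions x1, x2; a profit objective; a budget constraint; and a normative
# requirement ("impact") that is omitted, hard-constrained, or penalized.
import numpy as np
from scipy.optimize import linprog

profit = np.array([3.0, 1.0])      # profit per unit of activities x1, x2
impact = np.array([1.0, 4.0])      # community impact per unit (made up)
budget = np.array([[2.0, 1.0]])    # resource use per unit
b_budget = np.array([10.0])
bounds = [(0, None), (0, None)]    # x >= 0

# (a) Impact omitted: maximize profit subject to the budget only.
res_a = linprog(c=-profit, A_ub=budget, b_ub=b_budget, bounds=bounds)

# (b) Impact as a hard constraint: require total impact >= 12.
A_ub = np.vstack([budget, -impact.reshape(1, -1)])
res_b = linprog(c=-profit, A_ub=A_ub, b_ub=np.array([10.0, -12.0]), bounds=bounds)

# (c) Impact folded into the objective with an arbitrary trade-off weight.
weight = 0.5
res_c = linprog(c=-(profit + weight * impact), A_ub=budget, b_ub=b_budget,
                bounds=bounds)

for tag, res in [("omitted", res_a), ("constraint", res_b), ("objective", res_c)]:
    print(f"{tag:>10}: x = {np.round(res.x, 2)}, profit = {float(profit @ res.x):.1f}")
```

Running the sketch gives x = (5, 0), (4, 2) and (0, 10) respectively: three different "optimal" answers to what is nominally the same decision.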
Related papers
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
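As a rough sketch of the kind of estimator summarized above, the snippet below computes a baseline-corrected inverse propensity scoring (IPS) estimate on synthetic logged data. Subtracting a constant baseline and adding it back keeps the estimate unbiased, since importance weights average to one under the logging policy, while changing its variance; the baseline used here (the mean logged reward) is an arbitrary illustrative choice, not the variance-optimal closed form derived in the paper.

```python
# Sketch of a baseline-corrected IPS estimator on toy logged bandit data.
import numpy as np

def ips_with_baseline(rewards, target_probs, logging_probs, baseline=0.0):
    """Off-policy value estimate via importance weighting with a constant baseline."""
    w = target_probs / logging_probs            # pi_target(a|x) / pi_logging(a|x)
    return float(np.mean(w * (rewards - baseline)) + baseline)

# Toy logged data (illustrative only, not a faithful bandit simulation).
rng = np.random.default_rng(0)
n = 10_000
logging_probs = np.full(n, 0.5)                 # logging policy's prob. of the logged action
target_probs = rng.choice([0.2, 0.8], size=n)   # target policy's prob. of the same action
rewards = rng.binomial(1, 0.5, size=n).astype(float)

plain = ips_with_baseline(rewards, target_probs, logging_probs, baseline=0.0)
corrected = ips_with_baseline(rewards, target_probs, logging_probs,
                              baseline=float(rewards.mean()))
print(f"plain IPS: {plain:.3f}   baseline-corrected IPS: {corrected:.3f}")
```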
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
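For readers unfamiliar with OWA, the minimal sketch below shows the aggregation itself, assuming larger objective values are better: non-increasing weights are applied to the objective values sorted from worst to best, so maximizing OWA favors balanced outcomes over lopsided ones with the same plain average. The weight vector and numbers are illustrative assumptions, not taken from the paper.

```python
# Sketch of Ordered Weighted Averaging (OWA) with fairness-oriented weights:
# the largest weight is applied to the worst objective value.
import numpy as np

def owa(values, weights):
    """OWA aggregation, assuming larger objective values are better."""
    v = np.sort(np.asarray(values, dtype=float))   # ascending: worst objective first
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and np.all(np.diff(w) <= 0)
    return float(w @ v)

weights = [0.5, 0.3, 0.2]            # illustrative choice emphasizing the worst objective
balanced = [6.0, 6.0, 6.0]
lopsided = [1.0, 8.0, 9.0]           # same plain average as `balanced`
print(np.mean(balanced), np.mean(lopsided))            # 6.0 6.0
print(owa(balanced, weights), owa(lopsided, weights))  # 6.0 4.7
```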
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
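A minimal sketch of that contrast on synthetic data: the two-stage route predicts the unknown utilities and then optimizes (here, a simple argmax over candidate decisions), while the proxy route trains a classifier that maps features directly to the optimal decision. The data-generating process and model choices are assumptions made for illustration, not the paper's architecture.

```python
# Sketch contrasting two-stage predict-then-optimize with direct solution
# prediction ("by proxy") on synthetic data; models and data are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 2000, 5, 3                          # samples, features, candidate decisions
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, k))
utilities = X @ W + 0.5 * rng.normal(size=(n, k))   # unknown problem parameters
best = utilities.argmax(axis=1)                      # optimal decision per instance

split = 1500
X_tr, X_te = X[:split], X[split:]

# (1) Two-stage: predict the utilities from features, then optimize (argmax).
reg = LinearRegression().fit(X_tr, utilities[:split])
two_stage = reg.predict(X_te).argmax(axis=1)

# (2) By proxy: learn the feature -> optimal-decision mapping directly.
clf = LogisticRegression(max_iter=1000).fit(X_tr, best[:split])
direct = clf.predict(X_te)

print("two-stage decision accuracy:", (two_stage == best[split:]).mean())
print("proxy decision accuracy:   ", (direct == best[split:]).mean())
```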
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of strategies for handling such uncertainty, determines their range of application, and provides a framework for employing them more effectively.
arXiv Detail & Related papers (2023-01-18T19:16:23Z)
- A Framework for Inherently Interpretable Optimization Models [0.0]
Solving large-scale problems that seemed intractable decades ago is now a routine task.
One major barrier is that the optimization software can be perceived as a black box.
We propose an optimization framework to derive solutions that inherently come with an easily comprehensible explanatory rule.
arXiv Detail & Related papers (2022-08-26T10:32:00Z)
- An Approach to Ordering Objectives and Pareto Efficient Solutions [0.0]
Solutions to multi-objective optimization problems can generally not be compared or ordered.
Decision-makers are often made to believe that scaled objectives can be compared.
We present a method that uses the probability integral transform in order to map the objectives of a problem into scores that all share the same range.
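A minimal sketch of that idea, using an empirical CDF over a candidate set as an illustrative stand-in for each objective's distribution: the probability integral transform places every objective on the same (0, 1] scale, so scores of different objectives can be compared. The objectives and candidate values below are hypothetical.

```python
# Sketch of mapping objectives on different scales to comparable (0, 1] scores
# via the probability integral transform (empirical CDF used as a stand-in).
import numpy as np

def pit_scores(objective_values):
    """Map raw objective values to (0, 1] via the empirical CDF (rank / n)."""
    v = np.asarray(objective_values, dtype=float)
    ranks = v.argsort().argsort() + 1      # rank 1 = smallest value
    return ranks / len(v)

# Two objectives on very different scales, evaluated on five candidate solutions.
cost_usd = np.array([120.0, 90.0, 300.0, 150.0, 100.0])
emissions_kg = np.array([2.1, 5.0, 0.8, 1.5, 3.3])

print(pit_scores(cost_usd))      # [0.6 0.2 1.  0.8 0.4]
print(pit_scores(emissions_kg))  # [0.6 1.  0.2 0.4 0.8]
```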
arXiv Detail & Related papers (2022-05-30T17:55:53Z)
- Off-Policy Evaluation with Policy-Dependent Optimization Response [90.28758112893054]
We develop a new framework for off-policy evaluation with a policy-dependent linear optimization response.
We construct unbiased estimators for the policy-dependent estimand by a perturbation method.
We provide a general algorithm for optimizing causal interventions.
arXiv Detail & Related papers (2022-02-25T20:25:37Z)
- Bayesian Persuasion for Algorithmic Recourse [28.586165301962485]
In some situations, the underlying predictive model is deliberately kept secret to avoid gaming.
This opacity forces the decision subjects to rely on incomplete information when making strategic feature modifications.
We capture such settings as a game of Bayesian persuasion, in which the decision-maker sends a signal, e.g., an action recommendation, to a decision subject to incentivize them to take desirable actions.
arXiv Detail & Related papers (2021-12-12T17:18:54Z)
- Goal Seeking Quadratic Unconstrained Binary Optimization [0.5439020425819]
We present two variants of goal-seeking QUBO that minimize the deviation from the goal through a tabu-search based greedy one-flip.
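As a generic illustration of the goal-seeking idea (not the paper's algorithm), the sketch below minimizes the deviation |x^T Q x - goal| over binary x using a greedy one-flip move restricted by a short tabu list; the matrix, goal value and tabu tenure are arbitrary assumptions.

```python
# Sketch of a goal-seeking QUBO heuristic: greedy one-flip moves with a short
# tabu list, minimizing |x^T Q x - goal| rather than x^T Q x itself.
import numpy as np

def goal_seeking_one_flip(Q, goal, n_iters=200, tabu_tenure=5, seed=0):
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, size=n)
    best_x, best_dev = x.copy(), abs(int(x @ Q @ x) - goal)
    tabu = {}                                    # variable index -> iteration it becomes free
    for it in range(n_iters):
        # Deviation obtained by flipping each non-tabu variable.
        candidates = []
        for i in range(n):
            if tabu.get(i, -1) > it:
                continue
            y = x.copy()
            y[i] ^= 1
            candidates.append((abs(int(y @ Q @ y) - goal), i))
        if not candidates:                       # every variable is tabu this iteration
            continue
        dev, i = min(candidates)                 # greedy: best admissible one-flip move
        x[i] ^= 1
        tabu[i] = it + tabu_tenure               # discourage flipping i straight back
        if dev < best_dev:
            best_x, best_dev = x.copy(), dev
    return best_x, best_dev

rng = np.random.default_rng(1)
Q = rng.integers(-5, 6, size=(12, 12))
Q = (Q + Q.T) // 2                               # symmetric integer QUBO matrix (made up)
x, dev = goal_seeking_one_flip(Q, goal=10)
print("best deviation from goal:", dev)
```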
arXiv Detail & Related papers (2021-03-24T03:03:13Z)
- Inverse Active Sensing: Modeling and Understanding Timely Decision-Making [111.07204912245841]
We develop a framework for the general setting of evidence-based decision-making under endogenous, context-dependent time pressure.
We demonstrate how it enables modeling intuitive notions of surprise, suspense, and optimality in decision strategies.
arXiv Detail & Related papers (2020-06-25T02:30:45Z)
- Decisions, Counterfactual Explanations and Strategic Behavior [16.980621769406923]
We find policies and counterfactual explanations that are optimal in terms of utility in a strategic setting.
We show that, given a pre-defined policy, the problem of finding the optimal set of counterfactual explanations is NP-hard.
We demonstrate that, by incorporating a matroid constraint into the problem formulation, we can increase the diversity of the optimal set of counterfactual explanations.
arXiv Detail & Related papers (2020-02-11T12:04:41Z)