Balancing Competing Objectives with Noisy Data: Score-Based Classifiers
for Welfare-Aware Machine Learning
- URL: http://arxiv.org/abs/2003.06740v4
- Date: Thu, 16 Jul 2020 03:57:38 GMT
- Authors: Esther Rolf and Max Simchowitz and Sarah Dean and Lydia T. Liu and
Daniel Björkegren and Moritz Hardt and Joshua Blumenstock
- Abstract summary: We study algorithmic policies which explicitly trade off between a private objective (such as profit) and a public objective (such as social welfare).
Our results shed light on inherent trade-offs in using machine learning for decisions that impact social welfare.
- Score: 43.518329314620416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While real-world decisions involve many competing objectives, algorithmic
decisions are often evaluated with a single objective function. In this paper,
we study algorithmic policies which explicitly trade off between a private
objective (such as profit) and a public objective (such as social welfare). We
analyze a natural class of policies which trace an empirical Pareto frontier
based on learned scores, and focus on how such decisions can be made in noisy
or data-limited regimes. Our theoretical results characterize the optimal
strategies in this class, bound the Pareto errors due to inaccuracies in the
scores, and show an equivalence between optimal strategies and a rich class of
fairness-constrained profit-maximizing policies. We then present empirical
results in two different contexts -- online content recommendation and
sustainable abalone fisheries -- to underscore the applicability of our
approach to a wide range of practical decisions. Taken together, these results
shed light on inherent trade-offs in using machine learning for decisions that
impact social welfare.
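The score-based policies the abstract describes can be illustrated with a minimal sketch: rank candidates by a convex combination of two learned scores (a private one and a public one) and sweep the trade-off weight to trace an empirical Pareto frontier. The variable names, the synthetic scores, and the fixed selection budget below are illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np

# Synthetic learned scores for illustration only (the paper assumes these
# come from trained models and analyzes the effect of their noise).
rng = np.random.default_rng(0)
n = 200
profit_scores = rng.normal(size=n)   # private objective, e.g. profit
welfare_scores = rng.normal(size=n)  # public objective, e.g. social welfare
budget = 50                          # select a fixed number of individuals

def select_by_weight(alpha):
    """Score-based policy: pick the top-`budget` individuals ranked by
    alpha * private_score + (1 - alpha) * public_score."""
    combined = alpha * profit_scores + (1 - alpha) * welfare_scores
    chosen = np.argsort(combined)[-budget:]
    return profit_scores[chosen].sum(), welfare_scores[chosen].sum()

# Sweep the trade-off weight to trace an empirical Pareto frontier:
# each point is (total profit, total welfare) achieved by one policy.
frontier = [select_by_weight(a) for a in np.linspace(0.0, 1.0, 11)]
```

By construction, the endpoint `alpha = 1` maximizes total profit among all selections of this size and `alpha = 0` maximizes total welfare; intermediate weights trade one off against the other, which is the frontier the paper's error bounds are stated over.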
Related papers
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z) - Non-linear Welfare-Aware Strategic Learning [10.448052192725168]
This paper studies algorithmic decision-making in the presence of strategic individual behaviors.
We first generalize the agent best response model in previous works to the non-linear setting.
We show that the three welfare objectives can attain their optima simultaneously only under restrictive conditions.
arXiv Detail & Related papers (2024-05-03T01:50:03Z) - Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z) - Optimizing Credit Limit Adjustments Under Adversarial Goals Using
Reinforcement Learning [42.303733194571905]
We seek to find and automate an optimal credit card limit adjustment policy by employing reinforcement learning techniques.
Our research establishes a conceptual structure for applying a reinforcement learning framework to credit limit adjustment.
arXiv Detail & Related papers (2023-06-27T16:10:36Z) - Fair Off-Policy Learning from Observational Data [30.77874108094485]
We propose a novel framework for fair off-policy learning.
We first formalize different fairness notions for off-policy learning.
We then propose a neural network-based framework to learn optimal policies under different fairness notions.
arXiv Detail & Related papers (2023-03-15T10:47:48Z) - Reinforcement Learning with Stepwise Fairness Constraints [50.538878453547966]
We introduce the study of reinforcement learning with stepwise fairness constraints.
We provide learning algorithms with strong theoretical guarantees in regard to policy optimality and fairness violation.
arXiv Detail & Related papers (2022-11-08T04:06:23Z) - Off-Policy Evaluation with Policy-Dependent Optimization Response [90.28758112893054]
We develop a new framework for off-policy evaluation with a policy-dependent linear optimization response.
We construct unbiased estimators for the policy-dependent estimand by a perturbation method.
We provide a general algorithm for optimizing causal interventions.
arXiv Detail & Related papers (2022-02-25T20:25:37Z) - Coping with Mistreatment in Fair Algorithms [1.2183405753834557]
We study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric.
We propose a conceptually simple method to mitigate this bias.
We rigorously analyze the proposed method and evaluate it on several real world datasets demonstrating its efficacy.
arXiv Detail & Related papers (2021-02-22T03:26:06Z) - Decentralized Reinforcement Learning: Global Decision-Making via Local
Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.